evaluate_tournament
- negmas.tournaments.evaluate_tournament(tournament_path, scores=None, stats=None, world_stats=None, type_stats=None, agent_stats=None, metric='mean', verbose=False, recursive=True, extra_scores_to_use=None, compile=True)[source]
Evaluates the results of a tournament
- Parameters:
  - tournament_path (str | Path | None) – Path to save the results to. If scores is not given, it is also used as the source of scores. Pass None to avoid saving the results to disk.
  - scores (DataFrame | None) – Optionally the scores of all agents in all world runs. If not given, they will be read from the file scores.csv in tournament_path.
  - stats (DataFrame | None) – Optionally the stats of all world runs. If not given, they will be read from the file stats.csv in tournament_path.
  - world_stats (DataFrame | None) – Optionally the aggregate stats collected in WorldSetRunStats for each world set.
  - type_stats (DataFrame | None) – Optionally the aggregate stats collected in AgentStats for each agent type.
  - agent_stats (DataFrame | None) – Optionally the aggregate stats collected in AgentStats for each agent instance.
  - metric (Union[str, Callable[[DataFrame], float]]) – The metric used for evaluation. Possibilities are: mean, median, std, var, sum, truncated_mean, or a callable that receives a pandas DataFrame and returns a float (see the sketch after this list).
  - verbose (bool) – If true, the winners will be printed.
  - recursive (bool) – If true, ALL scores.csv files in all subdirectories of the given tournament_path will be combined.
  - extra_scores_to_use (str | None) – The type of extra-scores to use. If None, normal scores will be used. Only effective if scores is None.
  - compile (bool) – Takes effect only if tournament_path is not None. If true, the results will be recompiled from individual world results. This is accurate but slow. If false, it will be assumed that all results are already compiled.
  - independent_test – True if you want an independent t-test.
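When a callable is passed as metric, it receives a pandas DataFrame and must return a float. Below is a minimal sketch of such a callable; note that the "score" column name is an assumption about the layout of the frame passed in, not something stated in this documentation:

```python
import pandas as pd


def trimmed_mean(df: pd.DataFrame) -> float:
    """Mean score after dropping the top and bottom 10% of values.

    The "score" column name is an assumption about the scores frame's
    schema; adjust it to match the actual columns negmas produces.
    """
    s = df["score"].sort_values()
    k = int(len(s) * 0.1)  # number of rows to trim from each tail
    if len(s) > 2 * k:
        s = s.iloc[k : len(s) - k]
    return float(s.mean())
```

Such a callable can then be passed directly, e.g. metric=trimmed_mean, in place of one of the named metrics.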
- Returns: The evaluation results of the tournament.
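A minimal end-to-end sketch of calling this function; the tournament directory path is hypothetical and assumes a tournament was previously run and saved there:

```python
from pathlib import Path

from negmas.tournaments import evaluate_tournament

# Hypothetical directory in which a finished tournament saved its results.
tournament_path = Path("~/negmas/tournaments/my_tournament").expanduser()

results = evaluate_tournament(
    tournament_path,
    metric="median",  # rank by median score instead of the default mean
    verbose=True,     # print the winners
    recursive=True,   # combine every scores.csv found under the path
)
```

Because compile defaults to True, the scores are recompiled from the individual world results on every call; pass compile=False to reuse already-compiled results, trading that accuracy for speed.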