evaluate_tournament
- negmas.tournaments.evaluate_tournament(tournament_path, scores=None, stats=None, world_stats=None, type_stats=None, agent_stats=None, metric='mean', verbose=False, recursive=True, extra_scores_to_use=None, compile=True)[source]
Evaluates the results of a tournament
- Parameters:
  - tournament_path (str | Path | None) – Path to save the results to. If scores is not given, it is also used as the source of scores. Pass None to avoid saving the results to disk.
  - scores (Optional[DataFrame]) – Optionally the scores of all agents in all world runs. If not given, they will be read from the file scores.csv in tournament_path.
  - stats (Optional[DataFrame]) – Optionally the stats of all world runs. If not given, they will be read from the file stats.csv in tournament_path.
  - world_stats (Optional[DataFrame]) – Optionally the aggregate stats collected in WorldSetRunStats for each world set.
  - type_stats (Optional[DataFrame]) – Optionally the aggregate stats collected in AgentStats for each agent type.
  - agent_stats (Optional[DataFrame]) – Optionally the aggregate stats collected in AgentStats for each agent instance.
  - metric (str | Callable[[DataFrame], float]) – The metric used for evaluation. Possibilities are: mean, median, std, var, sum, truncated_mean, or a callable that receives a pandas DataFrame and returns a float.
  - verbose (bool) – If true, the winners will be printed.
  - recursive (bool) – If true, ALL scores.csv files in all subdirectories of the given tournament_path will be combined.
  - extra_scores_to_use (Optional[str]) – The type of extra-scores to use. If None, normal scores will be used. Only effective if scores is None.
  - compile (bool) – Takes effect only if tournament_path is not None. If true, the results will be recompiled from individual world results. This is accurate but slow. If false, it is assumed that all results are already compiled.
  - independent_test – True if you want an independent t-test.
- Return type:
- Returns:
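As noted above, the `metric` parameter accepts either a named statistic or any callable that takes a pandas DataFrame and returns a float. A minimal sketch of such a custom metric is shown below; it assumes the scores frame has a `score` column (an assumption, not guaranteed by this page) and computes an interquartile mean as a robust alternative to the plain mean:

```python
import pandas as pd


def iqm(df: pd.DataFrame) -> float:
    """Interquartile mean of per-run scores.

    Hypothetical custom metric for evaluate_tournament's `metric`
    parameter: drops the lowest and highest quartiles, then averages
    the rest. Assumes the frame has a "score" column.
    """
    s = df["score"].sort_values()
    n = len(s)
    lo, hi = n // 4, n - n // 4  # indices bounding the middle 50%
    return float(s.iloc[lo:hi].mean())


# Usage sketch (path and column name are placeholders):
# from negmas.tournaments import evaluate_tournament
# results = evaluate_tournament("path/to/tournament", metric=iqm)
```

Passing a callable this way lets you rank agents by any aggregate of the per-run scores rather than the built-in mean/median/etc. names.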