Caption: Two big (thin) crosses appear on the graphs when at least one trial was unsuccessful. They depict the median over the considered functions of +) the larger of the two 90th percentiles of runtimes from all successful and unsuccessful single trials of all instances, and of x) the 90th percentile of ERT_1 = sum(runtimes) / max(1, #successes) computed for each instance. Usually +) < x), and the latter cross may be missing. A small dot indicates the last step of the step function. Roughly speaking, the runtimes between +) and x) are generated by bootstrapping the data. Data beyond x) should be interpreted with great care (or not at all), and likewise beyond +) if the termination related to +) was not induced by the algorithm but imposed by the user.
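To make the x) cross concrete, here is a minimal sketch of the ERT_1 formula from the caption and of placing the cross at the 90th percentile of the per-instance values. The data layout and names (`per_instance`, `ert_1`) are hypothetical illustrations, not COCO's actual code:

```python
import numpy as np

def ert_1(runtimes, successes):
    """ERT_1 for one instance, as in the caption:
    sum of runtimes over the number of successes, floored at 1."""
    return np.sum(runtimes) / max(1, np.sum(successes))

# Hypothetical per-instance data: runtimes (e.g. function evaluations)
# and success flags for the repetitions within each instance.
rng = np.random.default_rng(0)
per_instance = [
    (rng.integers(100, 1000, size=5), rng.random(5) < 0.6)
    for _ in range(10)
]
ert_values = [ert_1(rt, ok) for rt, ok in per_instance]

# The x) cross sits at the 90th percentile of the per-instance ERT_1 values.
x_cross = np.percentile(ert_values, 90)
print(x_cross)
```

Note that with zero successes the denominator is 1, so ERT_1 degrades to the total spent budget rather than dividing by zero.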
Question: if all instances were run once with the same budget and all were unsuccessful, shouldn't both crosses be at the same place? Yet it seems they are not!? EDIT: fixed.
Feature request: The x cross is only relevant when instances_are_uniform. Hence we should probably omit this cross otherwise (like for bbob-biobj)? EDIT: the cross considers only within-instance repetitions, that is, it does not assume uniformity of different instances. However, in the multiobjective case, also instance repetitions seem not meaningful to generate performance data via bootstrapping. They should rather be considered as experiment repetitions.
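The bootstrapping mentioned above amounts to simulated restarts: draw trials with replacement, accumulating their runtimes, until a successful trial is drawn. The following is a sketch of that idea only, under assumed names and a hypothetical draw cap, not COCO's actual implementation:

```python
import random

def simulated_restart_runtime(runtimes, successes, rng, max_draws=10_000):
    """One bootstrapped runtime via simulated restarts: repeatedly pick a
    trial uniformly with replacement, add its runtime, and stop at the
    first successful draw.  Returns inf if no success occurs within the
    (arbitrary) draw cap."""
    total = 0
    for _ in range(max_draws):
        i = rng.randrange(len(runtimes))
        total += runtimes[i]
        if successes[i]:
            return total
    return float("inf")

# A bootstrap distribution is then just many such draws:
rng = random.Random(1)
sample = [simulated_restart_runtime([3, 7], [True, False], rng)
          for _ in range(1000)]
```

Per the EDIT above, only within-instance repetitions should feed this resampling; in the multiobjective case such repetitions are better treated as experiment repetitions than as restart data.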
The above is a tentative caption for the new crosses, in better alignment with the new experimental restarts setup.