During our first COCO sprint, we discussed the idea of a general long-term storage format for benchmarking data (an idea put forward by the IOH Profiler people together with @olafmersmann). A first suggestion from our side is the following.
For each experiment (with a concrete experiment id or timestamp or ...), we store its metadata (things that stay constant over the entire experiment) in a metadata table like the one sketched below.
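For illustration only, a minimal sketch of what one row of such a metadata table might hold; every column name below is an assumption made for this sketch, not a column agreed upon at the sprint:

```python
# Hypothetical metadata row: one record per experiment; all column
# names here are illustrative guesses, not an agreed-upon schema.
import csv

metadata = {
    "experiment_id": "2023-06-12_cmaes_bbob",  # concrete id or timestamp
    "suite": "bbob",            # benchmark suite the problems come from
    "algorithm": "CMA-ES",      # optimizer under test
    "implementation": "pycma",  # stays constant over the entire experiment
    "n_objectives": 1,          # single- vs. multiobjective
}

with open("metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=metadata.keys())
    writer.writeheader()
    writer.writerow(metadata)
```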
To be discussed: which entries are mandatory and which are optional.
For each experiment, we can then store the single evaluations (or a subset thereof) in a big table like the one sketched below.
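Again purely as an illustration, a few rows of what such an evaluation table could look like; the column names and their order are assumptions for this sketch (they mirror the columns discussed in the list below):

```python
# Hypothetical evaluation log: one tuple per recorded evaluation.
# Columns (assumed): problem, instance, #funevals, indicator value,
# target reached, and an optional solution x.
evaluations = [
    ("f1", 1,  1, 7.3e+1, False, (0.17, -2.04)),
    ("f1", 1,  5, 5.9e+0, False, (0.42, -1.13)),
    ("f1", 1, 17, 9.8e-9, True,  (0.00,  0.00)),  # #funevals only grows
]
```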
Our ideas behind all this are:
- All columns until (and including) `indicator value` look like they have to be mandatory (at least for most experiments, and certainly for all COCO data produced so far).
- All other columns are non-mandatory and could differ between experiment tables (which are then no longer compatible with each other).
- The `#funevals` column is rather an "effort spent" column and must be a monotonically increasing function, for example, in the case of constrained problems, the number of combined f- and g-evaluations. This also means that it might, in some cases, contain vectors, such as the number of calls to each individual objective function if they are callable independently (and need, for example, different times to evaluate); see the sketch after this list.
- The `indicator value` column contains the objective function to be optimized, such as the best-so-far f-value in the unconstrained, single-objective case, a quality indicator in the multiobjective case, the Lagrangian in the constrained case, ...
- The `target reached` column seems a nice-to-have in the COCO context, even if we don't write these data ourselves right now (it should be easy to reconstruct because the targets are fixed in our case; see the sketch after this list).
- Entries in the same experiment table should be, in principle, comparable with each other.

Note that this is a first draft and will hopefully be extended here.
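A minimal sketch of the bookkeeping implied by the `#funevals` and `target reached` bullets above, under the assumption that objectives (or f and g) can be called independently; the class and helper names are hypothetical:

```python
# Hypothetical "effort spent" counter for the `#funevals` column.
class EffortCounter:
    """Monotonically increasing effort count; a vector when several
    objectives (or f and g) are evaluated independently."""

    def __init__(self, n_callables=1):
        self.calls = [0] * n_callables  # one counter per callable

    def record_call(self, index=0):
        self.calls[index] += 1  # counters never decrease

    @property
    def funevals(self):
        # Scalar for a single callable, a vector otherwise.
        return self.calls[0] if len(self.calls) == 1 else tuple(self.calls)

counter = EffortCounter(n_callables=2)  # e.g., f- and g-evaluations
counter.record_call(0)
counter.record_call(0)
counter.record_call(1)
assert counter.funevals == (2, 1)

# Reconstructing `target reached` from fixed targets (as for COCO data):
targets = [1e-1, 1e-4, 1e-8]
indicator_value = 9.8e-9
reached = [indicator_value <= t for t in targets]  # [True, True, True]
```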