Following up on #1117 and #1978 to account for functions with few or zero successes (see also #2117).
A better approach may be to check the success rates in the previous data and run a complementary repetition subexperiment which uses only those function & dimension pairs where the number of successes is below some threshold (like 7, 10, or 15).
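A minimal sketch of this selection step, assuming a hypothetical `success_counts` dictionary that maps each (function, dimension) pair to the number of successful trials observed in the previous data (none of these names are part of the COCO API):

```python
# Select the (function, dimension) pairs that need complementary
# repetitions: those whose success count in the previous data falls
# below a chosen threshold. All names here are illustrative.
def pairs_to_repeat(success_counts, threshold=10):
    """success_counts: dict mapping (function, dimension) -> int."""
    return sorted(pair for pair, n in success_counts.items() if n < threshold)

# Example: only (f2, 20) and (f5, 40) fall below the threshold of 10.
counts = {("f1", 10): 15, ("f2", 20): 3, ("f5", 40): 0}
print(pairs_to_repeat(counts))  # [('f2', 20), ('f5', 40)]
```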
Caveat: when complementing experiments that used within-trial independent restarts, the budget limit (timeout budget) of the previous experiment remains as an artifact in the data. That is, complementary repetitions are better used as an alternative to independent restarts within a single run (and with the same timeout budget as the original experiment).
Caveat: using all-zeros as the initial solution of the first trial now breaks the experimental procedure because not all trials are conducted identically. A possible remedy could be to use all-zeros with some probability. Having all-zeros always in the first trial/instance improves reproducibility (though what is reproduced may itself be an artifact).
When running several batches while adding trials based on the success count, all trials for a single function+dimension pair must be accessible to the batch where this function+dimension is run, and all instances must be run within one batch. Otherwise, we cannot guarantee the same number of repetitions for each instance. Unfortunately, this makes a uniform time distribution between batches much more difficult (though it makes the data organization less confusing).
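The batching constraint above can be sketched as a greedy assignment that never splits a (function, dimension) group across batches while trying to balance estimated time per batch; the group names and time estimates are illustrative, not COCO code:

```python
import heapq

# Assign whole (function, dimension) groups to batches, never splitting
# a group, greedily placing the heaviest group into the currently
# lightest batch. This balances time only approximately, which reflects
# why a uniform time distribution becomes harder under this constraint.
def assign_batches(groups, n_batches):
    """groups: dict mapping (function, dimension) -> estimated time."""
    heap = [(0.0, i) for i in range(n_batches)]  # (accumulated time, batch)
    batches = [[] for _ in range(n_batches)]
    for pair, t in sorted(groups.items(), key=lambda kv: -kv[1]):
        total, i = heapq.heappop(heap)
        batches[i].append(pair)
        heapq.heappush(heap, (total + t, i))
    return batches
```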
For the time being we don't complement previous data, but instead add trials depending on the observed successes in the current experiment until the budget is exhausted. As we can combine data from different experiments, ignoring previous data for this choice should not be a significant limitation.
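An illustrative sketch of this stopping rule (not the actual COCO implementation): trials are added until either enough successes have been observed or the evaluation budget runs out. Both `run_trial` and the return convention are hypothetical.

```python
# Keep adding trials on one (function, dimension) pair until we have
# enough successes or the overall evaluation budget is exhausted.
def run_until_enough_successes(run_trial, budget, min_successes=10,
                               max_trials=100):
    """run_trial() -> (success: bool, evals_used: int); both hypothetical."""
    successes = trials = evals = 0
    while (successes < min_successes and trials < max_trials
           and evals < budget):
        success, used = run_trial()
        successes += success
        trials += 1
        evals += used
    return successes, trials, evals
```

Note that the budget check happens before each trial, so the last trial may overshoot the budget slightly, mirroring how a timeout budget leaves an artifact in the data.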