ES optimizers have a reputation for being slow, so libcmaes tries to yield the best performance both in terms of the computational cost of the optimization step and the number of function evaluations.
This ticket is about benchmarking libcmaes against other existing implementations. My current list contains:

It will be possible to add others as needed.
Results and details are below. The main takeaway is that libcmaes beats all other tested implementations above 10-D and equals them in lower dimensions. Shark comes last and the C version sits in between.
====Settings====
Tested Shark, C version and libcmaes
Tested CMA-ES, active CMA-ES (libcmaes only) and VD-CMA (libcmaes and Shark)
All tests are on Rosenbrock, in dimensions 2, 3, 5, 10, 20, 40, 50, 100 and sometimes 200 (VD-CMA)
All algorithms were run 10 times in every dimension; results are the mean values, and runs which failed to find the optimum were removed from the set
All software was compiled with g++ and the -O4 optimization flag for best performance
Shark didn't offer useful stopping criteria, so I added a stagnation criterion in order to detect failures on Rosenbrock. The lack of stopping criteria may make Shark's performance look slightly better in comparison to the C version and libcmaes. Shark's VD-CMA was buggy, and I reported it to them so they could provide a fix before the comparison
The C version was modified to remove all file reads/writes from within the loop, and the eigenvalue decomposition was forced at every iteration in order to compare raw computing performance (@nikohansen I wonder how generic your 'return on time percentage' trick is, so that a similar one could possibly be integrated into libcmaes)
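For reference, here is a rough, hypothetical sketch of what such a time-percentage gate could look like; the class and member names are mine and do not come from the C code or libcmaes, and the actual mechanism in the C implementation may well differ:

```cpp
// Hypothetical sketch of a "time percentage" gate: redo the O(n^3)
// eigendecomposition only while its cumulative cost stays below a fixed
// fraction of total optimization time. Names are illustrative only.
#include <algorithm>
#include <chrono>

class EigenTimeBudget
{
public:
  explicit EigenTimeBudget(double max_fraction = 0.2)
    : _max_fraction(max_fraction) {}

  // decide whether another decomposition fits in the budget.
  bool should_update(double elapsed_total_s) const
  {
    if (_eigen_time_s == 0.0)
      return true; // always allow the first decomposition
    return _eigen_time_s / std::max(elapsed_total_s, 1e-12) < _max_fraction;
  }

  // run and account for one decomposition (e.g. a call into Eigen).
  template <class Fn>
  void run_update(Fn &&decompose)
  {
    auto t0 = std::chrono::steady_clock::now();
    decompose();
    auto t1 = std::chrono::steady_clock::now();
    _eigen_time_s += std::chrono::duration<double>(t1 - t0).count();
  }

private:
  double _max_fraction;
  double _eigen_time_s = 0.0;
};
```

In the benchmark above this kind of gate is effectively bypassed, since the decomposition was forced at every iteration.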
====Results====
These are the mean values over 10 runs, restricted to each algorithm's successful runs only.
Note: the reported times use the system's clock and can be slightly affected by operating system activity. In other words, the variance is higher on very short (~20 millisecond) results.
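For completeness, a minimal sketch of the timing protocol (not the actual benchmark driver): Rosenbrock in one dimension, 10 runs, wall-clock time per run, mean over successful runs only. The libcmaes calls follow the library's published sample code; the success threshold and other constants here are illustrative:

```cpp
// Minimal sketch of the timing protocol, assuming the public libcmaes sample API.
// The success threshold (ftarget) and starting point are illustrative only.
#include <chrono>
#include <cmath>
#include <iostream>
#include <vector>
#include "cmaes.h"

using namespace libcmaes;

// Rosenbrock objective, as used for all tests above.
FitFunc rosenbrock = [](const double *x, const int N)
{
  double val = 0.0;
  for (int i = 0; i < N - 1; i++)
    val += 100.0 * std::pow(x[i+1] - x[i]*x[i], 2.0) + std::pow(1.0 - x[i], 2.0);
  return val;
};

int main()
{
  const int dim = 100;          // one of the tested dimensions
  const int nruns = 10;         // runs per dimension
  const double ftarget = 1e-8;  // illustrative success threshold
  double total_ms = 0.0;
  int successes = 0;

  for (int r = 0; r < nruns; r++)
  {
    std::vector<double> x0(dim, 0.0);
    CMAParameters<> cmaparams(x0, 0.1); // initial point and step size sigma
    auto t0 = std::chrono::steady_clock::now();
    CMASolutions cmasols = cmaes<>(rosenbrock, cmaparams);
    auto t1 = std::chrono::steady_clock::now();
    // keep only runs that actually reached the optimum
    if (cmasols.best_candidate().get_fvalue() < ftarget)
    {
      total_ms += std::chrono::duration<double, std::milli>(t1 - t0).count();
      ++successes;
    }
  }
  if (successes > 0)
    std::cout << "mean time over " << successes << " successful runs: "
              << total_ms / successes << " ms" << std::endl;
  return 0;
}
```

The Shark and C timings were obtained with the same kind of loop around their respective entry points.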
The graphs above are in y-logscale: libcmaes is most of the time twice as fast as Shark for standard CMA-ES, and a third faster than the C version. The differences vary slightly with dimension.
EDIT: table of results for CMA-ES in 100-D (mean over 10 runs), all timings in milliseconds: