"Matching" libcmaes results with cma_es in R #193

LeighKorbel opened this issue Nov 27, 2018 · 9 comments
@LeighKorbel

I am trying to translate an R script into C++ that employs the cmaes library:

https://www.rdocumentation.org/packages/cmaes/versions/1.0-11/topics/cma_es
https://github.com/cran/cmaes/blob/master/R/cmaes.R

I am a novice with this algorithm and was hoping I might be able to get some helpful tips/advice. I am using libcmaes and trying to validate the results produced in R (to the extent that they can be validated, since RNGs in different languages may still produce different number streams from the same seed, despite using the same generator). I am using the 2d sphere as the fitness function. The R script that I am trying to replicate with libcmaes is rather straightforward and shown below:

library(cmaes)

seednum<-100
set.seed(seednum, kind="Mersenne-Twister")

ctl_list<-list(diag.sigma=TRUE)

initpar<-c(7.488549, 3.403088, 7.773092, 4.331335, 1.881067)

objfun<-function(pop){

  print(pop)
  
  val<-0
  
  for (element in pop)
  {
    val<-val+(element*element)
  }
  
  val
}

res<-cma_es(initpar, objfun, lower=0, upper=10, control=ctl_list)

print(res)

It appears from the print statements that the vast majority of the fitness function evaluations result in constraint violations (5596 of 5688), and the parameters seem to default to either the lower or upper bounds when this occurs. In addition, by the final iteration sigma has grown to be extremely large (2.424594e+154). The best set of parameters (0.1922440, 0.1913787, 1.5348315, 1.5386994, 0.3485390) correspond to the 155th evaluation, and the value of the fitness function corresponding to these parameters is 4.918367.
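If I read the print output right, infeasible candidates appear to be repaired by simply projecting them onto the bounds. A minimal Python sketch of that behavior (illustrative only; I have not verified this against the package source, and the function name is my own):

```python
def clamp(x, lower, upper):
    # Repair an infeasible candidate by projecting each coordinate
    # onto the nearest bound, which is what the R output suggests.
    return [min(max(xi, lower), upper) for xi in x]

print(clamp([7.5, -2.0, 12.3], 0.0, 10.0))  # [7.5, 0.0, 10.0]
```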

The corresponding implementation using libcmaes is below:

#include "cmaes.h"
#include <cmath>
#include <iostream>
#include <vector>

using namespace libcmaes;

FitFunc objfun = [](const double *x, const int N)
{
   for (int i = 0; i < N; i++)
      std::cout << x[i] << " ";
   std::cout << "\n";

   double val = 0.0;

   for (int i = 0; i < N; i++)
      val += x[i]*x[i];

   return val;
};

int main(int argc, char *argv[])
{
   std::vector<double> initpar = {7.480042, 2.880649, 8.380858, 4.620689, 1.910938};

   int D = initpar.size();

   double sigma = 0.5;
   int lambda = 4 + floor(3 * log(D));	
   int seednum = 100;

   // use std::vector rather than non-standard variable-length arrays
   std::vector<double> lbounds(D, 0.0), ubounds(D, 10.0);

   GenoPheno<pwqBoundStrategy> gp(lbounds.data(), ubounds.data(), D);

   CMAParameters<GenoPheno<pwqBoundStrategy>> cmaparams(initpar, sigma, lambda, seednum, gp);

   cmaparams.set_max_iter(100 * pow(D, 2));
   //cmaparams.set_ftarget(4.918367);
   cmaparams.set_xtolerance(0.5e-12);

   CMASolutions cmasols = cmaes<GenoPheno<pwqBoundStrategy>>(objfun, cmaparams);
   
   cmasols.print(std::cout, 0, gp);
   std::cout << std::endl;

   std::cout << "optimization took " << cmasols.elapsed_time() / 1000.0 << " seconds\n";

   return cmasols.run_status();
}

Here sigma, lambda, the maximum number of iterations, and the stopping tolerance are set to the defaults used in R (they may be redundant if they happen to be the same by default in libcmaes). The seed number and bounds match those chosen in the R script. The output is below:

best solution => f-value=3.00969e-19 / fevals=944 / sigma=0.000234238 / iter=118 / elaps=7ms / x=3.02083e-11 4.24404e-10 1.94338e-10 1.07636e-13 2.86654e-10

Here the function value, sigma, and the optimal parameters are all very small compared to the results in R. However, when I use cmaparams.set_ftarget(4.918367) to set the target value to the result found in R, I get results in the ballpark of what was found there. It seems clear to me that the issue lies with how the bounds are implemented in the R version of CMA-ES (in which the constraints are allowed to be violated). I was hoping to get some clarification on whether there is something in libcmaes that I am missing that would assist me in "replicating" (to the extent that this is possible) the results determined in R.

@LeighKorbel
Author

I think the R implementation of CMA-ES may be using a different constraint-handling method than libcmaes. I'm not certain, but it seems plausible that cma_es in R uses a penalty function while libcmaes uses a decoder function (mapping genotypes to phenotypes)?
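To illustrate the distinction I mean, here is a rough Python sketch (the penalty weight, the sigmoid decoder, and the function names are my own invention for illustration, not what either library actually does):

```python
import math

def sphere(x):
    return sum(xi * xi for xi in x)

# Penalty approach: evaluate the infeasible point as-is and add a cost
# proportional to the squared amount by which the bounds are violated.
def penalized_sphere(x, lower, upper, weight=1e4):
    violation = sum(max(lower - xi, 0.0) ** 2 + max(xi - upper, 0.0) ** 2
                    for xi in x)
    return sphere(x) + weight * violation

# Decoder approach: map an unbounded genotype into [lower, upper] and
# evaluate only the feasible phenotype (here a smooth sigmoid for
# simplicity; libcmaes uses a piecewise quadratic map instead).
def decode(x, lower, upper):
    return [lower + (upper - lower) / (1.0 + math.exp(-xi)) for xi in x]

print(penalized_sphere([11.0], 0.0, 10.0) > sphere([11.0]))  # True
print(all(0.0 <= v <= 10.0 for v in decode([-50.0, 0.0, 50.0], 0.0, 10.0)))  # True
```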

@beniz
Collaborator

beniz commented Nov 28, 2018

Hi, very possible indeed. See the 'boundary and constraint handling' section of http://cma.gforge.inria.fr/cmaes_sourcecode_page.html#testing

libcmaes uses a piecewise quadratic function. See https://github.com/beniz/libcmaes/blob/master/src/pwq_bound_strategy.cc
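Roughly, the transform is the identity well inside the box and bends quadratically into each bound over a small margin, so every genotype maps to a feasible phenotype. A simplified 1-D Python sketch (not the exact libcmaes code; among other things, the real implementation first folds the genotype into a feasible period, and the margin formulas here are only indicative):

```python
def pwq_transform(x, a, b):
    # Margins over which the map bends quadratically near each bound
    # (an indicative choice, not the exact libcmaes constants).
    al = min((b - a) / 2.0, (1.0 + abs(a)) / 20.0)
    au = min((b - a) / 2.0, (1.0 + abs(b)) / 20.0)
    # Assume x was already folded into [a - al, b + au].
    if x < a + al:   # quadratic piece touching the lower bound
        return a + (x - (a - al)) ** 2 / (4.0 * al)
    if x > b - au:   # quadratic piece touching the upper bound
        return b - (x - (b + au)) ** 2 / (4.0 * au)
    return x         # identity in the interior

print(pwq_transform(5.0, 0.0, 10.0))  # 5.0 (interior point, unchanged)
```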

@nikohansen
Collaborator

nikohansen commented Nov 29, 2018

the parameters seem to default to either the lower or upper bounds when this occurs.

This is likely to be a bad way to handle bound constraints. This section gives a quick overview of possible ways to handle bounds.

In addition, by the final iteration sigma has grown to be extremely large (2.424594e+154). The best set of parameters (0.1922440, 0.1913787, 1.5348315, 1.5386994, 0.3485390) correspond to the 155th evaluation, and the value of the fitness function corresponding to these parameters is 4.918367.

This must be considered a failure of the algorithm. As a simple first step, check what happens if the lower bounds are set to -10.

I was hoping to get some clarification on whether or not there is something in libcmaes that I am missing that would assist me in "replicating" (to the extent that they can be replicated) the results determined in R.

AFAICS, it would certainly not be desirable to replicate the results from the R code, whereas libcmaes seems to perform as expected and desired. This section gives some ideas on how to reliably test CMA-ES code. In my experience, it is difficult to avoid comprehensive (graphical) output if one wants to be reasonably certain that the code works as desired.

@LeighKorbel
Author

LeighKorbel commented Nov 29, 2018

Dr. Hansen, you are in fact the next person I was going to contact regarding these issues. Based on the commit history in libcmaes, it seems you have helped contribute to this library, and you are also responsible for pycma (not to mention the algorithm itself). I was trying to get to the bottom of this while admittedly trying to avoid the need to completely understand all of the subtleties of the algorithm, as I am simply translating an R script that employs the algorithm into C++11 to increase performance. I ran an example of pycma with the sphere as the fitness function and got results consistent with libcmaes:

import cma
import numpy as np
from math import log, floor

x = np.array([7.480042, 2.880649, 8.380858, 4.620689, 1.910938])
sigma = 0.5
lam = 4 + floor(3 * log(x.size))

'''
def objective_fct(x):
    np.set_printoptions(precision=6)
    print(x)
    val = 0.0
    N = x.size
    for element in x:
        val = val + (element * element)
    return val
'''

es = cma.CMAEvolutionStrategy(x, sigma, {'popsize': lam, 'seed': 100, 'tolfun': 0.5e-12, 'bounds': [0, 10]})

logger = cma.CMADataLogger().register(es)

while not es.stop():
    X = es.ask()			# get list of new solutions
    fit = [cma.ff.sphere(x) for x in X] # evaluate candidates
    #fit = [objective_fct(x) for x in X]  # evaluate candidates
    es.tell(X, fit)			# besides for termination only the ranking in fit is used

    # display some output
    logger.add()			# add a "data point" to the log, writing in files
    es.disp()				# uses option verb_disp with default 100

print('termination:', es.stop())
cma.s.pprint(es.best.__dict__)

which results in the output:

(4_w,8)-aCMA-ES (mu_w=2.6,w_1=52%) in dimension 5 (seed=100, Wed Nov 28 16:39:25 2018)
Iterat #Fevals   function value  axis ratio  sigma  min&max std  t[m:s]
    1      8 1.386749161321259e+02 1.0e+00 5.02e-01  5e-01  5e-01 0:00.0
    2     16 1.371489677676714e+02 1.4e+00 6.14e-01  6e-01  7e-01 0:00.0
    3     24 1.206537692512402e+02 1.5e+00 7.82e-01  7e-01  8e-01 0:00.0
  100    800 2.252395399073447e-16 2.2e+00 8.62e-04  6e-05  9e-05 0:00.1
  113    904 7.938685725558456e-18 2.4e+00 2.49e-04  1e-05  2e-05 0:00.1
('termination:', {'tolfunhist': 1e-12})
{'evals': 857,
 'evalsall': 904,
 'f': 4.9345684183753864e-18,
 'last': <cma.utilities.utils.BlancClass object at 0x7f588beaf390>,
 'x': array([1.24941026e-09, 1.66340641e-09, 5.47965378e-10, 5.53485834e-10,
       2.98435700e-12]),
 'x_geno': array([-0.05001581, -0.04998176, -0.04998953, -0.05001052, -0.05000077])}

which is clearly similar to libcmaes. Dr. Benazera was kind enough to respond previously with a link that I believe is hosted by you, in which various implementations of CMA-ES across many languages are listed. The R library cmaes is among them. It appears it was written by Olaf Mersmann and David Arnu. Did you assist them in any way? Since the sphere is a standard test objective function, the fact that the algorithm is failing here (note that it fails with the same results whether I write the objective function myself or employ f_sphere(x) from within the API) seems to be a cause for concern. The R script that I am translating does not use the sphere, but I wanted to start with a "minimal example" to ensure that results are generally consistent. Does the fact that the R library handles bound constraints in this manner mean that it is incompatible with libcmaes and pycma?

I also used your suggestion and changed the lower bounds to -10. The results from doing so with libcmaes and pycma are consistent, but they are not with cmaes in R. There are no longer any constraint violations in the R output message, but the parameter values (as well as the corresponding fitness function value) are so incredibly small as to amount to zero, with a stop condition message that "All standard deviations smaller than tolerance." The results for each with the new bounds are below:

libcmaes:

best solution => f-value=2.4514e-16 / fevals=1440 / sigma=8.04606e-07 / iter=180 / elaps=14ms / x=-5.09238e-09  7.52029e-09 -4.50174e-09  8.24281e-09  8.62803e-09
optimization took 0.014 seconds

pycma:

(4_w,8)-aCMA-ES (mu_w=2.6,w_1=52%) in dimension 5 (seed=100, Thu Nov 29 10:10:48 2018)
Iterat #Fevals   function value  axis ratio  sigma  min&max std  t[m:s]
    1      8 1.386749161321259e+02 1.0e+00 5.02e-01  5e-01  5e-01 0:00.0
    2     16 1.371489677676714e+02 1.4e+00 6.14e-01  6e-01  7e-01 0:00.0
    3     24 1.206537692512402e+02 1.5e+00 7.82e-01  7e-01  8e-01 0:00.0
  100    800 4.542180022276652e-07 1.7e+00 3.68e-03  3e-04  4e-04 0:00.1
  182   1456 2.509621746027303e-16 1.6e+00 9.43e-07  8e-09  1e-08 0:00.2
('termination:', {'tolfun': 5e-13, 'tolfunhist': 1e-12})
{'evals': 1452,
 'evalsall': 1456,
 'f': 2.5096217460273033e-16,
 'last': <cma.utilities.utils.BlancClass object at 0x7f4690f31390>,
 'x': array([ 8.22569430e-09,  5.19281468e-09, -3.05476571e-09,  1.95328341e-09,
        1.19661144e-08]),
 'x_geno': array([ 8.22569430e-09,  5.19281468e-09, -3.05476571e-09,  1.95328341e-09,
        1.19661144e-08])}

cmaes in R:

$par
[1]  1.051717e-81  9.488257e-82 -6.867787e-82  2.862592e-83  1.167765e-81

$value
[1] 3.842539e-162

$counts
function gradient 
   12768       NA 

$convergence
[1] 0

$message
[1] "All standard deviations smaller than tolerance."

$constr.violations
[1] 0

$diagnostic
$diagnostic$sigma
[1596] 3.681423e-69

So I suppose the question becomes: is there something inherently wrong with the R library's implementation of CMA-ES? I appreciate both of your input, and I apologize in advance if any of my questions are the result of my naive understanding of the algorithm.

As an aside, I am noticing that when I lower the stopping tolerance in pycma (i.e. 'tolfun') from 0.5e-12 to, say, 0.5e-3, the algorithm converges more quickly, but doing the same with libcmaes (using cmaparams.set_xtolerance(0.5e-3)) does not result in faster convergence, and the results are consistent with using 0.5e-12. Am I missing something here?

@nikohansen
Collaborator

The R library cmaes is among them. It appears it was written by Olaf Mersmann and David Arnu. Did you assist them in any way?

No, but I would credit them with being able to implement CMA-ES.

Since the sphere is a standard test objective function, the fact that the algorithm is failing (note that it fails with the same results whether I write the objective function myself or employ f_sphere(x) from within the API) here seems to be a cause for concern.

As the experiment I have seen was, to my understanding, on the bounded sphere, all of this could be caused by an overly naive boundary handling.

Does the fact that the R library handles bound constraints in this manner mean that it is incompatible with libcmaes and pycma?

Handling of bound constraints is not part of "standard" CMA-ES.

the parameter values (as well as the corresponding fitness function value) are so incredibly small as to amount to zero, with a stop condition message that "All standard deviations smaller than tolerance."

That looks reasonably fine. It just means that the same termination conditions are not applied.

So I suppose the question becomes, is there something inherently wrong with the R libraries implementation of CMA-ES?

As written above: to answer this question we need much more comprehensive experimentation and output. To find that it works on the sphere function is certainly not sufficient.

As an aside, I am noticing that when I lower the stopping tolerance in pycma (i.e. 'tolfun') from 0.5e-12 to say 0.5e-3 the algorithm converges more quickly,

I am pretty sure your interpretation is wrong. The algorithm terminates earlier, but it has also achieved less precision. That makes perfect sense. I strongly suggest using the cma.plot() method to visualize the output (just type cma.plot() in Python, IPython or Jupyter). That way you can immediately see the convergence speed.

but with libcmaes doing the same (using cmaparams.set_xtolerance(0.5e-3)) does not result in faster convergence and the results are consistent with using 0.5e-12? Am I missing something here?

My hunch is that the new parameter setting was ignored. BTW, I believe you can use cma.plot() just the same on libcmaes.

@LeighKorbel
Author

Dr. Hansen, many thanks for the thorough response. As I originally stated, the end goal for me here is to be able to replicate the results being produced with cmaes in R with libcmaes. You mentioned in your last response that:

That looks reasonably fine. It just means that not the same termination conditions are applied.

in regards to the optimization results coming from the R script (which do not match the very similar results produced with libcmaes and pycma). I have found that I can produce the same results with cmaes in R by changing the stopping tolerance "stop.tolx" to a much larger value than 0.5e-12.

The issue I am having now is regarding changing the tolerances in libcmaes. There are obviously multiple different avenues for halting the algorithm. With pycma I am changing three values to ensure that the results are consistent with cmaes in R:

'tolx': 1.4e-20, 'tolfun': 0.5e-64, 'tolfunhist': 1e-64

Where tolfun and tolfunhist were set arbitrarily small to ensure that termination is triggered by tolx. When I do this, the algorithm terminates in the range of where I want it to (i.e. it is in the range of the results in R that I am trying to match). I am having trouble doing the same with libcmaes. I see in the API that there are two CMAParameters member functions that should achieve the same as within pycma:

cmaparams.set_xtolerance(1.4e-20); // sets parameter tolerance as stopping criteria for TolX
cmaparams.set_ftolerance(1e-64); // sets function tolerance as stopping criteria for TolHistFun

I cannot seem to find the equivalent member function corresponding to 'tolfun' in pycma. This is a problem right now because, as I mentioned previously, dropping the xtolerance is not having any effect, and that is likely due to another stopping criterion being met first (I suspect it may be the equivalent of tolfun).

The nice thing about pycma is that the output tells you which stopping criterion triggered the algorithm to stop. That might be a useful addition to libcmaes (if it doesn't already exist). If I can set the other criteria arbitrarily low so as to allow a much smaller xtol to trigger the stop, I think I can finally match the R cmaes results that I am trying to translate into C++. Thanks again for your assistance!

@nikohansen
Collaborator

nikohansen commented Dec 3, 2018

I can't speak to the libcmaes specifics, but just want to remark that 'tolx': 1.4e-20 is generally not different from zero, unless the optimum of the function is at zero in at least some parameters (which means the function is an artificial test function). The relevant argument is that 1 + 1e-20 is numerically equal to 1. (An analogous argument holds for the f-value and 'tolfun'.)

Or in other words, the results you are trying to match are specific to the situation where the optimum is at zero.
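The numerical argument can be checked directly in any IEEE-754 double arithmetic, e.g. in Python:

```python
import sys

# Adding 1e-20 to a value of order 1 is lost entirely to rounding,
# because it is far below the double-precision epsilon (~2.2e-16).
print(1.0 + 1e-20 == 1.0)      # True
print(sys.float_info.epsilon)  # 2.220446049250313e-16
```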

@LeighKorbel
Author

On that note, the last question I have is in regards to the stopping criteria in libcmaes. I can't quite grasp what is causing the algorithm to terminate when I set both the xtolerance and ftolerance anywhere in the range of 1e-12 to zero. When doing that I get the following values for the sphere:

best solution => f-value=6.67924e-16 / fevals=1368 / sigma=1.05139e-06 / iter=171 / elaps=24ms / x=-1.20802e-08 1.87715e-08 8.66736e-09 -2.89913e-09 9.27866e-09

I tried going through each of the stopping criteria and setting them equal to false and the only thing that had any effect was setting TOLHISTFUN to false:

   enum CMAStopCritType
   {
      CONT = 0,
      AUTOMAXITER = 7,
      TOLHISTFUN = 1, 		// convergence
      EQUALFUNVALS = 5, 	// partial success, user error
      TOLX = 2, 		// partial success
      TOLUPSIGMA = -13,
      STAGNATION = 6, 		// partial success
      CONDITIONCOV = -15, 	// error, user action needed
      NOEFFECTAXIS = 3, 	// partial success
      NOEFFECTCOOR = 4, 	// partial success
      MAXFEVALS = 8,
      MAXITER = 9,
      FTARGET = 10 		// success
   };

   cmaparams.set_stopping_criteria(1, false);	

When I do that I get:

best solution => f-value=3.95011e-120 / fevals=9856 / sigma=3.98217e-48 / iter=1232 / elaps=111ms / x=-3.22247e-61 -1.3958e-60 8.64685e-61 7.07763e-61 -8.05849e-61

The reason I ask is because the stopping tolerances seem to be handled differently in the R library and the xtolerance that I am trying to match in R halts the algorithm with values in the range of:

-5.567399e-21 5.749726e-21 1.027422e-20 -7.806447e-21 4.617487e-21

and I can't seem to achieve the same here because apparently something other than the xtolerance is causing the algorithm to halt.

@nikohansen
Collaborator

I am not sure what the question is. As you found out, TOLHISTFUN is what terminates the algorithm; if you now set this to zero, shouldn't the xtolerance become effective?
