
Noisy objective function not taken into account in SimpleExperiment when suggesting best parameters #501

Closed
LukeAI opened this issue Feb 16, 2021 · 8 comments
Labels
bug (Something isn't working), fixready (Fix has landed on master)

Comments

@LukeAI

LukeAI commented Feb 16, 2021

I've been using Ax for hyperparameter optimisation of a DNN that does regression on images, like this:

    # imports / Sobol setup omitted from my original snippet; roughly:
    from ax import SimpleExperiment
    from ax.modelbridge.registry import Models

    # EXPERIMENT_NAME, dnn_search_space and train_cross are defined elsewhere in my script
    exp = SimpleExperiment(
        name=EXPERIMENT_NAME,
        search_space=dnn_search_space,
        evaluation_function=train_cross,
        objective_name="regression_error",
        minimize=True,
    )

    # sobol sample search space
    sobol = Models.SOBOL(search_space=exp.search_space)
    for i in range(20):
        exp.new_trial(generator_run=sobol.gen(1))

    # converge on best hyperparams
    best_arm = None
    for i in range(50):
        gpei = Models.GPEI(experiment=exp, data=exp.eval())
        generator_run = gpei.gen(1)
        best_arm, _ = generator_run.best_arm_predictions
        exp.new_trial(generator_run=generator_run)
        best_parameters = best_arm.parameters
        print(str(i) + " best params " + str(best_parameters))

and I have found that the "best parameters" recommended by Ax tend not to change very much. This suggests that Ax is giving me the hyperparameters that were evaluated and found to give the best result.

The problem with this is that the best results tend to be flukes: the training process is of course noisy and non-deterministic, and settings that make it more stochastic, such as very high learning rates and small batch sizes, tend to give more varied results. Those more varied results will happen to include both the best and the worst scores while being, on average, worse than smoother, more stable parameter sets. But Ax seems to just take the best single result it finds and recommend that.

Is there some way of using Ax in which it will assume a noisy underlying objective function and recommend the best hyperparameters based on an interpolation that uses all of the information available to it, rather than just whichever trial scored best one time?

@stevemandala
Contributor

Hey @LukeAI, thanks for raising this. I believe this was caused by a bug in SimpleExperiment assuming 0.0 SEM. We recently pushed a fix to master, which should ensure we don't default to noiseless modeling when SEM isn't provided: f6ccdd7
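For context, an evaluation function can also report the noise level explicitly by returning a (mean, SEM) pair, or a dict mapping the metric name to one, instead of a bare float. A rough sketch, e.g. a variant of your train_cross along these lines, where run_one_training is a hypothetical helper and the three-replicate averaging is only illustrative:

    import numpy as np

    def train_cross(parameterization, weight=None):
        # run a few replicate trainings and report mean and SEM of the objective,
        # so the model doesn't have to infer the noise level on its own
        scores = [run_one_training(parameterization) for _ in range(3)]
        mean = float(np.mean(scores))
        sem = float(np.std(scores, ddof=1) / np.sqrt(len(scores)))
        return {"regression_error": (mean, sem)}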

@stevemandala stevemandala added bug Something isn't working fixready Fix has landed on master. labels Feb 16, 2021
@lena-kashtelyan lena-kashtelyan changed the title from "Advice re. Ax / noisy object function" to "Noisy objective function not taken into account in SimpleExperiment when suggesting best parameters" Feb 16, 2021
@LukeAI
Author

LukeAI commented Feb 17, 2021

@stevemandala ok, thanks for the reply!
In the meantime, how can I set SEM? It doesn't appear to be in the constructor.

@lena-kashtelyan
Contributor

@LukeAI, you'd have to apply this change: f6ccdd7#diff-58c442e1539c8eedb46f78c90254cd976fb7462c0413543fa1c402cd5c6d5f3bR199-R201 to the SimpleExperiment code.

@LukeAI
Author

LukeAI commented Feb 18, 2021

Hmm, ok... do you know when the patch will come through on pip? Or is there some other workaround?

@lena-kashtelyan
Contributor

lena-kashtelyan commented Feb 18, 2021

@LukeAI, the patch should be part of the new stable version (and therefore on pip) within the next two weeks. In the meantime, you could install Ax master like this if you wanted: https://github.com/facebook/Ax#latest-version.

A good alternative would be to just not use SimpleExperiment, as it is slated for deprecation in favor of the Service API in the near future anyway (tutorial: https://ax.dev/tutorials/gpei_hartmann_service.html). If you want some control over the generation strategy used by the AxClient (so you can set a custom number of trials to generate from Sobol, for instance), check out #199 for how to configure a custom generation strategy and pass it to AxClient.
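A rough sketch of what that could look like (the parameter names, ranges, and trial counts here are placeholders, and train_cross is the same evaluation function as above):

    from ax.service.ax_client import AxClient
    from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
    from ax.modelbridge.registry import Models

    # 20 quasi-random Sobol trials to seed the space, then GP + EI for the rest
    # (the step-size argument is num_trials; some older releases call it num_arms)
    gs = GenerationStrategy(steps=[
        GenerationStep(model=Models.SOBOL, num_trials=20),
        GenerationStep(model=Models.GPEI, num_trials=-1),
    ])

    ax_client = AxClient(generation_strategy=gs)
    ax_client.create_experiment(
        name=EXPERIMENT_NAME,
        parameters=[
            {"name": "lr", "type": "range", "bounds": [1e-5, 1e-1], "log_scale": True},
            {"name": "batch_size", "type": "range", "bounds": [8, 128]},
        ],
        objective_name="regression_error",
        minimize=True,
    )

    for _ in range(70):
        params, trial_index = ax_client.get_next_trial()
        # raw_data can be a bare float, a (mean, SEM) pair, or a dict of metric -> (mean, SEM)
        ax_client.complete_trial(trial_index=trial_index, raw_data=train_cross(params))

    best_parameters, values = ax_client.get_best_parameters()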

@lena-kashtelyan
Contributor

This should now be fixed in the latest stable release, 0.1.20.

@LukeAI
Author

LukeAI commented Feb 26, 2021

@lena-kashtelyan I have upgraded to 0.1.20 and run the same code as above, but I am observing the same behaviour: no change in "best parameters" over time.

Maybe I have misunderstood how Ax works? I would expect each trial run to add new information that would change the recommended hyperparameters, at least a tiny bit.

Since this doesn't happen, I guess I am just seeing the specific hyperparams that gave the best result in one trial, rather than an interpolation based on all available information.

Is this correct? If so, is this intentional/expected behaviour? Or am I using Ax incorrectly?
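For reference, this is roughly how I've been checking, continuing from the gpei / generator_run objects in my snippet above and printing the model's own prediction for the recommended arm:

    # inside the GPEI loop from my snippet above
    best_arm, best_arm_pred = generator_run.best_arm_predictions
    print("recommended params:", best_arm.parameters)
    print("model prediction for that arm (means, covariances):", best_arm_pred)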

@ldworkin
Contributor

@LukeAI, is it possible for you to get us a reproducible example?
