Noisy objective function not taken into account in SimpleExperiment when suggesting best parameters #501
Comments
@stevemandala ok, thanks for the reply!
@LukeAI, you'd have to apply this change: f6ccdd7#diff-58c442e1539c8eedb46f78c90254cd976fb7462c0413543fa1c402cd5c6d5f3bR199-R201 to the `SimpleExperiment`.
Hmm, ok... do you know when the patch will come through on pip? Or is there some other workaround?
@LukeAI, the patch should be part of the new stable version (and therefore on pip) within the next two weeks. In the meantime, you could install Ax master like this if you wanted: https://github.com/facebook/Ax#latest-version. A good alternative would be to just not use `SimpleExperiment`.
This should now be fixed in the latest stable release, 0.1.20.
@lena-kashtelyan I have upgraded to 0.1.20 and ran the same code as above, but I am observing the same behaviour: no change in "best parameters" over time. Maybe I have misunderstood how Ax works? I would expect each trial run to add new information that changes the recommended hyperparameters, at least a tiny bit. Since this doesn't happen, I guess I am just seeing the specific hyperparams that gave the best result in one trial, rather than an interpolation based on all available information. Is this correct? If so, is this intentional/expected behaviour? Or am I using Ax incorrectly?
@LukeAI, is it possible for you to get us a reproducible example?
I've been doing Ax hyperparameter optimisation for a DNN doing regression on images like this:
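(The code snippet that originally followed here did not survive in this copy of the issue. Below is a minimal hypothetical sketch of the kind of `SimpleExperiment` loop described, written in the style of the Ax developer-API tutorial; the hyperparameter names, ranges, trial counts, and the `train_and_evaluate` stand-in are illustrative assumptions, not the reporter's actual code.)

```python
# Hypothetical reconstruction, not the reporter's actual code.
import numpy as np

from ax import ParameterType, RangeParameter, SearchSpace, SimpleExperiment
from ax.modelbridge.registry import Models


def train_and_evaluate(parameterization, weight=None):
    # Stand-in for a single DNN training + validation run with the given
    # hyperparameters. Returning {metric: (mean, SEM)} with SEM = 0.0 tells
    # Ax the measurement is exact, even though the run is actually noisy.
    lr = parameterization["lr"]
    batch_size = parameterization["batch_size"]
    noisy_val_loss = (
        (np.log10(lr) + 3.0) ** 2 + 1.0 / batch_size + float(np.random.normal(0, 0.1))
    )
    return {"val_loss": (noisy_val_loss, 0.0)}


exp = SimpleExperiment(
    name="dnn_regression_hpo",
    search_space=SearchSpace(
        parameters=[
            RangeParameter(
                name="lr",
                parameter_type=ParameterType.FLOAT,
                lower=1e-5,
                upper=1e-1,
                log_scale=True,
            ),
            RangeParameter(
                name="batch_size",
                parameter_type=ParameterType.INT,
                lower=4,
                upper=64,
            ),
        ]
    ),
    evaluation_function=train_and_evaluate,
    objective_name="val_loss",
    minimize=True,
)

# Quasi-random initialization, then Bayesian optimization (GP + expected improvement).
sobol = Models.SOBOL(exp.search_space)
for _ in range(5):
    exp.new_trial(generator_run=sobol.gen(1))

for _ in range(15):
    gpei = Models.GPEI(experiment=exp, data=exp.eval())
    exp.new_trial(generator_run=gpei.gen(1))
```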
and I have found that the "best parameters" recommended by Ax tend not to change very much. This suggests that Ax is giving me the hyperparameters that were evaluated and found to give the best result.
The problem with this is that the best results tend to be flukes: the training process is of course noisy and non-deterministic, and settings that make it more stochastic, such as very high learning rates and small batch sizes, tend to give more varied results. Those more varied results will happen to include both the best and the worst outcomes while being, on average, worse than smoother, more stable parameter sets. But Ax seems to just take the best result it finds and recommend that.
Is there some way of using Ax in which it will assume a noisy underlying objective function and recommend the best hyperparameters based on an interpolation that uses all of the information available to it, rather than just whichever trial happened to score best one time?
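One way to make the noise explicit, assuming you can afford to train each candidate configuration more than once, is to repeat the run and report the mean together with its standard error (SEM) instead of a single measurement; a non-zero SEM tells the GP surrogate that each observation is a noisy estimate of the underlying objective. Here is a minimal sketch continuing the hypothetical setup above (`train_once`, `noisy_eval`, and `N_REPEATS` are illustrative names, not part of the Ax API):

```python
import numpy as np

N_REPEATS = 3  # repeated training runs per hyperparameter configuration


def train_once(parameterization):
    # Placeholder for a single (noisy) DNN training + validation run.
    lr = parameterization["lr"]
    batch_size = parameterization["batch_size"]
    return (
        (np.log10(lr) + 3.0) ** 2 + 1.0 / batch_size + float(np.random.normal(0, 0.1))
    )


def noisy_eval(parameterization, weight=None):
    # Evaluate the same configuration several times and report (mean, SEM),
    # so the surrogate model treats each observation as a noisy measurement
    # rather than an exact value.
    losses = [train_once(parameterization) for _ in range(N_REPEATS)]
    mean = float(np.mean(losses))
    sem = float(np.std(losses, ddof=1) / np.sqrt(N_REPEATS))
    return {"val_loss": (mean, sem)}
```

This function would be passed as `evaluation_function=noisy_eval` in a setup like the sketch above. With a non-zero SEM the GP does not have to interpolate every observation exactly, which reduces the chance that a single lucky run dominates the suggested best parameters.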