[Bug] An exception should be raised when the training data has requires_grad=True #2253
Comments
Is there a particular use case you have for training data with requires_grad=True?
Thanks for reporting this issue. I'm not surprised this fails, because BoTorch figures out which tensors are parameters that need to be optimized by looking at which have requires_grad=True.
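(For illustration only, this is not BoTorch internals: judged by requires_grad alone, a data tensor that accidentally requires grad is indistinguishable from a trainable parameter.)

```python
import torch

# Illustration: by the requires_grad criterion alone, a data tensor that
# accidentally requires grad looks just like a genuine model parameter.
data = torch.rand(10, 2, requires_grad=True)    # training data -- should not require grad
param = torch.nn.Parameter(torch.rand(2))       # genuine trainable parameter
print(data.requires_grad, param.requires_grad)  # True True
```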
Sorry, perhaps I should give a bit of context. This is actually an issue I ran into without deliberately setting requires_grad; the example here is just a demonstration so anyone can reproduce the bug. I want to use risk-averse BO for model predictive control, for which I first built a model in PyTorch that maps my control variable to my objective. However, fit_gpytorch_mll always failed until I figured out that the issue went away once I used torch.no_grad().
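(For reference, one way to produce the training targets without attaching them to the autograd graph; the objective function below is a hypothetical stand-in for the user's PyTorch model.)

```python
import torch

def objective(u):
    # Hypothetical stand-in for a PyTorch model mapping controls to the objective.
    return (u ** 2).sum(dim=-1, keepdim=True)

train_X = torch.rand(20, 3, dtype=torch.double)
with torch.no_grad():                 # train_Y ends up with grad_fn=None
    train_Y = objective(train_X)
# equivalently: train_Y = objective(train_X).detach()
```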
Just to make sure there is no confusion here: you are not putting the fit_gpytorch_mll() call into a no_grad() context, are you? It may make sense on our end to explicitly check whether the training data requires grad when calling fit_gpytorch_mll to emit a more informative error message.
No, I am not putting the fit_gpytorch_mll() call into a no_grad() context. Yes, I agree with your suggestion to explicitly check whether the training data requires grad when calling fit_gpytorch_mll to emit a more informative error message.
Summary: Addresses pytorch#2253. When the model inputs have gradients enabled, we get errors during model fitting.
Differential Revision: D60184299
I looked into adding an exception in model input validation, but disallowing inputs that require gradients breaks acquisition functions that utilize fantasy models. If we wanted to prevent this error, the validation would have to happen in fit_gpytorch_mll.
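(A hypothetical sketch of what such a fit-time check might look like, placed before optimization rather than in model input validation; this helper is not part of BoTorch's actual API.)

```python
import torch

def check_training_data_detached(model):
    # Hypothetical helper: raise an informative error if any training tensor
    # is still attached to the autograd graph.
    tensors = list(model.train_inputs) + [model.train_targets]
    for t in tensors:
        if t.requires_grad or t.grad_fn is not None:
            raise RuntimeError(
                "Training data requires grad; detach it (e.g. with torch.no_grad() "
                "or .detach()) before fitting the model."
            )
```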
🐛 Bug
The train_X and train_Y that go into SingleTaskGP will lead to a failing fit_gpytorch_mll if they have requires_grad=True, i.e. grad_fn is not None. The error goes away when requires_grad=False.
To reproduce
**Code snippet to reproduce**
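(The original snippet is not reproduced above; the following is a minimal sketch, assuming a recent BoTorch/GPyTorch install, that triggers the failure described.)

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

# Training data that (unintentionally) carries autograd history.
train_X = torch.rand(10, 2, dtype=torch.double, requires_grad=True)
train_Y = (train_X ** 2).sum(dim=-1, keepdim=True)  # grad_fn is not None

model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)  # raises the autograd error quoted below
```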
**Stack trace/error message**
Expected Behavior
An informative error message should be thrown. Instead, fitting currently fails with:
"Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward"
System information
Please complete the following information:
Additional context