[Documentation] Documentation in botorch.models.gpytorch [Bug] #2446

Open
brunzema opened this issue Jul 25, 2024 · 1 comment
Labels
bug Something isn't working

Comments


brunzema commented Jul 25, 2024

🐛 Bug

In the documentation of GPyTorchModel, the example provided for the method condition_on_observations (https://github.com/pytorch/botorch/blob/main/botorch/models/gpytorch.py#L206) does not work. For some time now, the output dimension has had to be explicit. Furthermore, even after fixing this, the snippet still fails because condition_on_observations requires the model to first be called on some data so that the test-independent caches exist.

To reproduce

**Code snippet to reproduce**

import torch
from botorch.models.gp_regression import SingleTaskGP

train_X = torch.rand(20, 2)
train_Y = torch.sin(train_X[:, 0]) + torch.cos(train_X[:, 1])
model = SingleTaskGP(train_X, train_Y)

new_X = torch.rand(5, 2)
new_Y = torch.sin(new_X[:, 0]) + torch.cos(new_X[:, 1])
model = model.condition_on_observations(X=new_X, Y=new_Y)

**Stack trace/error message**

BotorchTensorDimensionError: An explicit output dimension is required for targets. Expected Y with dimension 2 (got Y.dim()=1).


With explicit output dimension

**Code snippet to reproduce**

import torch
from botorch.models.gp_regression import SingleTaskGP

train_X = torch.rand(20, 2)
train_Y = torch.sin(train_X[:, 0]) + torch.cos(train_X[:, 1])
model = SingleTaskGP(train_X, train_Y.unsqueeze(-1))

new_X = torch.rand(5, 2)
new_Y = torch.sin(new_X[:, 0]) + torch.cos(new_X[:, 1])
model = model.condition_on_observations(X=new_X, Y=new_Y)

**Stack trace/error message**

RuntimeError: Fantasy observations can only be added after making predictions with a model so that all test independent caches exist. Call the model on some data first!


With explicit output dimension and evaluating the model on some data

**Code snippet to reproduce**

import torch
from botorch.models.gp_regression import SingleTaskGP

train_X = torch.rand(20, 2)
train_Y = torch.sin(train_X[:, 0]) + torch.cos(train_X[:, 1])
model = SingleTaskGP(train_X, train_Y.unsqueeze(-1))

# call the model on some data first so that the test-independent caches exist
model.eval()
test_X = torch.rand(10, 2)
model(test_X)

new_X = torch.rand(5, 2)
new_Y = torch.sin(new_X[:, 0]) + torch.cos(new_X[:, 1])
model = model.condition_on_observations(X=new_X, Y=new_Y)

**Stack trace/error message**
No error message

Expected Behavior

I think the expected behavior is clear for the first two cases, but the documentation should be adjusted. In practice, condition_on_observations will presumably always be called after some model evaluations have taken place (otherwise one could simply pass the data during initialization).
For the last case, I am unsure what the expected behavior should be. When creating the model, the output dimension has to be explicit. However, with the current implementation of condition_on_observations, an explicit output dimension does not seem to be required. Maybe this should be changed for overall consistency.
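To make the inconsistency concrete, here is a minimal self-contained sketch (it just recapitulates the snippets above, with the same variable names): SingleTaskGP raises a BotorchTensorDimensionError without an explicit output dimension, while condition_on_observations currently accepts new_Y both with and without one for a single-output model.

import torch
from botorch.models.gp_regression import SingleTaskGP

train_X = torch.rand(20, 2)
train_Y = torch.sin(train_X[:, 0]) + torch.cos(train_X[:, 1])
model = SingleTaskGP(train_X, train_Y.unsqueeze(-1))  # constructor requires the explicit output dimension

# evaluate the model once so that the test-independent caches exist
model.eval()
model(torch.rand(10, 2))

new_X = torch.rand(5, 2)
new_Y = torch.sin(new_X[:, 0]) + torch.cos(new_X[:, 1])

# both calls currently run without error for a single-output model
conditioned_a = model.condition_on_observations(X=new_X, Y=new_Y)                # no output dimension
conditioned_b = model.condition_on_observations(X=new_X, Y=new_Y.unsqueeze(-1))  # explicit output dimension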

System information

Please complete the following information:

  • BoTorch Version 0.11.3
  • GPyTorch Version 1.12
  • PyTorch Version 2.4.0
  • Computer OS macOS 14.1

Additional context

Happy to create a pull request for this, but I was unsure about the behavior of condition_on_observations, namely whether it is intended to be possible only after the model has been called, and wanted to double-check. (There is another example with the same issues here: https://github.com/pytorch/botorch/blob/main/botorch/models/gpytorch.py#L468)

brunzema added the bug label Jul 25, 2024
saitcakmak (Contributor) commented

Hi @brunzema. Thanks for reporting. Some of these code examples are quite outdated or they may be incomplete since they are only intended to demonstrate the concept. That being said, we'd be happy to accept a PR fixing these.

A minor correction on your working example: you should have an explicit output dimension for new_Y as well. It works as is because condition_on_observations squeezes the output dimension for single-output models (since that is how they are represented in GPyTorch), but it may not work in other cases.
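
For reference, a sketch of how the working example above might look with that correction applied (explicit output dimensions for both train_Y and new_Y, and a model call before conditioning):

import torch
from botorch.models.gp_regression import SingleTaskGP

train_X = torch.rand(20, 2)
train_Y = torch.sin(train_X[:, 0]) + torch.cos(train_X[:, 1])
model = SingleTaskGP(train_X, train_Y.unsqueeze(-1))  # explicit output dimension

# call the model on some data first so that the caches exist
model.eval()
model(torch.rand(10, 2))

new_X = torch.rand(5, 2)
new_Y = torch.sin(new_X[:, 0]) + torch.cos(new_X[:, 1])
model = model.condition_on_observations(X=new_X, Y=new_Y.unsqueeze(-1))  # explicit output dimension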
