[Bug] batch_initial_conditions shouldn't have to satisfy nonlinear_inequality_constraints #2624

Open
slishak-PX opened this issue Nov 14, 2024 · 3 comments
Labels: bug (Something isn't working)

@slishak-PX (Contributor) commented Nov 14, 2024

🐛 Bug

When using nonlinear_inequality_constraints in optimize_acqf, you are required to set batch_initial_conditions, and those initial conditions must satisfy the constraints. This seems unnecessary: SLSQP is capable of starting from an infeasible initial condition. And if the only reason batch_initial_conditions must be set at all is to force the user to provide a feasible IC, then that requirement could be relaxed too.

I imagine the issue could also be worked around by using a DeterministicModel with an outcome constraint, but that does not work with analytic acquisition functions.

raise ValueError(
    "`batch_initial_conditions` must satisfy the non-linear inequality "
    "constraints."
)
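
For reference, SciPy's SLSQP does accept an infeasible starting point. A minimal standalone sketch (not from the issue; it reuses the objective and constraint from the reproduction below, with a deliberately infeasible x0):

from scipy.optimize import minimize

# The feasible region of the constraint below is 0.3 <= x <= 0.7;
# x0 = 0.1 violates it, yet SLSQP still converges to the feasible optimum.
res = minimize(
    lambda x: (x[0] - 0.5) ** 2 + x[0],  # objective from the reproduction
    x0=[0.1],                            # infeasible starting point
    method="SLSQP",
    bounds=[(0.0, 1.0)],
    constraints=[
        # SciPy convention: "ineq" constraints are feasible when fun(x) >= 0
        {"type": "ineq", "fun": lambda x: -((x[0] - 0.5) ** 2 * 50 - 2)},
    ],
)
print(res.x, res.success)  # approximately [0.3], True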

To reproduce

**Code snippet to reproduce**

import torch
from botorch.acquisition import UpperConfidenceBound
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

def objective(x):
    return (x[..., 0] - 0.5) ** 2 + x[..., 0]


def constraint(x):
    return (x[..., 0] - 0.5) ** 2 * 50 - 2


n_train = 64
device = torch.device("cpu")

train_x = torch.rand(n_train, 1, dtype=torch.float64, device=device)
train_y = objective(train_x)
con_y = constraint(train_x)  # not used below; the constraint is enforced directly in optimize_acqf

bounds = torch.vstack([torch.zeros(1, 1), torch.ones(1, 1)]).to(train_x)  # match training dtype/device

model = SingleTaskGP(
    train_x,
    train_y[:, None],
)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
_ = fit_gpytorch_mll(mll)

acqf = UpperConfidenceBound(model, beta=4)

initial_condition = 0.33
candidates, value = optimize_acqf(
    acqf,
    bounds,
    q=1,
    num_restarts=1,
    raw_samples=1,
    nonlinear_inequality_constraints=[
        # feasible where -constraint(x) >= 0; True marks an intra-point constraint
        (lambda x: -constraint(x), True),
    ],
    batch_initial_conditions=torch.tensor([[[initial_condition]]], dtype=torch.float64),
)

**Stack trace/error message**

ValueError: `batch_initial_conditions` must satisfy the non-linear inequality constraints.

Expected Behavior

If the exception is commented out, the same candidate is found regardless of whether initial_condition is feasible or infeasible. This demonstrates that, in this case at least, the exception prevents use cases where a feasible region is hard to find and you want the optimiser to find it for you.

System information

Please complete the following information:

  • BoTorch version: 0.12.0
  • GPyTorch version: 1.13
  • PyTorch version: 2.5.1+cu124
  • OS: Linux

Additional context

NA

@slishak-PX slishak-PX added the bug Something isn't working label Nov 14, 2024
@slishak-PX slishak-PX changed the title [Bug] batch_initial_conditions must satisfy the non-linear inequality constraints. [Bug] batch_initial_conditions must satisfy nonlinear_inequality_constraints Nov 14, 2024
@slishak-PX slishak-PX changed the title [Bug] batch_initial_conditions must satisfy nonlinear_inequality_constraints [Bug] batch_initial_conditions shouldn't have to satisfy nonlinear_inequality_constraints Nov 14, 2024
@Balandat (Contributor) commented

cc @dme65 who introduced this check, but I believe it was in a context where we could not simply use SLSQP so making sure that the ICs satisfied the constraints was necessary. I guess we could potentially make this a warning in cases when we use optimizers that can handle infeasible ICs.
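
A rough sketch of that relaxation (hypothetical condition names, not BoTorch's actual internals):

import warnings

# `ics_feasible` and `optimizer_handles_infeasible_x0` are hypothetical flags
# standing in for whatever the optimization setup already knows.
if not ics_feasible:
    if optimizer_handles_infeasible_x0:  # e.g. SLSQP
        warnings.warn(
            "`batch_initial_conditions` does not satisfy the non-linear "
            "inequality constraints; continuing since the optimizer can "
            "start from an infeasible point.",
            RuntimeWarning,
        )
    else:
        raise ValueError(
            "`batch_initial_conditions` must satisfy the non-linear "
            "inequality constraints."
        )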

@slishak-PX (Contributor, Author) commented

I just spotted this explanation in the docstring:

> x0: The starting point for SLSQP. We return this starting point in (rare)
>     cases where SLSQP fails and thus require it to be feasible.

So the motivation wasn't necessarily to enforce a feasible starting point; it was to ensure the returned candidate is feasible even if the optimiser fails.

In this case, it probably makes sense to raise the warning (or exception) only in the case that the optimiser fails to find a feasible point.
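
As pseudocode (hypothetical helper names, just to show where the check would move):

result = run_slsqp(x0)  # hypothetical wrapper around the SLSQP call
if result.success:
    candidate = result.x
else:
    # x0 is only returned on this failure path, so only check it here
    if not all(g(x0) >= 0 for g, _ in nonlinear_inequality_constraints):
        raise ValueError(
            "SLSQP failed and `batch_initial_conditions` is infeasible, "
            "so a feasible candidate cannot be returned."
        )
    candidate = x0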

Side-note: it might be confusing that nonlinear_inequality_constraints are feasible when the indicator is positive but outcome_constraint has the opposite convention.
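
For reference, the two conventions side by side (a sketch based on the respective docstrings, using the constraint from the reproduction above):

# optimize_acqf's nonlinear_inequality_constraints: feasible iff callable(x) >= 0,
# hence the negation in the reproduction above:
nonlinear_ineq = [(lambda x: -constraint(x), True)]

# Outcome constraints (e.g. in ConstrainedMCObjective): feasible iff callable(samples) <= 0,
# i.e. the opposite sign convention (hypothetical: constraint is the second model output):
outcome_cons = [lambda Z: Z[..., 1]]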

@esantorella (Member) commented

> In this case, it probably makes sense to raise the warning (or exception) only in the case that the optimiser fails to find a feasible point.

That makes sense to me.

> Side-note: it might be confusing that nonlinear_inequality_constraints are feasible when the indicator is positive but outcome_constraint has the opposite convention.

Yeah, great point. And we could use better documentation on how and where to define constraints.
