Commit 490bf8f

SamuelGabriel authored and facebook-github-bot committed
Use batched L-BFGS-B in gen_candidates_scipy and change the definition of batch_limit (#2892)
Summary:
Pull Request resolved: #2892

This diff changes our acquisition function optimization so that, in the cases where we use L-BFGS-B, each batch element gets its own L-BFGS-B state/optimizer. This yields a good speedup for acquisition function optimization with L-BFGS-B: roughly 4x can be expected. It could be pushed closer to 10x by also changing the `batch_limit`, but we defer that to another PR, since it might lead to memory issues. Mixed optimization is barely sped up by this PR; a follow-up addresses that.

This PR changes our optimization routine to accept a new option keyword, `max_optimization_problem_aggregation_size`, which takes over part of what `batch_limit` previously meant. The new meanings are (see the usage sketch after this summary):

1. `batch_limit` is the maximum batch size you want to evaluate at once during optimization (with gradients), since larger batches might cause OOMs.
2. `max_optimization_problem_aggregation_size` is the maximum number of acquisition function optimizations to treat as a single optimization problem. It is only used by non-parallel optimizers (i.e., not batched L-BFGS-B or `gen_candidates_torch`).

## Why does this PR speed things up?

1. We converge faster, as each sub-problem is simpler. This should be particularly pronounced for low-dimensional problems.
2. When a sub-problem has not converged, only that sub-problem is re-evaluated, so later iterations are quicker and total compute drops.

## Caveats?

None, apart from slightly increased per-step times due to the Python loop in our batched L-BFGS-B implementation.

## Notes on the state of gen_candidates_scipy before this PR (stack)

`gen_candidates_scipy` does not properly support fixed features set to `None`. In some cases the `None` entry is silently ignored, e.g.:

```
from botorch.generation.gen import gen_candidates_scipy
import torch

acquisition_function = lambda x: -x[:, :, :].sum(-1).sum(-1)

gen_candidates_scipy(
    initial_conditions=torch.zeros(1, 1, 3),
    inequality_constraints=[
        (torch.tensor([0, 1]), torch.tensor([1.0, 1.0]), 1.0)
    ],
    # fixed_features={0: torch.tensor([0.8])},
    fixed_features={0: None},
    acquisition_function=acquisition_function,
    lower_bounds=torch.zeros(3),
    upper_bounds=torch.ones(3),
    options={"batch_limit": 1, "with_grad": False},
)
```

and in other cases it breaks inequality constraints:

```
from botorch.generation.gen import gen_candidates_scipy
import torch

acquisition_function = lambda x: x[:, :, :1].sum(-1).sum(-1)

gen_candidates_scipy(
    initial_conditions=torch.zeros(1, 1, 3),
    inequality_constraints=[
        (torch.tensor([0, 1]), torch.tensor([1.0, 1.0]), 1.0)
    ],
    # fixed_features={0: torch.tensor([0.8])},
    fixed_features={0: 1.0, 2: None},
    acquisition_function=acquisition_function,
    lower_bounds=torch.zeros(3),
    upper_bounds=torch.ones(3),
    options={"batch_limit": 1},
)
```

That is why this diff drops support for `None`-valued `fixed_features` entirely.

Reviewed By: Balandat

Differential Revision: D77043251

fbshipit-source-id: 306ac9f9051f7f841599161f3e8f533c69cba179
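To make the two options concrete, here is a minimal usage sketch. It is not taken from this diff: the model, data, and numbers are toy placeholders, and it assumes `max_optimization_problem_aggregation_size` is passed through the same `options` dict as `batch_limit`.

```
# Hedged sketch: toy model/data; assumes the new option is passed via the
# same `options` dict as `batch_limit`.
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
acqf = ExpectedImprovement(model, best_f=train_Y.max())

candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0] * 3, [1.0] * 3], dtype=torch.double),
    q=1,
    num_restarts=20,
    raw_samples=128,
    options={
        # Max batch evaluated at once (with gradients); guards against OOMs.
        "batch_limit": 10,
        # Max number of acqf optimizations folded into one optimization
        # problem; only relevant for non-parallel optimizers.
        "max_optimization_problem_aggregation_size": 5,
    },
)
```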
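For intuition about where the speedup comes from, here is a self-contained SciPy sketch contrasting the two regimes. This is purely illustrative, not the BoTorch implementation; `neg_acqf` and all shapes are made up.

```
# Illustration only -- not the BoTorch implementation. Toy objective
# standing in for a negated acquisition value.
import numpy as np
from scipy.optimize import minimize

def neg_acqf(x):
    return float(np.sum((x - 0.3) ** 2))

x0_batch = np.random.rand(4, 3)  # 4 restarts, 3 dimensions

# Aggregated (old behavior): one L-BFGS-B state over the sum of all
# restarts; every iteration re-evaluates every restart until the slowest
# one converges.
res_agg = minimize(
    lambda x: sum(neg_acqf(xi) for xi in x.reshape(4, 3)),
    x0_batch.ravel(),
    method="L-BFGS-B",
    bounds=[(0.0, 1.0)] * 12,
)

# Batched (new behavior, conceptually): one L-BFGS-B state per restart,
# so a converged sub-problem stops early instead of being re-evaluated.
res_per_restart = [
    minimize(neg_acqf, x0, method="L-BFGS-B", bounds=[(0.0, 1.0)] * 3)
    for x0 in x0_batch
]
best = min(res_per_restart, key=lambda r: r.fun)
```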
1 parent 5a591f7 · commit 490bf8f

File tree

15 files changed, +490 −271 lines

botorch/acquisition/knowledge_gradient.py

Lines changed: 2 additions & 0 deletions
```
@@ -265,13 +265,15 @@ def evaluate(self, X: Tensor, bounds: Tensor, **kwargs: Any) -> Tensor:
             current_model=self.model,
             options={**kwargs.get("options", {}), **kwargs.get("scipy_options", {})},
         )
+        # initial_conditions shape: num_restarts x num_fantasies x n x q x d.

         _, values = gen_candidates_scipy(
             initial_conditions=initial_conditions,
             acquisition_function=value_function,
             lower_bounds=bounds[0],
             upper_bounds=bounds[1],
             options=kwargs.get("scipy_options"),
+            use_parallel_mode=False,
         )
         # get the maximizer for each batch
         values, _ = torch.max(values, dim=0)
```