
Commit a80b98f

jduerholt authored and facebook-github-bot committed
Fix docstring of FullyBayesianSingleTaskGP (#2894)
Summary:

## Motivation

The docstring of `FullyBayesianSingleTaskGP` is wrong (it looks like a copy-paste leftover). This PR fixes it.

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?

Yes.

Pull Request resolved: #2894

Test Plan: No tests needed; this only fixes docstrings.

Reviewed By: saitcakmak

Differential Revision: D77302373

Pulled By: hvarfner

fbshipit-source-id: 3100023293cf38f383523a2c280535527c70e6e9
1 parent c397305 commit a80b98f

File tree

1 file changed: +9 −7 lines

botorch/models/fully_bayesian.py

Lines changed: 9 additions & 7 deletions
@@ -224,18 +224,19 @@ def sample_concentrations(self, **tkwargs: Any) -> tuple[Tensor, Tensor]:
         The prior has a mean value of 1 for each concentration and is very
         concentrated around the mean.
         """
+        d = len(self.indices) if self.indices is not None else self.ard_num_dims
         c0 = pyro.sample(
             "c0",
             pyro.distributions.LogNormal(
-                torch.tensor([0.0] * self.ard_num_dims, **tkwargs),
-                torch.tensor([0.1**0.5] * self.ard_num_dims, **tkwargs),
+                torch.tensor([0.0] * d, **tkwargs),
+                torch.tensor([0.1**0.5] * d, **tkwargs),
             ),
         )
         c1 = pyro.sample(
             "c1",
             pyro.distributions.LogNormal(
-                torch.tensor([0.0] * self.ard_num_dims, **tkwargs),
-                torch.tensor([0.1**0.5] * self.ard_num_dims, **tkwargs),
+                torch.tensor([0.0] * d, **tkwargs),
+                torch.tensor([0.1**0.5] * d, **tkwargs),
             ),
         )
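The concentration prior sampled in the hunk above is a per-dimension LogNormal(0, sqrt(0.1)), which the docstring describes as having mean about 1 and being tightly concentrated. A minimal stdlib-only sketch (not BoTorch/Pyro code; `mu` and `sigma` here just mirror the constants in the diff) checks this against the analytic lognormal mean exp(mu + sigma^2 / 2):

```python
import math
import random

# Same constants as in the diff: LogNormal(0.0, 0.1 ** 0.5) per dimension.
mu, sigma = 0.0, 0.1 ** 0.5

# Analytic mean of a lognormal: exp(mu + sigma^2 / 2) = exp(0.05) ~ 1.051.
analytic_mean = math.exp(mu + sigma ** 2 / 2)

# Empirical check via Monte Carlo sampling.
random.seed(0)
samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
empirical_mean = sum(samples) / len(samples)

print(round(analytic_mean, 3))  # → 1.051
```

So the prior does concentrate its mass near 1, which is why the `d`-vs-`ard_num_dims` fix changes only the number of sampled dimensions, not the shape of the prior.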
@@ -904,13 +905,14 @@ def construct_inputs(
 
 
 class FullyBayesianSingleTaskGP(AbstractFullyBayesianSingleTaskGP):
-    r"""A fully Bayesian single-task GP model with the SAAS prior.
+    r"""A fully Bayesian single-task GP model.
 
     This model assumes that the inputs have been normalized to [0, 1]^d and that
     the output has been standardized to have zero mean and unit variance. You can
     either normalize and standardize the data before constructing the model or use
-    an `input_transform` and `outcome_transform`. A dimension-scaled model
-    [Hvarfner2024vanilla]_ with a Matern-5/2 kernel is used by default.
+    an `input_transform` and `outcome_transform`. A model with a Matern-5/2 kernel
+    and dimension-scaled priors on the hyperparameters from [Hvarfner2024vanilla]_
+    is used by default.
 
     You are expected to use `fit_fully_bayesian_model_nuts` to fit this model as it
     isn't compatible with `fit_gpytorch_mll`.
