Releases: cics-nd/gptorch

0.3.2

04 Nov 04:41

gptorch 0.3

Change log

0.3.0

Changes breaking backward compatibility:

  • GPR, VFE, SVGP: the training-data argument order in model __init__()s
    changes from (y, x) to (x, y).
  • .predict() functions return the same type as the inputs provided
    (numpy.ndarray->numpy.ndarray, torch.Tensor->torch.Tensor)
  • Remove util.as_variable()
  • Remove util.tensor_type()
  • Remove util.KL_Gaussian()
  • Remove util.gammaln()
  • GPModel method .loss() generally replaces .compute_loss().
  • Models' .compute_loss() methods are generally renamed to .log_likelihood(),
    with their signs flipped to reflect that the loss is generally the negative
    log-likelihood.
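
The constructor-argument and loss-method changes above can be sketched with a toy stand-in. The class, its name, and its likelihood value are illustrative only, not gptorch internals; only the API shape (data passed as (x, y), and .loss() as the negation of .log_likelihood()) mirrors the changelog:

```python
import numpy as np

# Toy stand-in for the 0.3.0 API shape: training data is passed as
# (x, y), and .loss() is the negative of .log_likelihood(). The
# likelihood computation is a placeholder, not a real GP marginal.
class ToyGPModel:
    def __init__(self, x, y):  # 0.3.0: inputs first, targets second
        self.x = np.asarray(x)
        self.y = np.asarray(y)

    def log_likelihood(self):
        # Placeholder standing in for the marginal log-likelihood
        return -0.5 * float(np.sum((self.y - self.x) ** 2))

    def loss(self):
        # The loss is the negative log-likelihood (replaces compute_loss)
        return -self.log_likelihood()

model = ToyGPModel([0.0, 1.0], [0.1, 0.9])
assert model.loss() == -model.log_likelihood()
```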

Changes not breaking backward compatibility:

  • GPR, VFE: Allow specifying training set on .compute_loss() with x, y kwargs
  • GPR, VFE: Allow specifying training inputs on ._predict() with x kwarg
  • GPU support via .cuda()
  • Remove GPModel.evaluate()
  • Don't print inducing inputs on sparse GP initialization
  • Support for priors in gptorch.model.Models
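
The type-preserving .predict() convention introduced in 0.3.0 can be sketched as a thin wrapper. This is a hypothetical helper, not gptorch's actual implementation:

```python
import numpy as np

# Hypothetical wrapper illustrating the 0.3.0 .predict() convention:
# numpy.ndarray in -> numpy.ndarray out; anything else (e.g. a
# torch.Tensor) passes through in the backend's native type.
def predict_like_input(raw_predict, x):
    if isinstance(x, np.ndarray):
        # Round-trip through the model's native computation
        return np.asarray(raw_predict(x))
    return raw_predict(x)

mean = predict_like_input(lambda t: 2.0 * t, np.array([1.0, 2.0]))
assert isinstance(mean, np.ndarray)
```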

0.3.1

  • Fix remaining call sites where .compute_loss() had not been replaced, which broke GPModel.optimize().

0.3.2

  • Fix issue #20: installing gptorch on top of pip-installed PyTorch builds with non-standard device configurations.
  • Fix issue #22: importing gptorch changed PyTorch's default dtype from single to double precision.
  • Added gptorch.__version__
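
The __version__ attribute added in 0.3.2 enables simple version checks in downstream code. The sketch below uses a stand-in value rather than importing gptorch:

```python
# Stand-in for gptorch.__version__ (added in 0.3.2); in practice:
#   import gptorch; gptorch.__version__
version = "0.3.2"  # illustrative value

# Downstream code can gate behavior on the installed version
major, minor, patch = (int(p) for p in version.split("."))
has_version_attr = (major, minor, patch) >= (0, 3, 2)
```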


0.3.1

11 Jan 04:29
a80296c


0.3.0

24 Dec 18:01


0.2.3

03 Nov 16:01

gptorch 0.2

Change log

0.2.1

  • Add missing .predict_f_samples() method to GPModel
  • Add missing diag kwarg to GPModel.predict_f().
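
A diag kwarg on a GP predict method conventionally selects per-point marginal variances instead of the full predictive covariance. A minimal sketch of that convention (illustrative helper, not gptorch's implementation):

```python
import numpy as np

# Conventional meaning of a diag kwarg on GP prediction: diag=True
# returns only the per-point variances, i.e. the diagonal of the full
# predictive covariance matrix.
def predictive_cov(K, diag=False):
    K = np.asarray(K)
    return np.diag(K) if diag else K

K = np.array([[2.0, 0.5], [0.5, 3.0]])
variances = predictive_cov(K, diag=True)
```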

0.2.2

  • Remove instances of torch.set_default_dtype() from codebase
  • Add CircleCI and CodeCov.io

0.2.3

  • Fix gradient-shunting behavior caused by torch.clamp() used in util.squared_distance()
  • Replace deprecated uses of as_variable() in densities.py with as_tensor()
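
The torch.clamp() issue fixed in 0.2.3 stems from clamping making the function constant below the floor, so its derivative there is exactly zero and no gradient flows back. A torch-free finite-difference illustration (the floor and distance values are chosen for the example):

```python
# Why clamping a squared distance can "shunt" gradients: below the
# clamp floor the clamped value is constant, so its derivative is
# exactly zero. Demonstrated with a finite difference.
def clamped_sqdist(d2, floor=1e-12):
    return max(d2, floor)

d2 = 1e-13   # a squared distance below the floor
eps = 1e-15
grad = (clamped_sqdist(d2 + eps) - clamped_sqdist(d2)) / eps
# grad == 0.0: the gradient is lost below the floor
```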


0.2.2

12 Oct 23:22
2486e37

0.2.1

22 Jul 00:11
0fc5c94

0.2.0

02 Jul 05:27
98a902d

Version 0.2.0

Change log

  • Support for PyTorch 1.0 through the current release.
  • Sparse GP inducing inputs can be initialized at the centers of k-means clusters.
  • Suppress warnings by default when jitter is required for linear algebra operations.
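
The k-means initialization of inducing inputs can be sketched with plain Lloyd iterations. This is a hedged illustration with a deterministic spread of initial centers; gptorch's actual routine may differ in initialization and convergence criteria:

```python
import numpy as np

# Sketch: initialize sparse-GP inducing inputs at k-means cluster
# centers. Plain Lloyd iterations; not gptorch's implementation.
def kmeans_inducing_inputs(x, m, iters=20):
    x = np.asarray(x, dtype=float)
    # Deterministic initialization: m points spread across the data
    idx = np.linspace(0, len(x) - 1, m).astype(int)
    centers = x[idx].copy()
    for _ in range(iters):
        # Assign each point to its nearest center
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Move each center to the mean of its assigned points
        for j in range(m):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers

# Two tight clusters at 0 and 1 -> inducing inputs land at the clusters
x = np.vstack([np.zeros((10, 2)), np.ones((10, 2))])
z = kmeans_inducing_inputs(x, 2)
```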


v0.1.0

16 Mar 19:27
a52f373

Version 0.1.0

Initial release.
For use with Python 2 or 3, with PyTorch 0.3.1.

Features

  • Models supported:
    • Vanilla GP regression
    • Sparse GP regression with variational inference of inducing points.
  • Kernels supported:
    • White
    • Constant
    • RBF
    • Matern 5/2
    • Matern 3/2
    • Exponential (Matern 1/2)
    • Linear
    • Sum kernels
    • Product kernels
  • Mean functions supported: zero only
  • Training: optimization using a variety of PyTorch and SciPy optimizers
  • Tests in place for most kernels, likelihoods, and densities.
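
For reference, the RBF kernel from the list above can be sketched in plain numpy. The lengthscale/variance parameterization shown is the common one and may differ from gptorch's:

```python
import numpy as np

# Minimal RBF (squared-exponential) kernel, one of the kernels listed
# above: k(x1, x2) = variance * exp(-||x1 - x2||^2 / (2 * lengthscale^2)).
def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    d2 = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Sum and product kernels compose pointwise, e.g. K1 + K2 or K1 * K2
K = rbf(np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]]))
```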
