
Commit 795cb7e

Turing 0.37 (#589)

* Update to Turing 0.37
* Update .~ syntax
* Cache _freeze and .julia even if build fails (#591)
* Update to Turing 0.37
* Fixes for Turing 0.37 compatibility

1 parent 8bc63e4

File tree

10 files changed: +375 −372 lines

Manifest.toml (+339 −353)

Large diffs are not rendered by default.

Project.toml (+1 −1)

@@ -53,4 +53,4 @@ UnPack = "3a884ed6-31ef-47d7-9d2a-63182c4928ed"
 Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

 [compat]
-Turing = "0.36.2"
+Turing = "0.37"
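The compat bump above can be reproduced locally with Pkg; a sketch, assuming Julia ≥ 1.8 (where `Pkg.compat` is available) and a checkout of the docs repository:

```julia
using Pkg

Pkg.activate(".")              # activate the docs project in the repo root
Pkg.compat("Turing", "0.37")   # rewrite the [compat] entry in Project.toml
Pkg.update("Turing")           # re-resolve and update Manifest.toml
```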

_quarto.yml (+1 −1)

@@ -32,7 +32,7 @@ website:
 text: Team
 right:
 # Current version
-- text: "v0.36"
+- text: "v0.37"
 menu:
 - text: Changelog
 href: https://turinglang.org/docs/changelog.html

core-functionality/index.qmd (+8 −2)

@@ -475,7 +475,10 @@ Example usage:
     k = length(unique(g))
     a ~ filldist(Exponential(), k) # = Product(fill(Exponential(), k))
     mu = a[g]
-    return x .~ Normal.(mu)
+    for i in eachindex(x)
+        x[i] ~ Normal(mu[i])
+    end
+    return mu
 end
 ```

@@ -491,7 +494,10 @@ Example usage:
     k = length(unique(g))
     a ~ arraydist([Exponential(i) for i in 1:k])
     mu = a[g]
-    return x .~ Normal.(mu)
+    for i in eachindex(x)
+        x[i] ~ Normal(mu[i])
+    end
+    return mu
 end
 ```
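A consolidated sketch of the pattern introduced in this file, assuming Turing 0.37 (the model name `demo_loop` is illustrative): broadcasting `x .~ Normal.(mu)` against a vector of distributions is no longer supported, so each element is observed in an explicit loop.

```julia
using Turing

@model function demo_loop(x, g)
    k = length(unique(g))
    a ~ filldist(Exponential(), k)
    mu = a[g]
    # One observe statement per element replaces `x .~ Normal.(mu)`.
    for i in eachindex(x)
        x[i] ~ Normal(mu[i])
    end
    return mu
end
```

The single-distribution form `x .~ Normal(m, σ)`, where one distribution is broadcast over the whole array, remains available in 0.37.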

developers/compiler/model-manual/index.qmd (+14 −4)

@@ -16,7 +16,9 @@ using Turing
     m ~ Normal(0, sqrt(s²))

     # Observe each value of x.
-    @. x ~ Normal(m, sqrt(s²))
+    x .~ Normal(m, sqrt(s²))
+
+    return nothing
 end

 model = gdemo([1.5, 2.0])

@@ -28,6 +30,8 @@ However, models can be constructed by hand without the use of a macro.
 Taking the `gdemo` model above as an example, the macro-based definition can be implemented also (a bit less generally) with the macro-free version

 ```{julia}
+using DynamicPPL
+
 # Create the model function.
 function gdemo2(model, varinfo, context, x)
     # Assume s² has an InverseGamma distribution.

@@ -41,9 +45,15 @@ function gdemo2(model, varinfo, context, x)
     )

     # Observe each value of x[i] according to a Normal distribution.
-    return DynamicPPL.dot_tilde_observe!!(
-        context, Normal(m, sqrt(s²)), x, Turing.@varname(x), varinfo
-    )
+    for i in eachindex(x)
+        _retval, varinfo = DynamicPPL.tilde_observe!!(
+            context, Normal(m, sqrt(s²)), x[i], Turing.@varname(x[i]), varinfo
+        )
+    end
+
+    # The final return statement should comprise both the original return
+    # value and the updated varinfo.
+    return nothing, varinfo
 end
 gdemo2(x) = Turing.Model(gdemo2, (; x))
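A minimal usage sketch for the macro-free model defined in this file, assuming the `gdemo2` definition from the diff has been evaluated (the sampler and draw count are illustrative):

```julia
# Construct the hand-written model and sample it like any macro-based model.
model = gdemo2([1.5, 2.0])
chain = sample(model, NUTS(), 1000)
```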

tutorials/gaussian-mixture-models/index.qmd (+6 −4)

@@ -167,10 +167,12 @@ One solution here is to enforce an ordering on our $\mu$ vector, requiring $\mu_
 `Bijectors.jl` [provides](https://turinglang.org/Bijectors.jl/dev/transforms/#Bijectors.OrderedBijector) an easy transformation (`ordered()`) for this purpose:

 ```{julia}
+using Bijectors: ordered
+
 @model function gaussian_mixture_model_ordered(x)
     # Draw the parameters for each of the K=2 clusters from a standard normal distribution.
     K = 2
-    μ ~ Bijectors.ordered(MvNormal(Zeros(K), I))
+    μ ~ ordered(MvNormal(Zeros(K), I))
     # Draw the weights for the K clusters from a Dirichlet distribution with parameters αₖ = 1.
     w ~ Dirichlet(K, 1.0)
     # Alternatively, one could use a fixed set of weights.

@@ -285,7 +287,7 @@ using LogExpFunctions
 @model function gmm_marginalized(x)
     K = 2
     D, N = size(x)
-    μ ~ Bijectors.ordered(MvNormal(Zeros(K), I))
+    μ ~ ordered(MvNormal(Zeros(K), I))
     w ~ Dirichlet(K, 1.0)
     dists = [MvNormal(Fill(μₖ, D), I) for μₖ in μ]
     for i in 1:N

@@ -325,7 +327,7 @@ The `logpdf` implementation for a `MixtureModel` distribution is exactly the mar
 @model function gmm_marginalized(x)
     K = 2
     D, _ = size(x)
-    μ ~ Bijectors.ordered(MvNormal(Zeros(K), I))
+    μ ~ ordered(MvNormal(Zeros(K), I))
     w ~ Dirichlet(K, 1.0)
     x ~ MixtureModel([MvNormal(Fill(μₖ, D), I) for μₖ in μ], w)
 end

@@ -381,7 +383,7 @@ end
 @model function gmm_recover(x)
     K = 2
     D, N = size(x)
-    μ ~ Bijectors.ordered(MvNormal(Zeros(K), I))
+    μ ~ ordered(MvNormal(Zeros(K), I))
     w ~ Dirichlet(K, 1.0)
     dists = [MvNormal(Fill(μₖ, D), I) for μₖ in μ]
     x ~ MixtureModel(dists, w)
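The switch to an explicit `using Bijectors: ordered` import does not change behaviour. A standalone sketch of what `ordered` provides, assuming Bijectors.jl, Distributions.jl, and FillArrays.jl are installed:

```julia
using Bijectors: ordered
using Distributions, LinearAlgebra
using FillArrays: Zeros

# `ordered` transforms a multivariate distribution so each draw is sorted,
# which breaks the label-switching symmetry between mixture components.
d = ordered(MvNormal(Zeros(2), I))
μ = rand(d)
issorted(μ)   # holds by construction
```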

tutorials/hidden-markov-models/index.qmd (+1 −1)

@@ -96,7 +96,7 @@ The priors on our transition matrix are noninformative, using `T[i] ~ Dirichlet(
     N = length(y)

     # State sequence.
-    s = tzeros(Int, N)
+    s = zeros(Int, N)

     # Emission matrix.
     m = Vector(undef, K)

tutorials/infinite-mixture-models/index.qmd (+2 −2)

@@ -188,10 +188,10 @@ In Turing we can implement an infinite Gaussian mixture model using the Chinese
     H = Normal(μ0, σ0)

     # Latent assignment.
-    z = tzeros(Int, length(x))
+    z = zeros(Int, length(x))

     # Locations of the infinitely many clusters.
-    μ = tzeros(Float64, 0)
+    μ = zeros(Float64, 0)

     for i in 1:length(x)

tutorials/variational-inference/index.qmd (+1 −0)

@@ -36,6 +36,7 @@ We first import the packages to be used:
 using Random
 using Turing
 using Turing: Variational
+using Bijectors: bijector
 using StatsPlots, Measures

 Random.seed!(42);

usage/probability-interface/index.qmd (+2 −4)

@@ -18,7 +18,7 @@ Let's use a simple model of normally-distributed data as an example.

 ```{julia}
 using Turing
-using LinearAlgebra: I
+using DynamicPPL
 using Random

 @model function gdemo(n)

@@ -98,11 +98,9 @@ logjoint(model, sample)
 ```

 For models with many variables `rand(model)` can be prohibitively slow since it returns a `NamedTuple` of samples from the prior distribution of the unconditioned variables.
-We recommend working with samples of type `DataStructures.OrderedDict` in this case:
+We recommend working with samples of type `DataStructures.OrderedDict` in this case (which Turing re-exports, so can be used directly):

 ```{julia}
-using DataStructures: OrderedDict
-
 Random.seed!(124)
 sample_dict = rand(OrderedDict, model)
 ```
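A short sketch of the recommended `OrderedDict` workflow, assuming Turing re-exports `OrderedDict` as the updated text states (the model itself is illustrative):

```julia
using Turing, Random

@model function coinflip(n)
    p ~ Beta(1, 1)
    y ~ filldist(Bernoulli(p), n)
end

Random.seed!(124)
model = coinflip(10)
# Keys are VarNames in model order; values are draws from the prior.
sample_dict = rand(OrderedDict, model)
```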

0 commit comments