Reported via Julia Slack: https://julialang.slack.com/archives/CCYDC34A0/p1738281740089319
```julia
using DynamicPPL, Distributions

@model function inner()
    x ~ Normal()
end

# New syntax: submodels via `to_submodel` with manual prefixing.
@model function outer1(nms)
    for i in 1:2
        a ~ to_submodel(prefix(inner(), nms[i]), false)
    end
end
m1 = outer1(["1", "2"])

# Old syntax: the `@submodel` macro.
@model function outer2(nms)
    for i in 1:2
        @submodel prefix = nms[i] inner()
    end
end
m2 = outer2(["1", "2"])

# Compare inference for the two versions:
@code_warntype m1.f(m1, VarInfo(m1), SamplingContext(), m1.args...)
@code_warntype m2.f(m2, VarInfo(m2), SamplingContext(), m2.args...)
@code_warntype DynamicPPL.evaluate!!(m1, VarInfo(m1), SamplingContext())
@code_warntype DynamicPPL.evaluate!!(m2, VarInfo(m2), SamplingContext())
```
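For completeness, the instability can also be checked programmatically rather than by reading the `@code_warntype` output. Below is a minimal sketch, assuming the definitions above; the `v1`/`v2` bindings and the expectation that the `to_submodel` version is the non-inferable one are my assumptions, not output from the original report:

```julia
# Sketch: programmatic inference check, assuming m1/m2 are defined as above.
using Test

v1 = VarInfo(m1)
v2 = VarInfo(m2)

# `Base.return_types` gives the inferred return type for each matching method;
# a non-concrete or Union result corresponds to the red entries in @code_warntype.
Base.return_types(m1.f, typeof.((m1, v1, SamplingContext(), m1.args...)))
Base.return_types(m2.f, typeof.((m2, v2, SamplingContext(), m2.args...)))

# `Test.@inferred` throws if the call's return type is not concretely inferred.
@inferred DynamicPPL.evaluate!!(m2, v2, SamplingContext())  # @submodel: presumably passes
@inferred DynamicPPL.evaluate!!(m1, v1, SamplingContext())  # to_submodel: presumably throws
```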
There is also a fairly small (~10%) performance regression for the `to_submodel` version relative to `@submodel` (in the benchmarks below, `v1 = VarInfo(m1)` and `v2 = VarInfo(m2)`):
```julia
julia> using Chairmarks

julia> @be m1.f(m1, v1, SamplingContext(), m1.args...) # to_submodel
Benchmark: 3273 samples with 9 evaluations
 min    2.759 μs (42 allocs: 2.812 KiB)
 median 2.866 μs (42 allocs: 2.812 KiB)
 mean   3.158 μs (42 allocs: 2.812 KiB, 0.03% gc time)
 max    805.736 μs (42 allocs: 2.812 KiB, 98.86% gc time)

julia> @be m2.f(m2, v2, SamplingContext(), m2.args...) # @submodel
Benchmark: 7396 samples with 4 evaluations
 min    2.510 μs (41 allocs: 2.656 KiB)
 median 2.615 μs (41 allocs: 2.656 KiB)
 mean   2.844 μs (41 allocs: 2.656 KiB, 0.01% gc time)
 max    1.465 ms (41 allocs: 2.656 KiB, 98.93% gc time)

julia> @be DynamicPPL.evaluate!!(m1, v1, SamplingContext()) # to_submodel
Benchmark: 3117 samples with 10 evaluations
 min    2.633 μs (41 allocs: 2.750 KiB)
 median 2.750 μs (41 allocs: 2.750 KiB)
 mean   2.982 μs (41 allocs: 2.750 KiB, 0.03% gc time)
 max    636.521 μs (41 allocs: 2.750 KiB, 98.77% gc time)

julia> @be DynamicPPL.evaluate!!(m2, v2, SamplingContext()) # @submodel
Benchmark: 2848 samples with 12 evaluations
 min    2.392 μs (40 allocs: 2.594 KiB)
 median 2.512 μs (40 allocs: 2.594 KiB)
 mean   2.723 μs (40 allocs: 2.594 KiB, 0.03% gc time)
 max    540.396 μs (40 allocs: 2.594 KiB, 98.44% gc time)
```
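For anyone wanting to reproduce the comparison end to end, here is a self-contained sketch of the benchmark setup. The `v1`/`v2` bindings are inferred from the calls above, and `minimum(::Benchmark).time` is used as Chairmarks' sample reduction; absolute numbers are machine-dependent:

```julia
# Sketch: end-to-end reproduction of the evaluate!! comparison, assuming the model
# definitions from the first snippet.
using Chairmarks, DynamicPPL, Distributions

v1 = VarInfo(m1)
v2 = VarInfo(m2)

b1 = @be DynamicPPL.evaluate!!(m1, v1, SamplingContext())  # to_submodel
b2 = @be DynamicPPL.evaluate!!(m2, v2, SamplingContext())  # @submodel

# Reduce each Benchmark to its fastest sample and compare times (in seconds).
slowdown = minimum(b1).time / minimum(b2).time
println("to_submodel / @submodel (min time): ", round(slowdown; digits=2))
```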