
Commit 14e044d

re-execute blackbox numpy notebook (#496)
* re-execute blackbox numpy notebook
* simplify and fix final comparison
1 parent cccc98e commit 14e044d

File tree

2 files changed: +128 −233 lines changed


examples/case_studies/blackbox_external_likelihood_numpy.ipynb (+120 −202)
Large diffs are not rendered by default.

examples/case_studies/blackbox_external_likelihood_numpy.myst.md (+8 −31)
@@ -368,7 +368,7 @@ with pm.Model() as pymodel:
     theta = pt.as_tensor_variable([m, c])
 
     # use a Normal distribution
-    pm.Normal("likelihood", mu=(m * x + c), sd=sigma, observed=data)
+    y = pm.Normal("likelihood", mu=(m * x + c), sigma=sigma, observed=data)
 
     idata = pm.sample()
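
For context on this one-line change: the likelihood now uses the `sigma` keyword of `pm.Normal` (the old `sd` alias is no longer accepted by recent PyMC versions) and binds the observed variable to a name. Below is a minimal, self-contained sketch of the same pattern; the toy data and parameter values are illustrative and not taken from the notebook.

```python
import numpy as np
import pymc as pm

# illustrative straight-line data: y = m*x + c plus Gaussian noise
rng = np.random.default_rng(716743)
x = np.linspace(0.0, 9.0, 50)
sigma = 1.0
data = 0.4 * x + 3.0 + rng.normal(scale=sigma, size=x.size)

with pm.Model() as model:
    m = pm.Uniform("m", lower=-10.0, upper=10.0)
    c = pm.Uniform("c", lower=-10.0, upper=10.0)
    # `sigma=` is the current keyword for the scale; the removed `sd=` alias fails on PyMC 4/5
    y = pm.Normal("likelihood", mu=m * x + c, sigma=sigma, observed=data)
```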
@@ -412,38 +412,14 @@ az.plot_pair(idata, **pair_kwargs, ax=ax);
 We can now check that the gradient Op works as expected. First, just create and call the `LogLikeGrad` class, which should return the gradient directly (note that we have to create a [PyTensor function](http://deeplearning.net/software/pytensor/library/compile/function.html) to convert the output of the Op to an array). Secondly, we call the gradient from `LogLikeWithGrad` by using the [PyTensor tensor gradient](http://deeplearning.net/software/pytensor/library/gradient.html#pytensor.gradient.grad) function. Finally, we will check the gradient returned by the PyMC model for a Normal distribution, which should be the same as the log-likelihood function we defined. In all cases we evaluate the gradients at the true values of the model function (the straight line) that was created.
 
 ```{code-cell} ipython3
-# test the gradient Op by direct call
-pytensor.config.compute_test_value = "ignore"
-pytensor.config.exception_verbosity = "high"
+ip = pymodel.initial_point()
+print(f"Evaluating dlogp of model at point\n {ip}")
 
-var = pt.dvector()
-test_grad_op = LogLikeGrad(data, x, sigma)
-test_grad_op_func = pytensor.function([var], test_grad_op(var))
-grad_vals = test_grad_op_func([mtrue, ctrue])
+grad_vals_custom = opmodel.compile_dlogp()(ip)
+grad_vals_pymc = pymodel.compile_dlogp()(ip)
 
-print(f'Gradient returned by "LogLikeGrad": {grad_vals}')
-
-# test the gradient called through LogLikeWithGrad
-test_gradded_op = LogLikeWithGrad(my_loglike, data, x, sigma)
-test_gradded_op_grad = pt.grad(test_gradded_op(var), var)
-test_gradded_op_grad_func = pytensor.function([var], test_gradded_op_grad)
-grad_vals_2 = test_gradded_op_grad_func([mtrue, ctrue])
-
-print(f'Gradient returned by "LogLikeWithGrad": {grad_vals_2}')
-
-# test the gradient that PyMC uses for the Normal log likelihood
-test_model = pm.Model()
-with test_model:
-    m = pm.Uniform("m", lower=-10.0, upper=10.0)
-    c = pm.Uniform("c", lower=-10.0, upper=10.0)
-
-    pm.Normal("likelihood", mu=(m * x + c), sigma=sigma, observed=data)
-
-gradfunc = test_model.logp_dlogp_function([m, c], dtype=None)
-gradfunc.set_extra_values({"m_interval__": mtrue, "c_interval__": ctrue})
-grad_vals_pymc = gradfunc(np.array([mtrue, ctrue]))[1]  # get dlogp values
-
-print(f'Gradient returned by PyMC "Normal" distribution: {grad_vals_pymc}')
+print(f'\nGradient of model using a custom "LogLikeWithGrad":\n {grad_vals_custom}')
+print(f'Gradient of model using a PyMC "Normal" distribution:\n {grad_vals_pymc}')
 ```
 
 We could also do some profiling to compare performance between implementations. The {ref}`profiling` notebook shows how to do it.
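
The simplified comparison above relies on two `pm.Model` methods: `initial_point()`, which returns a dictionary of starting values for the free (transformed) variables, and `compile_dlogp()`, which compiles a function that evaluates the gradient of the model log-probability at such a point. A minimal sketch of these calls on a standalone toy model is shown below; the notebook's `opmodel` (the black-box Op version) and `pymodel` are not reproduced here, and the data values are illustrative.

```python
import numpy as np
import pymc as pm

# illustrative straight-line data, standing in for the notebook's `data`, `x` and `sigma`
rng = np.random.default_rng(716743)
x = np.linspace(0.0, 9.0, 50)
sigma = 1.0
data = 0.4 * x + 3.0 + rng.normal(scale=sigma, size=x.size)

with pm.Model() as model:
    m = pm.Uniform("m", lower=-10.0, upper=10.0)
    c = pm.Uniform("c", lower=-10.0, upper=10.0)
    pm.Normal("likelihood", mu=m * x + c, sigma=sigma, observed=data)

# dict of starting values for the free variables in transformed space,
# e.g. {"m_interval__": ..., "c_interval__": ...}
ip = model.initial_point()

# compiled function returning the gradient of logp with respect to the free variables
dlogp = model.compile_dlogp()
print(ip)
print(dlogp(ip))
```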
@@ -454,6 +430,7 @@ We could also do some profiling to compare performance between implementations.
 
 * Adapted from [Jørgen Midtbø](https://github.com/jorgenem/)'s [example](https://discourse.pymc.io/t/connecting-pymc-to-external-code-help-with-understanding-pytensor-custom-ops/670) by Matt Pitkin both as a [blogpost](http://mattpitkin.github.io/samplers-demo/pages/pymc-blackbox-likelihood/) and as an example notebook to this gallery in August, 2018 ([pymc#3169](https://github.com/pymc-devs/pymc/pull/3169) and [pymc#3177](https://github.com/pymc-devs/pymc/pull/3177))
 * Updated by [Oriol Abril](https://github.com/OriolAbril) on December 2021 to drop the Cython dependency from the original notebook and use numpy instead ([pymc-examples#28](https://github.com/pymc-devs/pymc-examples/pull/28))
+* Re-executed by Oriol Abril with pymc 5.0.0 ([pymc-examples#496](https://github.com/pymc-devs/pymc-examples/pull/496))
 
 +++
0 commit comments