BUG: HurdleGamma results in large number of divergences, even under the correct model #7630
Comments
I suspect some instability in the truncation logp or its gradient.
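One way to probe that suspicion is to evaluate the gradient of the hurdle logp as the value approaches the truncation point. A minimal sketch, assuming the current `pm.HurdleGamma` / `pm.logp` / `pytensor` APIs (the parameter values and the epsilon grid are illustrative choices, not taken from the report):

```python
import pytensor
import pytensor.tensor as pt
import pymc as pm

# Build a standalone HurdleGamma variable (illustrative parameters)
y = pm.HurdleGamma.dist(psi=0.7, alpha=2.0, beta=1.0)

# Compile d(logp)/d(value) and evaluate it on a grid approaching zero,
# looking for the blow-ups that would explain the divergences
value = pt.scalar("value")
dlogp = pytensor.function([value], pytensor.grad(pm.logp(y, value), value))
for v in [1.0, 1e-2, 1e-4, 1e-8]:
    print(v, dlogp(v))
```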
Let me know if this isn't helpful. I've been reading Some Mixture Modeling Basics by Michael Betancourt, which has informed the contents of this comment. Hurdle models are currently implemented as a mixture of a Dirac delta and the chosen density. As it stands, the DiracDelta logp is 0 at the point mass and -inf everywhere else:

```python
pm.DiracDelta.logp(value=0.0, c=0.0).eval()
# array(0., dtype=float32)
pm.DiracDelta.logp(value=1.0, c=0.0).eval()
# array(-inf, dtype=float32)
```

If there was a continuous analog of DiracDelta […]. Would using […]?
I believe this can be considered closed by #7810.
The example provided above seems to run. Closing.
Describe the issue:
A simple HurdleGamma model experiences a very high number of divergences, even when priors are tightly centered around the true values and the data-generating process is correct. Some chains get "stuck": they do not move from their initialized values.
For more, please see this thread in the PyMC community forums.
Reproducible code example:
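(The original snippet did not survive in this copy of the issue. Below is a hypothetical sketch of the kind of model described above, with assumed true values psi=0.7, alpha=2.0, beta=1.0 and tight priors centered on them; it is not the reporter's original code.)

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)

# Assumed true values, for illustration only
psi_true, alpha_true, beta_true = 0.7, 2.0, 1.0
n = 500

# Hurdle process: zero with probability 1 - psi, a Gamma draw otherwise
nonzero = rng.random(n) < psi_true
y = np.where(nonzero, rng.gamma(alpha_true, 1.0 / beta_true, size=n), 0.0)

with pm.Model():
    psi = pm.Beta("psi", alpha=7.0, beta=3.0)  # mean 0.7
    alpha = pm.TruncatedNormal("alpha", mu=alpha_true, sigma=0.25, lower=0.0)
    beta = pm.TruncatedNormal("beta", mu=beta_true, sigma=0.25, lower=0.0)
    pm.HurdleGamma("y", psi=psi, alpha=alpha, beta=beta, observed=y)
    idata = pm.sample()  # reportedly yields many divergences on PyMC 5.19.1
```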
Error message:
No response
PyMC version information:
5.19.1
Context for the issue:
No response