Changed Categorical to work with multidim p at the logp level (#3383)
* Changed Categorical to work with multidim p at the logp level.
* Fixed problems with OrderedLogistic.
* Use np.moveaxis instead of transposing. Also added some more tests.
RELEASE-NOTES.md (+3)
@@ -23,6 +23,9 @@
- Fixed incorrect usage of `broadcast_distribution_samples` in `DiscreteWeibull`.
- `Mixture`'s default dtype is now determined by `theano.config.floatX`.
- `dist_math.random_choice` now handles nd-arrays of category probabilities, and also handles sizes that are not `None`. Also removed unused `k` kwarg from `dist_math.random_choice`.
+ - Changed `Categorical.mode` to preserve all the dimensions of `p` except the last one, which encodes each category's probability.
+ - Changed initialization of `Categorical.p`. `p` is now normalized to sum to `1` inside `logp` and `random`, but not during initialization. Normalizing during initialization could hide negative values supplied to `p`, as mentioned in #2082.
+ - To be able to test for negative `p` values supplied to `Categorical`, `Categorical.logp` was changed to check that `sum(self.p, axis=-1) == 1` only if `self.p` is not a `Number`, `np.ndarray`, `TensorConstant` or `SharedVariable`. These cases are automatically normalized to sum to `1`. The other case can originate from a `step_method` proposal, where the `self.p` tensor's value may be set directly but must nevertheless sum to `1`. This may break old code that initialized `p` with a theano expression and relied on the default normalization to make it sum to `1`. `Categorical.logp` now also checks that the used `p` has values lower than `1`.
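Taken together, the three added bullets describe how `Categorical` now treats a multidimensional `p`: the last axis always indexes the categories, normalization happens inside `logp`/`random`, and out-of-range probabilities are rejected rather than silently rescaled. The following is a minimal numpy sketch of that behaviour under stated assumptions, not the actual implementation: the real code operates on theano tensors inside `Categorical.logp` and `Categorical.mode` (and, per the commit message, uses `np.moveaxis` rather than the `np.take_along_axis` gather shown here); the function names below are hypothetical.

```python
import numpy as np

def categorical_logp(value, p):
    # Sketch only: assumes p has shape (..., k) and value has shape (...),
    # with integer categories 0..k-1 indexed along the last axis of p.
    p = np.asarray(p, dtype=float)
    value = np.asarray(value)
    # p is normalized along the last (category) axis inside logp rather than
    # at initialization, so unexpected inputs are not silently hidden.
    p_norm = p / p.sum(axis=-1, keepdims=True)
    # Pick each value's probability along the last axis of p.
    chosen = np.take_along_axis(p_norm, value[..., None], axis=-1)[..., 0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logp = np.log(chosen)
    # Bound check: every probability must lie in [0, 1]; rows containing
    # e.g. a negative p (as in #2082) get a log-probability of -inf.
    ok = (p_norm >= 0).all(axis=-1) & (p_norm <= 1).all(axis=-1)
    return np.where(ok, logp, -np.inf)

def categorical_mode(p):
    # The mode keeps every dimension of p except the last one,
    # which encodes the per-category probabilities.
    return np.argmax(np.asarray(p), axis=-1)
```

For example, with `p` of shape `(2, 3, 4)` and `value` of shape `(2, 3)`, both the returned log-probability and `categorical_mode(p)` have shape `(2, 3)`.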