Add FixedDistribution helper #6275
ricardoV94 started this conversation in Ideas
Some users on discourse have come up with models that rely on fixed distributions (i.e., RVs that should not be treated as standard variables that you want to infer):
https://discourse.pymc.io/t/custom-sampling-step/7243
https://discourse.pymc.io/t/two-stage-bayesian-regression-enforcing-a-fixed-distribution-not-just-hierarchical-regression/10036
https://discourse.pymc.io/t/using-posterior-as-likelihood/10742 (this one is less clear, but in DMs the user confirmed she wanted to do something similar)
In theory one could simply use `.dist`:
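For concreteness, a minimal sketch of that naive approach (the toy data and variable names here are illustrative, not from the original snippet):

```python
import numpy as np
import pymc as pm

data = np.random.randn(100)

with pm.Model() as m:
    # An unregistered RV built with .dist(), meant to act as a "fixed" distribution
    fixed_normal_dist = pm.Normal.dist(mu=0, sigma=1)
    # Using it as a parameter of a model variable places an RV inside the
    # logp graph, which PyMC rejects when the model logp is compiled
    y = pm.Normal("y", mu=fixed_normal_dist, sigma=1, observed=data)
```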
Right now we simply error out if we find RVs in the logp graph (except for SimulatorRVs). We could easily remove that check... but I don't think NUTS would appreciate that (@ColCarroll, @junpenglao, @aseyboldt?). Instead, what sounds safer is to use a different, but fixed, draw of `fixed_normal_dist` in every NUTS step. In that case, what I have been proposing is some hack like this:
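One possible shape of such a hack (a sketch under my own assumptions, not necessarily the exact version proposed): pre-draw a bank of values from the fixed distribution, then let a discrete step method pick an index at each iteration, so the draw changes between MCMC steps but stays constant within any single NUTS trajectory.

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

data = np.random.randn(100)
n_bank = 1_000

# Materialize a bank of fixed draws outside the model graph
fixed_normal_dist = pm.Normal.dist(mu=0, sigma=1)
draw_bank = pm.draw(fixed_normal_dist, draws=n_bank, random_seed=42)

with pm.Model() as m:
    # A uniform index, handled by a discrete step method rather than NUTS;
    # it selects a different (but fixed) draw at each iteration
    idx = pm.Categorical("idx", p=np.full(n_bank, 1 / n_bank))
    fixed_normal = pm.Deterministic("fixed_normal", pt.as_tensor(draw_bank)[idx])
    y = pm.Normal("y", mu=fixed_normal, sigma=1, observed=data)
```

Because `idx` carries a flat prior over the bank and the draws themselves are constants, the fixed distribution contributes no density of its own to the model here.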
Does it make sense to add a helper `pm.FixedDist(name, rv)` that does something similar behind the scenes? It could obviously go into `pymc_experimental` first, but it's easier to bring up the discussion here.
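Usage of such a helper might look like this (`pm.FixedDist` does not exist yet; this is just the proposed signature applied to the toy model above):

```python
with pm.Model():
    # Hypothetical helper: registers fixed_normal in the model, but marks it
    # so that samplers forward-draw it instead of trying to infer it
    fixed_normal = pm.FixedDist("fixed_normal", pm.Normal.dist(mu=0, sigma=1))
    y = pm.Normal("y", mu=fixed_normal, sigma=1, observed=data)
```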
Count the logp of the fixed dist or not?

Also, it's still a bit fuzzy to me whether we always or never want to consider the FixedDistribution logp in the rest of the model, or whether in some cases we do and in others we don't... We could, of course, give the choice to the user...
If we wanted to count it in the example above, it would look like:
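Continuing the hack sketch above, a minimal way to count the logp would be an extra `pm.Potential` term (again illustrative, not the original snippet):

```python
with m:  # the model from the hack sketch above
    # Count the fixed draw's own logp toward the model's joint density
    pm.Potential("fixed_normal_logp", pm.logp(fixed_normal_dist, fixed_normal))
```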
1 reply

Another use case: https://discourse.pymc.io/t/can-pm-draw-be-used-inside-a-pymc-model-or-will-it-break-links-between-rvs-and-observed-likelihood-variable/12069