Refactored Bound for v4 #4815
Conversation
@kc611 How close do you think this is to merge? If there's still a ways to go, we might want to focus on the aeppl integration instead, where we can get mixture almost for free. CC @brandonwillard
I'd say we're halfway through over here. But if we're bringing in the aeppl integration... cc @ricardoV94
Codecov Report

    @@            Coverage Diff             @@
    ##              main    #4815      +/-   ##
    ============================================
    + Coverage    73.17%   73.61%    +0.43%
    ============================================
      Files           86       86
      Lines        13868    13840       -28
    ============================================
    + Hits         10148    10188       +40
    + Misses        3720     3652       -68
Force-pushed from 0dea2c0 to 4cb2948
pymc3/distributions/bound.py (outdated)
    lower = None
    if isinstance(upper, Real) and upper == np.inf:
        upper = None

    def __new__(cls, name, distribution, lower=None, upper=None, size=None, **kwargs):
If we are in a model context, we should check here that the distribution is not already in the model (which would mean the user did not use .dist(), and the original variable would be accounted for in the model logp as well).

In addition, we can check that it is a TensorVariable; if not, the user is probably trying to work with the old API. I see you already did this. The first error message can also go there.
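A minimal sketch of what those two checks could look like inside __new__; the error messages and the handling of a missing model context are assumptions for illustration, not the PR's actual code:

    import aesara.tensor as at
    from pymc3.model import modelcontext

    def __new__(cls, name, distribution, lower=None, upper=None, size=None, **kwargs):
        # Old-API guard: Bound now expects an unnamed variable created with
        # .dist(), i.e. a TensorVariable, not a class like pm.Normal.
        if not isinstance(distribution, at.TensorVariable):
            raise ValueError(
                "Pass an unnamed distribution created with .dist(), e.g. "
                "pm.Normal.dist(mu=0), instead of a distribution class."
            )
        try:
            model = modelcontext(None)
        except TypeError:  # no model on the context stack
            model = None
        # Double-counting guard: a variable already registered in the model
        # would also contribute to the model logp on its own.
        if model is not None and distribution in model.basic_RVs:
            raise ValueError(
                "The distribution passed to Bound is already registered in "
                "the model; create it with .dist() instead."
            )
        ...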
@@ -232,32 +133,70 @@ class Bound:

    .. code-block:: python

        with pm.Model():
            NegativeNormal = pm.Bound(pm.Normal, upper=0.0)
Do we not support this use-case anymore?
No, you have to pass a .dist(). There is a very informative error message, with a code example, if users try the old usage.

This was done to simplify the implementation and to be more in line with the rest of the API (e.g., how you specify distributions for Mixtures, LKJCorr priors, and random walks). Also, in the future, Truncated and Censored distributions would work the same way.

I checked the original PRs for Bound, and this was actually the initially intended API, but they couldn't make it work at the time. It was not possible before, but it came up a couple of times, e.g. #2277 (comment)
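As a hedged before/after illustration of the API change described here (the parameter values are made up):

    import pymc3 as pm

    with pm.Model():
        # v3 usage, no longer supported: passing the distribution class.
        # NegativeNormal = pm.Bound(pm.Normal, upper=0.0)
        # x = NegativeNormal("x", mu=1.0, sigma=3.0)

        # v4 usage: pass an unnamed distribution created with .dist().
        x = pm.Bound("x", pm.Normal.dist(mu=1.0, sigma=3.0), upper=0.0)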
Left some comments. We should wrap lower/upper with at.as_tensor_variable([floatX|intX]), probably in the .dist method of the private classes _[Discrete|Continuous]Bounded, so as not to interfere with cls.set_values.

Also, after #4867 that logic can probably be simplified, as we will be able to just return symbolic initvals on demand.
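A rough sketch of the suggested coercion, under the assumption that it lives in the .dist methods of the private bounded classes; the helper name _wrap_bounds is hypothetical:

    import aesara.tensor as at
    from pymc3.aesaraf import floatX, intX

    def _wrap_bounds(lower, upper, discrete=False):
        # Hypothetical helper: coerce bounds to tensors of the right dtype
        # before cls.set_values runs; intX covers the _DiscreteBounded case.
        cast = intX if discrete else floatX
        if lower is not None:
            lower = at.as_tensor_variable(cast(lower))
        if upper is not None:
            upper = at.as_tensor_variable(cast(upper))
        return lower, upper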
Force-pushed from 498172e to cb5d558
Thanks @kc611 and @ricardoV94!
This PR refactors the Bound distribution for v4.