Numerical differences between 3.11.4 and 4.0.0b #5443
This is concerning. @ricardoV94, any ideas where this could stem from?
It seems to have something to do with the variable transform. This provides the same results in V3 and V4:

```python
import numpy as np
import pymc as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model() as m:
    p = pm.Uniform("p", 0, 1, transform=None)  # uniform priors
    w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())  # binomial likelihood

m.compile_d2logp(vars=[p], jacobian=False)({'p': 0.66})
# array([[39.72566178]])  # V3 prints array([[39.72566178]])
```

But with the default transform we get nonsensical (?) results:

```python
data = np.repeat((0, 1), (3, 6))
with pm.Model() as m:
    p = pm.Uniform("p", 0, 1)  # uniform priors
    w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())  # binomial likelihood

m.compile_d2logp(vars=[p], jacobian=False)({'p_interval__': 0.69})
# array([[2.00209481]])  # V3 prints array([[40.41538223]])
```
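A quick consistency check (closed forms worked out here, not taken from the thread; they assume a flat prior and the logit as the interval transform on (0, 1)) suggests the V4 number is just the V3 number pushed through the chain rule:

```python
import numpy as np

n, k = 9, 6

def neg_hess_p(p):
    # -d2/dp2 of the binomial log-likelihood k*log(p) + (n - k)*log(1 - p)
    return k / p**2 + (n - k) / (1 - p)**2

def grad_p(p):
    # d/dp of the log-likelihood
    return k / p - (n - k) / (1 - p)

print(neg_hess_p(0.66))  # 39.7257 -> matches both versions when transform=None

eta = 0.69
p = 1 / (1 + np.exp(-eta))  # inverse of the logit/interval transform
dp = p * (1 - p)            # dp/deta
d2p = dp * (1 - 2 * p)      # d2p/deta2
# Full second derivative of -loglik(p(eta)) with respect to eta:
print(neg_hess_p(p) * dp**2 - grad_p(p) * d2p)  # ~2.002  -> the V4 result
print(neg_hess_p(p))                            # ~40.415 -> the V3 result
```

The chain-rule value lands on V4's 2.002 while the raw `p`-space curvature lands on V3's 40.415, so V4 apparently differentiates with respect to the transformed value variable where V3 differentiated with respect to `p` itself.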
So the difference is that in V3 the Hessian could be computed in terms of the untransformed `p`:

```python
data = np.repeat((0, 1), (3, 6))
with pm.Model() as m:
    p = pm.Uniform("p", 0, 1)  # uniform priors
    w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())  # binomial likelihood
    mean_q = pm.find_MAP()
    std_q1 = ((1 / pm.find_hessian(mean_q, vars=[m["p"]])) ** 0.5)[0]
    std_q2 = ((1 / pm.find_hessian(mean_q, vars=[m["p_interval__"]])) ** 0.5)[0]

print(mean_q)  # {'p_interval__': array(0.69314718), 'p': array(0.66666667)}
print(std_q1, std_q2)  # [0.15713484] [0.63960215]
```
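Both standard deviations also check out analytically (again my own arithmetic, same assumptions as the check above): at the MAP the log-likelihood gradient vanishes, so the logit-space curvature is just the rescaled `p`-space curvature plus the curvature of the log-Jacobian `log(p*(1-p))`, which V3's `logpt` seems to include:

```python
p = 2 / 3                            # the MAP
h_p = 6 / p**2 + 3 / (1 - p)**2      # 40.5, curvature w.r.t. p
print((1 / h_p) ** 0.5)              # 0.157135 -> std_q1

h_eta = h_p * (p * (1 - p))**2       # 2.0, chain rule into logit space
h_eta += 2 * p * (1 - p)             # +4/9 from the log-Jacobian's curvature
print((1 / h_eta) ** 0.5)            # 0.639602 -> std_q2
```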
AFAICT this is not possible in V4, because what in V3 was a separate input variable for `p` no longer exists; only the transformed value variable does. Having said that, the V4 behavior is internally consistent, as the value variable of `p` is `p_interval__`.
Is this fixed by #5444?
Nope, it's unrelated.
If you wanted to do this right now, you could remove the transform after calling `pm.find_MAP()`:

```python
import numpy as np
import pymc as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model() as m:
    p = pm.Uniform("p", 0, 1)  # uniform priors
    w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())  # binomial likelihood
    mean_q = pm.find_MAP()

    # Remove transform from the variable `p`
    m.rvs_to_transforms[p] = None

    # Change name so that we can use `mean_q["p"]` value
    p_value = m.rvs_to_values[p]
    p_value.name = p.name

    std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]

# display summary of quadratic approximation
print("  Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
#   Mean, Standard deviation
# p 0.67, 0.16
```

This is another argument for binding transforms less strongly to the model variables, for instance by keeping them in a dictionary mapping each RV to its transform. In the future, we might be able to make transforms an explicit part of the RandomVariable graph (see aesara-devs/aeppl#119), but it will always be a bit tricky to offer a clear API for requesting quantities in the untransformed space.
Especially because users are often unaware of the transformations, and most of the time it's unwise to use the untransformed space (case in point, I would say the original example was a happy accident, where we chose the right representation for `p`).
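To make the "dictionary" idea concrete, this is roughly what the binding looks like in current PyMC (a sketch; the printed reprs are illustrative):

```python
import pymc as pm

with pm.Model() as m:
    p = pm.Uniform("p", 0, 1)

print(m.rvs_to_transforms)  # {p: <Interval transform>} -- transform lives in a dict keyed by the RV
print(m.rvs_to_values)      # {p: p_interval__} -- alongside the RV-to-value-variable mapping
```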
Hi @ricardoV94, is there any update on this one? It seems that I am facing a similar issue. The details are here: pymc-devs/pymc-examples#73 (comment). Thanks!
Hi @ricardoV94, I tried your code, but it popped up an error. How can I fix this? I'm using PyMC 5.1 on macOS (M1).

```
      9 mean_q = pm.find_MAP()
     11 # Remove transform from the variable `p`
---> 12 normal_approximation.rvs_to_tranforms[p] = None
     14 # Change name so that we can use `mean_q["p"]` value
     15 p_value = normal_approximation.rvs_to_values[p]

AttributeError: 'Model' object has no attribute 'rvs_to_tranforms'
```
There was a typo, it should be `rvs_to_transforms`.
Thanks, surprised that I didn't spot that.
I am closing this via #6879. There is new functionality there that covers this use case.
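For readers landing here later, a sketch of how that might look (assuming the `remove_value_transforms` helper from `pymc.model.transform.conditioning`; check #6879 for the exact API):

```python
import pymc as pm
# Assumed import path; verify against the PR if it has moved
from pymc.model.transform.conditioning import remove_value_transforms

with pm.Model() as transformed_m:
    p = pm.Uniform("p", 0, 1)
    w = pm.Binomial("w", n=9, p=p, observed=6)
    mean_q = pm.find_MAP()

# Returns a clone of the model whose value variables are untransformed
untransformed_m = remove_value_transforms(transformed_m)
with untransformed_m:
    new_p = untransformed_m["p"]
    std_q = ((1 / pm.find_hessian(mean_q, vars=[new_p])) ** 0.5)[0]

print(mean_q["p"], std_q[0])  # ~0.67, ~0.16
```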
Numerical problems with quadratic approximation
As per @ericmjl (pymc-devs/pymc-resources#168 (comment)), pymc 3.11.4 and 4.0.0b give different numerical answers to the quadratic approximation examples for Chapter 2 of the Statistical Rethinking course. Below is an excerpt from the Chapter 2 example.
pymc 3.11.4 gives a standard deviation of 0.16 (correct); 4.0.0b gives 0.64. No other warnings or errors were apparent.
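(The excerpt itself did not survive extraction; below is a reconstruction consistent with the snippets quoted in the comments above — treat it as a best guess at the notebook code, not a verbatim copy.)

```python
import numpy as np
import pymc as pm

data = np.repeat((0, 1), (3, 6))  # 6 "water" observations out of 9 tosses
with pm.Model() as m:
    p = pm.Uniform("p", 0, 1)  # uniform priors
    w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())  # binomial likelihood
    mean_q = pm.find_MAP()
    std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]

print(mean_q["p"], std_q[0])  # 3.11.4: 0.67, 0.16 -- 4.0.0b: 0.67, 0.64
```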