Memory Error: OrderedLogistic #3535
I think the basic example provided in the docs also shows this problem. In older versions, the value of the ELBO seems to converge to a reasonable value (~10^2), but in the recent commits the progress bar shows a much larger value (~10^5). I suspect this is caused by the change to how `p` is indexed in the `Categorical` log-probability: instead of a length-N vector of per-observation probabilities, the indexed result picks up an extra dimension.
Thanks @tohtsky, I think your diagnosis is right.
@aloctavodia, sorry for the delay in taking a look at this. I'm still mostly offline until next week. I looked a bit more into the commit you referenced, and I think that there may be a bug in the line that indexes `p`; something like `a = tt.log(p[..., value_clip])` might work instead. I'll be able to test this in a bit less than two weeks, but if you want to try it out first, you're welcome to do so.
Oops, I skipped over @tohtsky's answer, which points to exactly the same line of code that I thought was causing the bug in my previous answer. Maybe we could try using the ellipsis in the indexing instead of the current approach.
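For reference, a small NumPy sketch (with made-up sizes, not the actual Theano code in `Categorical.logp`) of what indexing the last axis of a 2-D `p` with the full vector of observed values does shape-wise; this kind of blow-up is what can turn into a memory error:

```python
import numpy as np

# Illustrative shapes only: N observations over K categories.
N, K = 1000, 4
p = np.random.dirichlet(np.ones(K), size=N)   # per-observation probabilities, shape (N, K)
value = np.random.randint(0, K, size=N)       # observed categories, shape (N,)

# Fancy indexing on the last axis pairs every row of p with every entry of value,
# so the result is (N, N) instead of the intended (N,).
print(p[..., value].shape)            # (1000, 1000)

# The intended per-observation lookup pairs row i with value[i].
print(p[np.arange(N), value].shape)   # (1000,)
```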
I tried replacing the line you suggested and I still see the same problem :-(
Simply flattening the multi-dimensional tensor into a 2-D array and then reshaping the resulting logp vector back into the original tensor shape should work?
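A rough NumPy sketch of that flatten-then-reshape idea (the shapes and helper code here are illustrative assumptions, not the thread's actual implementation):

```python
import numpy as np

# Illustrative setup: a (5, 3) batch of observations over 4 categories.
p = np.random.dirichlet(np.ones(4), size=(5, 3))   # shape (5, 3, 4), last axis = categories
value = np.random.randint(0, 4, size=(5, 3))       # observed categories, shape (5, 3)

# Flatten everything except the category axis, do a plain per-row lookup,
# then reshape the log-probabilities back to the original observation shape.
p_flat = p.reshape(-1, p.shape[-1])                # (15, 4)
value_flat = value.reshape(-1)                     # (15,)
logp_flat = np.log(p_flat[np.arange(p_flat.shape[0]), value_flat])
logp = logp_flat.reshape(value.shape)              # back to (5, 3)
print(logp.shape)
```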
I'll have time to look into this issue this Friday.
Hi, I managed to write a small test involving both `OrderedLogistic` and `Categorical`:

```python
import numpy as np
from scipy.special import logit
import pymc3 as pm

loge = np.log10(np.exp(1))  # factor that converts natural logs to base-10 logs

size = 100
p = np.ones(10) / 10
cutpoints = logit(np.linspace(0, 1, 11)[1:-1])
obs = np.random.randint(0, 1, size=size)  # note: randint(0, 1) yields all zeros

with pm.Model():
    ol = pm.OrderedLogistic("ol", eta=0, cutpoints=cutpoints, observed=obs)
    c = pm.Categorical("c", p=p, observed=obs)

print(ol.logp({"ol": 1}) * loge)
print(c.logp({"c": 1}) * loge)
```

The two printed values should be identical (with `eta=0` and these cutpoints the ordered logistic is just a uniform categorical over 10 classes). Furthermore, the underlying `p` tensors of the two distributions have different numbers of dimensions:

```python
>>> ol.distribution.p.ndim
2
>>> c.distribution.p.ndim
1
```

However, there is also an additional problem.
In the end, it wasn't a broadcasting problem. It was an indexing problem when `p` is multidimensional.
* Added tests for issue
* Fix for #3535
* Added release notes
@seberg was kind enough to point me to a way to handle this kind of indexing properly.
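The exact reference is missing from the quoted comment, but one standard NumPy helper for this kind of per-observation lookup is `np.take_along_axis`; whether that is what is meant here is an assumption. A minimal sketch:

```python
import numpy as np

# Assumption: np.take_along_axis is shown only to illustrate a per-row lookup
# that avoids the (N, N) intermediate; it may not be the helper referred to above.
N, K = 1000, 4
p = np.random.dirichlet(np.ones(K), size=N)   # shape (N, K)
value = np.random.randint(0, K, size=N)       # shape (N,)

# The indices must have the same number of dimensions as the indexed array.
picked = np.take_along_axis(p, value[:, None], axis=-1)[:, 0]   # shape (N,)
assert np.allclose(picked, p[np.arange(N), value])
print(picked.shape)
```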
Hi @lucianopaz,

```python
import numpy as np
import pymc3 as pm

data = np.random.randint(0, 3, size=(1000, 1))

with pm.Model() as model:
    tp1 = pm.Dirichlet('tp1', a=np.array([0.25] * 4), shape=(4,))  # 4 free RVs
    obs = pm.Categorical('obs', p=tp1, observed=data)
    trace = pm.sample()  # super fast!

data_indexer = np.random.randint(0, 2, size=(1000,))

with pm.Model() as model:
    tp1 = pm.Dirichlet('tp1', a=np.array([0.25] * 4), shape=(2, 4))  # 8 free RVs
    obs = pm.Categorical('obs', p=tp1[data_indexer, :], observed=data)
    trace = pm.sample()  # takes ages!
```

Does the second model sample OK for you (just in case I've done something silly with my install)?
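For what it's worth, a plain NumPy shape check (stand-ins for the symbolic tensors, added only as an illustration) shows why the second model exercises the multidimensional-`p` code path: the indexed `tp1` becomes a (1000, 4) matrix, one row of probabilities per observation.

```python
import numpy as np

# NumPy stand-ins for the symbolic quantities in the second model above.
tp1 = np.random.dirichlet([0.25] * 4, size=2)         # stand-in for the (2, 4) Dirichlet
data_indexer = np.random.randint(0, 2, size=(1000,))

print(tp1[data_indexer, :].shape)   # (1000, 4): a full probability row per observation
```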
@bdyetton, it samples super slow for me too. I'll try to find out why this is happening.
Great, thanks!!!
@lucianopaz I'm not sure if this is helpful at all, but the second model above will not begin sampling with the slice step method either, so this is not an issue affecting NUTS only.
Hi @lucianopaz, any progress? Anything I can do to help?
@bdyetton, sorry, I have to deal with some other stuff from work first. Once I finish, I'll be able to look into this more deeply.
@lucianopaz, Thanks!!!
I am updating and re-running the Statistical Rethinking notebooks. I get a memory allocation error with model m_11 (code block 11.5). The problem seems to be related to #3383; reverting the Categorical distribution to its state prior to that PR fixes the issue. @lucianopaz, you probably have a better idea of what is going on here.