
Comments

Fix 2948 #2949

Merged
junpenglao merged 4 commits into pymc-devs:master from junpenglao:fix#2948
Apr 24, 2018

Conversation

@junpenglao
Member

Reverse the logpt tensor computation back to pre #2499.
close #2948

Junpeng Lao added 2 commits April 23, 2018 21:44
@kyleabeauchamp
Contributor

kyleabeauchamp commented Apr 23, 2018

Maybe we should add in a test from #2948? E.g., ideally we want to end up in a place where tests guarantee that both #2948 and #2482 are resolved.

@junpenglao
Member Author

This will not affect the fix for #2482.
I am still trying to find out why it only fails for Lognormal... if it really is an edge case I will add a test accordingly.

@junpenglao
Member Author

junpenglao commented Apr 23, 2018

So far this is really weird...

import numpy as np
import pymc3 as pm
import theano.tensor as tt

ndim = 3
with pm.Model() as model:
    sigma = pm.Lognormal('sigma',
                         mu=np.zeros(ndim),
                         tau=np.ones(ndim),
                         shape=ndim)  # variance for the correlation matrix
    nu = pm.HalfCauchy('nu', beta=10)

factors = [var.logpt for var in model.basic_RVs] + model.potentials
logp = tt.sum([tt.sum(logpt) for logpt in factors])  # --> fails
logp = tt.add(*map(tt.sum, factors))                 # --> works
func = pm.model.ValueGradFunction(logp, model.basic_RVs)
func.set_extra_values(model.test_point)
q = np.array([2.30258509, 1., 1., 1.])
func(q)

And the error only appears with Lognormal when the tau or sd is multidimensional...
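For intuition, the two reductions differ only in how the scalar partial sums are combined: the failing form collects the per-variable sums into one tensor and reduces it in a single step, while the working form chains binary additions. A plain-Python analogue (with made-up factor values standing in for the actual `var.logpt` tensors) shows the two orders are numerically equivalent, which suggests the discrepancy comes from how Theano's graph optimizations rewrite the two expression shapes, not from the arithmetic itself:

```python
from functools import reduce
import operator

# Hypothetical per-variable log-prob contributions (stand-ins for var.logpt)
factors = [[0.1, 0.2, 0.3], [0.5]]

# Analogue of tt.sum([tt.sum(f) for f in factors]):
# compute each partial sum, gather them, then reduce once
logp_stacked = sum(sum(f) for f in factors)

# Analogue of tt.add(*map(tt.sum, factors)):
# chain binary additions of the scalar partial sums
logp_chained = reduce(operator.add, (sum(f) for f in factors))

print(logp_stacked, logp_chained)  # equal values close to 1.1
```

In eager numpy/Python the two are interchangeable; in Theano they build different graphs (a join-then-reduce versus a chain of elementwise adds), and only the former triggers the bug.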

Junpeng Lao added 2 commits April 23, 2018 22:38
interestingly, we can pick up the error using 
`func = ValueGradFunction(m.logpt, m.basic_RVs)`
but not
`func = ValueGradFunction(m.logpt, m.basic_RVs, mode='FAST_COMPILE')`
@junpenglao
Member Author

junpenglao commented Apr 24, 2018

I will merge this if there are no more comments.

@ColCarroll
Member

LGTM, though the previous code also looked good. Glad you added some tests!

@junpenglao junpenglao merged commit 4bad645 into pymc-devs:master Apr 24, 2018
@junpenglao junpenglao deleted the fix#2948 branch April 24, 2018 21:12
@ColCarroll
Member

@junpenglao
Member Author

Thanks for the heads up. I am a bit surprised that it affected the effective sample size - will report back.

ColCarroll pushed a commit that referenced this pull request May 7, 2018
* Optimized logpt computation

I changed the logpt computation in #2949 to fix #2948; however, it slows things down because some graph optimizations are turned off (those optimizations originally caused the error in #2948). I am trying a different approach here.
@ColCarroll

* fix test
agustinaarroyuelo pushed a commit to agustinaarroyuelo/pymc3 that referenced this pull request Feb 8, 2019
agustinaarroyuelo pushed a commit to agustinaarroyuelo/pymc3 that referenced this pull request Feb 8, 2019


Development

Successfully merging this pull request may close these issues.

Theano does not sample from HalfCauchy and Lognormal together

3 participants