
Add logp_nojac and logp_sum #2499

Merged: junpenglao merged 2 commits into pymc-devs:master from aseyboldt:no-jac-map on Aug 16, 2017

Conversation

@aseyboldt (Member)

This is WIP for fixing #2482.
It adds Distribution.logp_nojac, which returns the logp but does not include the jacobian terms for transformed distributions. find_MAP now uses this instead of the normal logp function.
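
For orientation, here is a minimal sketch of the idea; this is not the actual diff, and the attribute names `dist`, `transform_used`, `backward`, and `jacobian_det` are assumptions based on PyMC3 conventions of the time:

```python
class Distribution:
    def logp_nojac(self, *args, **kwargs):
        # Untransformed distributions have no jacobian correction,
        # so logp_nojac falls back to the ordinary logp.
        return self.logp(*args, **kwargs)

class TransformedDistribution(Distribution):
    def logp(self, x):
        # Density on the unconstrained space: base logp evaluated
        # through the backward transform, plus the log determinant
        # of the jacobian of that transform.
        return (self.dist.logp(self.transform_used.backward(x))
                + self.transform_used.jacobian_det(x))

    def logp_nojac(self, x):
        # Same evaluation without the jacobian term; find_MAP uses
        # this so the optimum matches the mode of the original
        # (constrained) density rather than the transformed one.
        return self.dist.logp(self.transform_used.backward(x))
```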

It also adds Distribution.logp_sum, which is unused for now but can speed up some logp functions in cases where we only need the sum of the logp values rather than the individual values (as in NUTS). I experimented with this for Normal and MvNormal, and it seems to provide a moderate speedup in some cases.
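
The default logp_sum is presumably just a reduction over the elementwise logp, with subclasses free to override it with a cheaper expression; a hedged sketch, not the actual diff:

```python
import theano.tensor as tt

class Distribution:
    def logp_sum(self, *args, **kwargs):
        # Default: sum the elementwise logp. Distributions such as
        # Normal or MvNormal can override this when the summed
        # density has a cheaper form, e.g. pulling constant terms
        # out of the sum instead of materializing every elementwise
        # value first.
        return tt.sum(self.logp(*args, **kwargs))
```

Since NUTS only consumes the scalar model logp and its gradient, any work spent producing the individual elementwise values there is wasted, which is where the speedup would come from.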

I think this should probably wait until #2468 is merged, and then be rebased. @kyleabeauchamp wrote some tests for this, but they require #2468; we can add them once that is merged.

@aseyboldt aseyboldt added the WIP label Aug 12, 2017
@aseyboldt aseyboldt mentioned this pull request Aug 12, 2017
@kyleabeauchamp (Contributor)

Or we could just merge this guy first, which would then make the test in (1) pass after I merge upstream again.

@kyleabeauchamp (Contributor) commented Aug 13, 2017 via email

pymc3/model.py (Outdated)

```python
logp_factors = tt.sum(factors)
logp_potentials = tt.sum([tt.sum(pot) for pot in self.potentials])
logp = logp_factors + logp_potentials
logp.name = '__logp'
```
Member: Could you please prepend self.name to logp in this PR?

@aseyboldt (Author): Done.

@kyleabeauchamp (Contributor)

FWIW, in a personal branch I've merged this with the other MLE updates and I can confirm that my new test is passing 👍

```python
self.logp_elemwiset = distribution.logp(self)
# The logp might need scaling in minibatches.
# This is done in `Factor`.
self.logp_sum_unscaledt = distribution.logp_sum(self)
```
Member: scaledt?

@aseyboldt (Author): The 't' stands for "tensor". I thought I'd use the same naming convention as in model.logpt vs. model.logp.
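
To make the convention concrete, a small hedged example (PyMC3 3.x-era API, assuming `model.logpt` is the symbolic tensor while `model.logp` is a compiled function over a point dict):

```python
import pymc3 as pm

with pm.Model() as m:
    x = pm.Normal('x', mu=0., sd=1.)

# The 't' suffix marks the symbolic theano tensor ...
print(m.logpt)
# ... while the un-suffixed name is the compiled function,
# evaluated here at a point dict.
print(m.logp({'x': 0.}))
```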

@twiecki (Member) commented Aug 14, 2017

LGTM.

@kyleabeauchamp (Contributor)

Is this good to merge or still WIP?

@aseyboldt (Author)

I'd say this is ready

@junpenglao (Member)

Cool stuff @aseyboldt, thanks!
Merging...

@junpenglao junpenglao merged commit 866c482 into pymc-devs:master Aug 16, 2017
@junpenglao junpenglao mentioned this pull request Apr 23, 2018
junpenglao pushed a commit that referenced this pull request Apr 24, 2018
* Fix 2948

Revert the logpt tensor computation to its pre-#2499 form.
Closes #2948.

* fix mistake

* add test

Interestingly, we can pick up the error using
`func = ValueGradFunction(m.logpt, m.basic_RVs)`
but not
`func = ValueGradFunction(m.logpt, m.basic_RVs, mode='FAST_COMPILE')`

* fix test for float32
agustinaarroyuelo pushed a commit to agustinaarroyuelo/pymc3 that referenced this pull request Feb 8, 2019 (same commit message as above, with repo-qualified references pymc-devs#2499 and pymc-devs#2948)