Upstream fetch #1

Merged
lucianopaz merged 77 commits into lucianopaz:master from pymc-devs:master
May 25, 2018
Conversation

@lucianopaz (Owner)

No description provided.

ColCarroll and others added 30 commits April 7, 2018 17:17
Update short and long descriptions in setup.py
* Removed redundant specification of version number

* Replaced pymc3 import in setup with regex function for getting version

* Removed typo
Fix for trace of subset of variables
* fix GARCH11: missing [] in tt.concatenate()

* add test for GARCH11 and update docstring to current standards

Closes #2938
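The missing-brackets bug in the GARCH11 fix above is generic to `concatenate`-style APIs. A minimal illustration with NumPy standing in for `tt.concatenate` (both expect a single sequence of arrays as the first argument; the array names here are hypothetical):

```python
import numpy as np

# Illustration of the GARCH11 fix: concatenate takes one *sequence* of
# arrays, so the pieces must be wrapped in brackets.
initial_vol = np.array([0.1])
vol = np.array([0.2, 0.3, 0.4])

# Buggy form: np.concatenate(initial_vol, vol) raises an error, because the
# second array is interpreted as the `axis` argument.
combined = np.concatenate([initial_vol, vol])  # correct: brackets make a list
print(combined)  # [0.1 0.2 0.3 0.4]
```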
* Added check_test_point function

* Added change to RELEASE-NOTES

* Fixed typo
* updated 'updating_priors' notebook

* updated 'api_quickstart' notebook

* updated 'AR' notebook

* updated 'Bayes_factor' notebook

* Added check_test_point method to Model (#2936)

* Added check_test_point function

* Added change to RELEASE-NOTES

* Fixed typo

* updated 'runing' typos to 'running' in these 4 notebooks
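A rough sketch of what a `check_test_point`-style helper does (hypothetical code, not PyMC3's actual implementation): evaluate each free variable's log-probability at the model's test point so that non-finite values, which would break the sampler, stand out immediately.

```python
import math

def check_test_point(logp_fns, test_point):
    """Evaluate each variable's logp at the test point; -inf or nan entries
    point at the variable that would break the sampler. (Hypothetical
    sketch, not PyMC3's implementation.)"""
    return {name: fn(test_point[name]) for name, fn in logp_fns.items()}

# Toy model: one unconstrained variable and one with half-open support.
logp_fns = {
    "mu": lambda x: -0.5 * (x - 1.0) ** 2,                # finite everywhere
    "p": lambda x: math.log(x) if x > 0 else -math.inf,   # -inf at x <= 0
}
result = check_test_point(logp_fns, {"mu": 0.0, "p": 0.0})
print(result)  # {'mu': -0.5, 'p': -inf} -- 'p' flags the bad test point
```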
Hotfix: increasing the learning rate seems to resolve the failure locally.
* Update arbitrary_stochastic.py

Move the custom logp outside of the model block to avoid `AttributeError: Can't pickle local object 'build_model.<locals>.logp'`

* Reduce divergences

- Change the scale of the prior
- Increase tuning and target_accept

Note that there are still ~5 divergences per 1000 samples after tuning; further reparameterization is necessary.

* remove unnecessary comments

* Change prior for model

No more divergence with more informative priors.

* Use more realistic prior to avoid divergence

* Change init for sample

avoid `ValueError: Mass matrix contains zeros on the diagonal.`
* Fix #2948

Revert the logpt tensor computation to its pre-#2499 form.
Closes #2948

* fix mistake

* add test

Interestingly, we can pick up the error using
`func = ValueGradFunction(m.logpt, m.basic_RVs)`
but not
`func = ValueGradFunction(m.logpt, m.basic_RVs, mode='FAST_COMPILE')`

* fix test for float32
Currently, the default test value for InverseGamma(a, b) is the mean, but when a, b < 1 the mean is undefined and the RV assigns 0.0 as the default, which crashes the sampler.
This PR changes the default to the mode.
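A quick numeric check of the reasoning above (standalone formulas with the usual `alpha`, `beta` parameter names, not PyMC3 code): the mean exists only for `alpha > 1`, while the mode is defined for all `alpha > 0`.

```python
# Mean of InverseGamma(alpha, beta) is beta/(alpha - 1), defined only for
# alpha > 1; the mode beta/(alpha + 1) is defined for all alpha > 0, which
# makes it a safer default test value.
def inv_gamma_mean(alpha, beta):
    return beta / (alpha - 1.0) if alpha > 1.0 else float("nan")

def inv_gamma_mode(alpha, beta):
    return beta / (alpha + 1.0)

print(inv_gamma_mean(0.5, 1.0))  # nan: mean undefined when alpha <= 1
print(inv_gamma_mode(0.5, 1.0))  # 0.666...: always a usable starting value
```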
Junpeng Lao and others added 29 commits May 7, 2018 08:37
* Optimized logpt computation

I changed the logpt computation in #2949 to fix #2948, but it slowed things down because some graph optimizations were turned off (those optimizations originally caused the error in #2948). I am trying a different approach here.
@ColCarroll

* fix test
Replaced sigma argument with noise in MarginalSparse.marginal_likelihood
We see users having a hard time debugging their models when the error `Mass matrix contains zeros on the diagonal. Some derivatives might always be zero` arises. See e.g. https://discourse.pymc.io/t/unsupervised-clustering-mass-matrix-contains-zeros-on-the-diagonal/1222/.

This PR prints out where the error is, so users can more easily address the bug by e.g. changing the scale of those RVs (the reported index is taken after `.ravel()`).
Better error message for Mass matrix contains zeros
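The "index is after `.ravel()`" detail can be sketched as follows (an assumed layout for illustration, not PyMC3's internals): the mass-matrix diagonal is indexed over the concatenation of each RV's raveled values, so a flat index can be mapped back to a variable name and position.

```python
import numpy as np

# Assumed layout for illustration (not PyMC3 internals): the mass-matrix
# diagonal covers the concatenation of each RV's raveled values.
shapes = {"mu": (2,), "chol": (2, 2)}                 # hypothetical RVs
diag = np.array([1.0, 0.5, 0.0, 0.3, 0.2, 0.0])      # zeros are the problem

problems = []
offset = 0
for name, shape in shapes.items():
    size = int(np.prod(shape))
    block = diag[offset:offset + size]
    for flat_idx in np.flatnonzero(block == 0.0):
        # Map the within-block flat index back to a multi-index.
        idx = tuple(int(i) for i in np.unravel_index(int(flat_idx), shape))
        problems.append((name, idx))
    offset += size

print(problems)  # [('chol', (0, 0)), ('chol', (1, 1))]
```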
This small correction is in response to issue Stochastic Volatility Example #2566. It concerns the
two different examples of the volatility model used in the PyMC3 notebooks to introduce users to the
wonder that is Bayesian modeling. In getting_started.ipynb, under:
--Case Study 1: Stochastic volatility
---The Model
the model specification uses `y_i` to represent the daily percentage returns. However, later on, in
the `with pm.Model() as sp500_model:` block, the dummy variable `r` is used to represent both the
daily returns and the tensor variable name.

The second correction concerns the usage of `s_i` to represent the volatility process in daily
returns. In both volatility-process walkthroughs (there is a stand-alone notebook dedicated to it),
the model specs treat `s_i` as the standard deviation of the StudentT distribution used to model the
log-returns. In the PyMC3 API docs, the StudentT distribution is defined with `lambda`
representing the precision. This is why the `volatility_process` variable is mapped from
`pm.math.exp(-2 * s)` in the `pm.Model()` block. However, when the returns were defined in the
model block, the **kwarg was `lambda=1/volatility_process`. This has been fixed.

Thanks to @twiecki and the OP for highlighting this error.
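The parameterization issue described above can be checked with standalone arithmetic (not the notebook's code; `s = 0.7` is an arbitrary example value): with log-volatility `s`, the standard deviation is `exp(s)`, so the StudentT precision is `exp(-2*s)`, i.e. `volatility_process` itself, and `1/volatility_process` would be a variance.

```python
import math

# Sanity check of the fix: sd = exp(s) implies precision lam = 1/sd**2
# = exp(-2*s), which is exactly what volatility_process holds.
s = 0.7
sd = math.exp(s)

precision = 1.0 / sd**2
volatility_process = math.exp(-2 * s)

assert math.isclose(precision, volatility_process)            # correct: lam = exp(-2s)
assert not math.isclose(precision, 1.0 / volatility_process)  # buggy form is the variance
print("precision:", precision)
```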
DOCS: 2973 Fix to docstring based on new behaviour in `distribution.de…
Fixing up the test and implementation

Adding other draw_values

Small test fix
* Fixes to Rice test.

* make test case for Rice distribution pass

* Improve numerical stability of the Rice distribution function for large values of nu and value

Use ((x - b)**2)/2 + x*b instead of (x**2 + b**2)/2 in the pdf for the Rice distribution (the two are equal), and include the np.exp(-x*b) factor via i0e to match the scipy implementation

* Change test domain for sd of rice distribution to pass tests with float32
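The stability trick above can be seen in a standalone sketch (using scipy directly, not PyMC3's code; unit scale assumed): since `I0(z) = i0e(z) * exp(z)`, the exponent `-(x**2 + b**2)/2` plus the `x*b` hidden inside `I0` combine to `-((x - b)**2)/2`, which stays representable for large arguments.

```python
import numpy as np
from scipy.special import i0, i0e

def rice_pdf_naive(x, nu):
    # exp(-(x**2 + nu**2)/2) underflows to 0 while i0(x*nu) overflows to inf,
    # producing nan for large x, nu.
    return x * np.exp(-(x**2 + nu**2) / 2.0) * i0(x * nu)

def rice_pdf_stable(x, nu):
    # I0(z) = i0e(z) * exp(z), so folding exp(x*nu) into the exponent gives
    # -((x - nu)**2)/2, which stays finite.
    return x * np.exp(-((x - nu) ** 2) / 2.0) * i0e(x * nu)

x, nu = 500.0, 500.0
with np.errstate(over="ignore", invalid="ignore"):
    naive = rice_pdf_naive(x, nu)
stable = rice_pdf_stable(x, nu)
print(naive, stable)  # nan vs. a finite positive density value
```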
* Add save/load ndarray to release notes, small bug fix

* Be careful about deleting things
So that transform-name detection is more consistent.
Might not be backward compatible if users are accessing the transformed RV of `cov_cholesky_cov_pack__` in custom code.
Also reran the notebook and added additional explanation regarding the change of variables.
@lucianopaz lucianopaz merged commit 6aedaa9 into lucianopaz:master May 25, 2018