-.. [2]: Gelman, Andrew, J. B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. (2013). Bayesian Data Analysis. Third Edition. Chapman; Hall/CRC.
-.. [3]: Geyer, Charles J. (1992). “Practical Markov Chain Monte Carlo.” Statistical Science, 473–83.
-.. [4]: Geyer, Charles J. (2011). “Introduction to Markov Chain Monte Carlo.” In Handbook of Markov Chain Monte Carlo, edited by Steve Brooks, Andrew Gelman, Galin L. Jones, and Xiao-Li Meng, 3–48. Chapman; Hall/CRC.
+.. [2] Gelman, Andrew, J. B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. (2013). Bayesian Data Analysis. Third Edition. Chapman; Hall/CRC.
+.. [3] Geyer, Charles J. (1992). “Practical Markov Chain Monte Carlo.” Statistical Science, 473–83.
+.. [4] Geyer, Charles J. (2011). “Introduction to Markov Chain Monte Carlo.” In Handbook of Markov Chain Monte Carlo, edited by Steve Brooks, Andrew Gelman, Galin L. Jones, and Xiao-Li Meng, 3–48. Chapman; Hall/CRC.
blackjax/nuts.py (+24 −23)
@@ -67,13 +67,36 @@ def kernel(
 ) -> Callable:
     """Build an iterative NUTS kernel.
 
+    This algorithm is an iteration on the original NUTS algorithm [Hoffman2014]_ with two major differences:
+
+    - We do not use slice sampling but multinomial sampling for the proposal [Betancourt2017]_;
+    - The trajectory expansion is not recursive but iterative [Phan2019]_, [Lao2020]_.
+
+    The implementation can seem unusual for those familiar with similar
+    algorithms. Indeed, we do not conceptualize the trajectory construction as
+    building a tree. We feel that the tree lingo, inherited from the recursive
+    version, is unnecessarily complicated and hides the more general concepts
+    on which the NUTS algorithm is built.
+
+    NUTS, in essence, consists in sampling a trajectory by iteratively choosing
+    a direction at random and integrating in this direction a number of times
+    that doubles at every step. From this trajectory we continuously sample a
+    proposal. When the trajectory turns on itself, or when we have reached the
+    maximum trajectory length, we return the current proposal.
+
     Parameters
     ----------
     logprob_fn
         Log probability function we wish to sample from.
     parameters
         A NamedTuple that contains the parameters of the kernel to be built.
 
+    References
+    ----------
+    .. [Hoffman2014] Hoffman, Matthew D., and Andrew Gelman. "The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo." J. Mach. Learn. Res. 15.1 (2014): 1593-1623.
+    .. [Betancourt2017] Betancourt, Michael. "A conceptual introduction to Hamiltonian Monte Carlo." arXiv preprint arXiv:1701.02434 (2017).
+    .. [Phan2019] Phan, Du, Neeraj Pradhan, and Martin Jankowiak. "Composable effects for flexible and accelerated probabilistic programming in NumPyro." arXiv preprint arXiv:1912.11554 (2019).
+    .. [Lao2020] Lao, Junpeng, et al. "tfp.mcmc: Modern Markov chain Monte Carlo tools built for modern hardware." arXiv preprint arXiv:2002.01184 (2020).
+
     """
 
     def potential_fn(x):
@@ -105,23 +128,7 @@ def iterative_nuts_proposal(
     max_num_expansions: int = 10,
     divergence_threshold: float = 1000,
 ) -> Callable:
-    """Iterative NUTS algorithm.
-
-    This algorithm is an iteration on the original NUTS algorithm [1]_ with two major differences:
-
-    - We do not use slice samplig but multinomial sampling for the proposal [2]_;
-
-    - The trajectory expansion is not recursive but iterative [3,4]_.
-
-    The implementation can seem unusual for those familiar with similar
-    algorithms. Indeed, we do not conceptualize the trajectory construction as
-    building a tree. We feel that the tree lingo, inherited from the recursive
-    version, is unnecessarily complicated and hides the more general concepts
-    on which the NUTS algorithm is built.
-
-    NUTS, in essence, consists in sampling a trajectory by iteratively choosing
-    a direction at random and integrating in this direction a number of times
-    that doubles at every step. From this trajectory we continuously sample a
-    proposal. When the trajectory turns on itself or when we have reached the
-    maximum trajectory length we return the current proposal.
+    """Iterative NUTS proposal.
 
     Parameters
     ----------
142
149
-------
143
150
A kernel that generates a new chain state and information about the transition.
144
151
145
-
References
146
-
----------
147
-
.. [1]: Hoffman, Matthew D., and Andrew Gelman. "The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo." J. Mach. Learn. Res. 15.1 (2014): 1593-1623.
148
-
.. [2]: Betancourt, Michael. "A conceptual introduction to Hamiltonian Monte Carlo." arXiv preprint arXiv:1701.02434 (2017).
149
-
.. [3]: Phan, Du, Neeraj Pradhan, and Martin Jankowiak. "Composable effects for flexible and accelerated probabilistic programming in NumPyro." arXiv preprint arXiv:1912.11554 (2019).
150
-
.. [4]: Lao, Junpeng, et al. "tfp. mcmc: Modern markov chain monte carlo tools built for modern hardware." arXiv preprint arXiv:2002.01184 (2020).
0 commit comments