=== Raftery-Lewis Diagnostics ===
The Raftery-Lewis diagnostic is specifically designed to assess how many iterations are needed to estimate quantiles or tail probabilities of the target distribution with a desired accuracy and confidence.<ref>{{Cite journal |last1=Raftery |first1=Adrian E. |last2=Lewis |first2=Steven M. |date=1992-11-01 |title=[Practical Markov Chain Monte Carlo]: Comment: One Long Run with Diagnostics: Implementation Strategies for Markov Chain Monte Carlo |url=https://projecteuclid.org/journals/statistical-science/volume-7/issue-4/Practical-Markov-Chain-Monte-Carlo--Comment--One-Long/10.1214/ss/1177011143.full |journal=Statistical Science |volume=7 |issue=4 |doi=10.1214/ss/1177011143 |issn=0883-4237}}</ref> Unlike the Gelman-Rubin and Geweke diagnostics, which assess convergence of the chain to the entire target distribution, the Raftery-Lewis diagnostic is goal-oriented: it estimates the number of samples required to estimate a specific quantile of interest within a desired margin of error.

Let <math>q</math> denote the desired quantile (e.g., 0.025) of a real-valued function <math>g(X)</math>; in other words, the goal is to find <math>u</math> such that <math>P(g(X) \leq u) = q</math>. Suppose we wish to estimate this quantile so that the estimate falls within margin <math>\varepsilon</math> of the true value with probability <math>1 - \alpha</math>. That is, we want

:<math>P(|\hat{q} - q| < \varepsilon) \geq 1 - \alpha.</math>

The diagnostic proceeds by converting the output of the MCMC chain into a binary sequence

:<math> W_n = \mathbb{I}(g(X_n) \leq u), \;\;\; n=1,2,\dots </math>

where <math>\mathbb{I}(\cdot)</math> is the indicator function. The sequence <math>\{W_n\}</math> is treated as a realization of a two-state Markov chain. While this may not be strictly true, it is often a good approximation in practice. From the empirical transitions in the binary sequence, the Raftery-Lewis method estimates:
* The minimum number of iterations <math>n_{\text{min}}</math> required to achieve the desired precision and confidence for estimating the quantile, obtained from asymptotic theory for Bernoulli processes (a numerical example is given below):
:<math> n_{\text{min}} = \bigg\{\Phi^{-1}\bigg(1-\dfrac{\alpha}{2}\bigg)\bigg\}^2 \dfrac{q(1-q)}{\varepsilon^2}, </math>
where <math>\Phi^{-1}(\cdot)</math> is the standard normal quantile function.
* The burn-in period <math>n_{\text{burn}}</math>, calculated using eigenvalue analysis of the estimated transition matrix to determine the number of initial iterations needed for the two-state chain to forget its initial state.
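For example, to estimate the 0.025 quantile (<math>q = 0.025</math>) with margin <math>\varepsilon = 0.005</math> and confidence level <math>1-\alpha = 0.95</math>, one has <math>\Phi^{-1}(0.975) \approx 1.96</math>, so

:<math> n_{\text{min}} \approx (1.96)^2 \, \frac{0.025 \times 0.975}{(0.005)^2} \approx 3746, </math>

i.e., several thousand iterations are required even in the ideal case of independent draws.

The calculation can be sketched as follows. This is an illustrative sketch rather than the published software; the use of the empirical <math>q</math>-quantile as the threshold <math>u</math>, the burn-in tolerance, and all variable names are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def raftery_lewis(g_values, q=0.025, eps=0.005, alpha=0.05, burn_tol=0.001):
    """Sketch of the Raftery-Lewis diagnostic for a single chain.

    g_values : 1-D array containing g(X_n) evaluated along the MCMC chain.
    q        : quantile of interest.
    eps      : desired margin of error for the quantile estimate.
    alpha    : 1 - alpha is the desired coverage probability.
    burn_tol : tolerance used for the burn-in estimate (assumed value).
    """
    # Minimum number of iterations n_min from the Bernoulli approximation.
    z = norm.ppf(1 - alpha / 2)
    n_min = int(np.ceil(z**2 * q * (1 - q) / eps**2))

    # Dichotomize the chain: W_n = 1 if g(X_n) <= u, with u the empirical q-quantile.
    u = np.quantile(g_values, q)
    w = (g_values <= u).astype(int)

    # Empirical transition probabilities of the two-state chain {W_n}:
    # a = P(0 -> 1), b = P(1 -> 0).
    from0, from1 = w[:-1] == 0, w[:-1] == 1
    a = w[1:][from0].mean() if from0.any() else 0.0
    b = 1.0 - w[1:][from1].mean() if from1.any() else 0.0

    # Burn-in from the second eigenvalue lam = 1 - a - b of the transition matrix:
    # the influence of the initial state decays like |lam|**m.
    lam = 1.0 - a - b
    if a + b > 0 and 0 < abs(lam) < 1:
        m = np.log(burn_tol * (a + b) / max(a, b)) / np.log(abs(lam))
        n_burn = int(np.ceil(m))
    else:
        n_burn = 0

    return n_min, n_burn

# Example usage on a chain of draws stored in a 1-D array `chain`:
# n_min, n_burn = raftery_lewis(chain)
</syntaxhighlight>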