==Description==
The Metropolis–Hastings algorithm can draw samples from any [[probability distribution]] with [[probability density]] <math>P(x)</math>, provided that we know a function <math>f(x)</math> proportional to the [[Probability density function|density]] <math>P</math> and the values of <math>f(x)</math> can be calculated. The requirement that <math>f(x)</math> need only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because it removes the need to calculate the density's normalization factor, which is often extremely difficult in practice.

The Metropolis–Hastings algorithm generates a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. The samples are produced iteratively, with the distribution of the next sample depending only on the current sample value; this makes the sequence of samples a [[Markov chain]]. Specifically, at each iteration the algorithm proposes a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted, in which case the candidate value is used in the next iteration, or rejected, in which case the candidate value is discarded and the current value is reused in the next iteration. The probability of acceptance is determined by comparing the values of the function <math>f(x)</math> at the current and candidate sample values with respect to the desired distribution.

The method used to propose new candidates is characterized by the probability distribution <math>g(x\mid y)</math> (sometimes written <math>Q(x\mid y)</math>) of a new proposed sample <math>x</math> given the previous sample <math>y</math>. This is called the ''proposal density'', ''proposal function'', or ''jumping distribution''. A common choice for <math>g(x\mid y)</math> is a [[Gaussian distribution]] centered at <math>y</math>, so that points closer to <math>y</math> are more likely to be visited next, making the sequence of samples into a [[Gaussian random walk]]. In the original paper by Metropolis et al. (1953), <math>g(x\mid y)</math> was suggested to be a uniform distribution limited to some maximum distance from <math>y</math>. More complicated proposal functions are also possible, such as those of [[Hamiltonian Monte Carlo]], [[Langevin Monte Carlo]], or [[preconditioned Crank–Nicolson]]. For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below.

;Metropolis algorithm (symmetric proposal distribution): Let <math>f(x)</math> be a function that is proportional to the desired probability density function <math>P(x)</math> (a.k.a. a target distribution){{efn|In the original paper by Metropolis et al. (1953), <math>f</math> was taken to be the [[Boltzmann distribution]], as the specific application considered was [[Monte Carlo integration]] of [[equation of state|equations of state]] in [[physical chemistry]]; the extension by Hastings generalized to an arbitrary distribution <math>f</math>.}}.
# Initialization: Choose an arbitrary point <math>x_t</math> to be the first observation in the sample and choose a proposal function <math>g(x\mid y)</math>. In this section, <math>g</math> is assumed to be symmetric; in other words, it must satisfy <math>g(x\mid y) = g(y\mid x)</math>.
# For each iteration ''t'':
#* ''Propose'' a candidate <math>x'</math> for the next sample by drawing from the distribution <math>g(x'\mid x_t)</math>.
#* ''Calculate'' the ''acceptance ratio'' <math>\alpha = f(x')/f(x_t)</math>, which will be used to decide whether to accept or reject the candidate{{efn|In the original paper by Metropolis et al. (1953), <math>f</math> was actually the [[Boltzmann distribution]], as it was applied to physical systems in the context of [[statistical mechanics]] (e.g., a maximal-entropy distribution of microstates for a given temperature at thermal equilibrium). Consequently, the acceptance ratio was itself an exponential of the difference in the parameters of the numerator and denominator of this ratio.}}. Because ''f'' is proportional to the density of ''P'', we have that <math>\alpha = f(x')/f(x_t) = P(x')/P(x_t)</math>.
#* ''Accept or reject'':
#** Generate a uniform random number <math>u \in [0, 1]</math>.
#** If <math>u \le \alpha</math>, ''accept'' the candidate by setting <math>x_{t+1} = x'</math>.
#** If <math>u > \alpha</math>, ''reject'' the candidate and set <math>x_{t+1} = x_t</math> instead.

This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. The density <math>P(x)</math> at a specific point <math>x</math> is proportional to the number of iterations the algorithm spends at that point. Note that the acceptance ratio <math>\alpha</math> indicates how probable the new proposed sample is with respect to the current sample, according to the distribution whose density is <math>P(x)</math>. If we attempt to move to a point that is more probable than the existing point (i.e., a point in a higher-density region of <math>P(x)</math>, corresponding to <math>\alpha > 1 \ge u</math>), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of <math>P(x)</math>, while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution with density <math>P(x)</math>.
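The steps above can be stated compactly in code. The following is a minimal sketch in Python; the function name <code>metropolis</code>, the <code>proposal_width</code> parameter, and the example target are illustrative choices rather than part of the algorithm's specification. It uses a symmetric Gaussian proposal, so the simple ratio <math>f(x')/f(x_t)</math> suffices for the acceptance decision:

<syntaxhighlight lang="python">
import numpy as np

def metropolis(f, x0, n_samples, proposal_width=1.0, rng=None):
    """Metropolis algorithm with a symmetric Gaussian proposal
    g(x'|x) = N(x, proposal_width**2).

    f  -- callable returning a value proportional to the target density P
    x0 -- starting point (must have f(x0) > 0)
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty(n_samples)
    x, fx = x0, f(x0)
    for t in range(n_samples):
        # Propose a candidate x' from g(x' | x).
        x_new = x + proposal_width * rng.standard_normal()
        f_new = f(x_new)
        # Accept with probability min(1, f(x')/f(x)); otherwise reuse x.
        if rng.random() <= f_new / fx:
            x, fx = x_new, f_new
        samples[t] = x
    return samples

# Example: sample from a standard normal known only up to a constant.
unnormalized = lambda x: np.exp(-0.5 * x**2)
draws = metropolis(unnormalized, x0=0.0, n_samples=10_000)
</syntaxhighlight>

Note that the normalization constant <math>1/\sqrt{2\pi}</math> never appears: only the ratio of <math>f</math> values matters.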
Compared with an algorithm like [[adaptive rejection sampling]]<ref name=":0">{{Cite journal |last1=Gilks |first1=W. R. |last2=Wild |first2=P. |date=1992-01-01 |title=Adaptive Rejection Sampling for Gibbs Sampling |journal=Journal of the Royal Statistical Society. Series C (Applied Statistics) |volume=41 |issue=2 |pages=337–348 |doi=10.2307/2347565 |jstor=2347565}}</ref> that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
* The samples are [[autocorrelation|autocorrelated]]. Even though over the long term they do correctly follow <math>P(x)</math>, a set of nearby samples will be correlated with each other and will not correctly reflect the distribution. This means that effective sample sizes can be significantly lower than the number of samples actually taken, leading to large errors.
* Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a ''burn-in'' period is typically necessary,<ref>{{Cite book |title=Bayesian data analysis |date=2004 |publisher=Chapman & Hall / CRC |others=Gelman, Andrew |isbn=978-1584883883 |edition=2nd |location=Boca Raton, Fla. |oclc=51991499}}</ref> where an initial number of samples is thrown away.

On the other hand, most simple [[rejection sampling]] methods suffer from the "[[curse of dimensionality]]", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, does not have this problem to such a degree, and thus is often the only solution available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from [[hierarchical Bayesian model]]s and other high-dimensional statistical models used nowadays in many disciplines.
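As a concrete illustration of these two caveats, a chain produced by the sketch above would typically be post-processed before use; the burn-in cutoff and thinning interval below are illustrative values, not prescriptions:

<syntaxhighlight lang="python">
# Continuing the sketch above: start the chain far from the high-density
# region, discard a burn-in prefix, and thin the autocorrelated draws.
draws = metropolis(unnormalized, x0=10.0, n_samples=50_000)
burn_in = 1_000
kept = draws[burn_in::10]  # drop the first 1,000 draws, keep every 10th
</syntaxhighlight>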
In [[multivariate distribution|multivariate]] distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the suitable jumping distribution to use can be difficult, as the individual dimensions behave in very different ways and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as [[Gibbs sampling]], involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. In this way, the problem of sampling from a potentially high-dimensional space is reduced to a collection of low-dimensional sampling problems.<ref>{{Cite journal |last=Lee |first=Se Yoon |year=2021 |title=Gibbs sampler and coordinate ascent variational inference: A set-theoretical review |journal=Communications in Statistics - Theory and Methods |volume=51 |issue=6 |pages=1549–1568 |arxiv=2008.01006 |doi=10.1080/03610926.2021.1921214 |s2cid=220935477}}</ref> This is especially applicable when the multivariate distribution is composed of a set of individual [[random variable]]s in which each variable is conditioned on only a small number of other variables, as is the case in most typical [[hierarchical Bayesian model|hierarchical models]]. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the [[adaptive rejection sampling]] methods,<ref name=":0" /> the adaptive rejection Metropolis sampling algorithm,<ref>{{Cite journal |last1=Gilks |first1=W. R. |last2=Best |first2=N. G. |author-link2=Nicky Best |last3=Tan |first3=K. K. C. |date=1995-01-01 |title=Adaptive Rejection Metropolis Sampling within Gibbs Sampling |journal=Journal of the Royal Statistical Society. Series C (Applied Statistics) |volume=44 |issue=4 |pages=455–472 |doi=10.2307/2986138 |jstor=2986138}}</ref> a simple one-dimensional Metropolis–Hastings step, or [[slice sampling]].
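The last option, a one-dimensional Metropolis–Hastings step per coordinate, is the easiest to sketch. The following Python sketch (the name <code>metropolis_within_gibbs</code>, the per-coordinate <code>widths</code> parameter, and the 2-D Gaussian target are illustrative assumptions) updates one coordinate at a time while holding the others fixed:

<syntaxhighlight lang="python">
import numpy as np

def metropolis_within_gibbs(f, x0, n_samples, widths, rng=None):
    """One-dimensional Metropolis updates applied coordinate by coordinate.

    f      -- callable on the full vector, proportional to the joint density
    x0     -- starting vector (must have f(x0) > 0)
    widths -- per-coordinate Gaussian proposal widths
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    d = x.size
    samples = np.empty((n_samples, d))
    fx = f(x)
    for t in range(n_samples):
        for i in range(d):
            # Propose a change to coordinate i only; the others stay fixed.
            x_new = x.copy()
            x_new[i] += widths[i] * rng.standard_normal()
            f_new = f(x_new)
            # Standard Metropolis accept/reject on the full joint density.
            if rng.random() <= f_new / fx:
                x, fx = x_new, f_new
        samples[t] = x
    return samples

# Example: a correlated 2-D Gaussian, known only up to a constant.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
f = lambda x: np.exp(-0.5 * x @ cov_inv @ x)
draws = metropolis_within_gibbs(f, x0=[0.0, 0.0], n_samples=5_000,
                                widths=[1.0, 1.0])
</syntaxhighlight>

Each coordinate can be given its own proposal width, which sidesteps the need for a single jumping width that is "just right" for all dimensions at once.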