== Introduction ==

Gibbs sampling is named after the physicist [[Josiah Willard Gibbs]], in reference to an analogy between the [[Sampling (statistics)|sampling]] algorithm and [[statistical physics]]. The algorithm was described by the brothers [[Stuart Geman|Stuart]] and [[Donald Geman]] in 1984, some eight decades after the death of Gibbs,<ref>{{Cite journal |first1=S. |last1=Geman |author-link1=Stuart Geman |first2=D. |last2=Geman |author-link2=Donald Geman |title=Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images |journal=[[IEEE Transactions on Pattern Analysis and Machine Intelligence]] |volume=6 |issue=6 |pages=721–741 |year=1984 |doi=10.1109/TPAMI.1984.4767596 |pmid=22499653}}</ref> and was popularized in the statistics community as a method for calculating marginal probability distributions, especially posterior distributions.<ref>{{Cite journal |last1=Gelfand |first1=Alan E. |last2=Smith |first2=Adrian F. M. |date=1990-06-01 |title=Sampling-Based Approaches to Calculating Marginal Densities |url=https://www.tandfonline.com/doi/abs/10.1080/01621459.1990.10476213 |journal=Journal of the American Statistical Association |volume=85 |issue=410 |pages=398–409 |doi=10.1080/01621459.1990.10476213 |issn=0162-1459}}</ref>

In its basic version, Gibbs sampling is a special case of the [[Metropolis–Hastings algorithm]]. In its extended versions (see [[#Variations and extensions|below]]), however, it can be considered a general framework for sampling from a large set of variables by sampling each variable (or, in some cases, each group of variables) in turn, and it can incorporate the Metropolis–Hastings algorithm (or methods such as [[slice sampling]]) to implement one or more of the sampling steps.

Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the [[conditional distribution]] of each variable is known and is easy (or at least easier) to sample from. The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples constitutes a [[Markov chain]], and that the stationary distribution of that Markov chain is precisely the sought-after joint distribution.<ref>{{Cite book |last1=Gelman |first1=Andrew |last2=Carlin |first2=John B. |last3=Stern |first3=Hal S. |last4=Dunson |first4=David B. |last5=Vehtari |first5=Aki |last6=Rubin |first6=Donald B. |title=Bayesian Data Analysis |publisher=CRC Press |location=Boca Raton, FL |year=2014}}</ref> Gibbs sampling is particularly well adapted to sampling the [[posterior probability|posterior distribution]] of a [[Bayesian network]], since Bayesian networks are typically specified as a collection of conditional distributions.
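To make the variable-by-variable update concrete, the following minimal sketch (in Python; the function and variable names are illustrative and not drawn from any particular library) samples from a standard bivariate normal distribution with correlation <math>\rho</math>. This is a case where both full conditionals are themselves univariate normal, <math>X \mid Y = y \sim N(\rho y,\, 1 - \rho^2)</math> and <math>Y \mid X = x \sim N(\rho x,\, 1 - \rho^2)</math>, so each Gibbs step reduces to a single univariate draw.

<syntaxhighlight lang="python">
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=1000):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is univariate normal:
        X | Y = y ~ N(rho * y, 1 - rho^2)
        Y | X = x ~ N(rho * x, 1 - rho^2)
    """
    sd = (1.0 - rho ** 2) ** 0.5   # conditional standard deviation
    x, y = 0.0, 0.0                # arbitrary starting point
    samples = []
    for i in range(burn_in + n_samples):
        x = random.gauss(rho * y, sd)  # draw x given the current y
        y = random.gauss(rho * x, sd)  # draw y given the freshly drawn x
        if i >= burn_in:               # discard initial transient
            samples.append((x, y))
    return samples

# Example usage: after burn-in, the (x, y) pairs are approximately
# distributed according to the joint bivariate normal.
draws = gibbs_bivariate_normal(rho=0.8, n_samples=5000)
</syntaxhighlight>

Alternately updating each coordinate from its full conditional is exactly the Markov chain described above; discarding an initial "burn-in" portion of the chain is a common practical choice so that the retained pairs better approximate draws from the stationary (joint) distribution.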