== Mathematical background ==

Suppose that a sample <math>\left.X\right.</math> is taken from a distribution depending on a parameter vector <math>\theta \in \Theta \,\!</math> of length <math>\left.d\right.</math>, with prior distribution <math>g(\theta_1, \ldots , \theta_d)</math>. It may be that <math>\left.d\right.</math> is very large and that numerical integration to find the marginal densities of the <math>\left.\theta_i\right.</math> would be computationally expensive. Then an alternative method of calculating the marginal densities is to create a Markov chain on the space <math>\left.\Theta\right.</math> by repeating these two steps:

# Pick a random index <math>1 \leq j \leq d</math>
# Pick a new value for <math>\left.\theta_j\right.</math> according to <math>g(\theta_1, \ldots , \theta_{j-1} , \, \cdot \, , \theta_{j+1} , \ldots , \theta_d )</math>

These steps define a [[Markov chain#Time reversal|reversible Markov chain]] with the desired invariant distribution <math>\left.g\right.</math>. This can be proved as follows. Define <math>x \sim_j y</math> if <math>\left.x_i = y_i\right.</math> for all <math>i \neq j</math>, and let <math>\left.p_{xy}\right.</math> denote the probability of a jump from <math>x \in \Theta</math> to <math>y \in \Theta</math>. Then the transition probabilities are

:<math>p_{xy} = \begin{cases} \frac{1}{d}\frac{g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z) } & x \sim_j y \\ 0 & \text{otherwise} \end{cases} </math>

So

:<math> g(x) p_{xy} = \frac{1}{d}\frac{ g(x) g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z) } = \frac{1}{d}\frac{ g(y) g(x)}{\sum_{z \in \Theta: z \sim_j y} g(z) } = g(y) p_{yx} </math>

since <math>x \sim_j y</math> is an [[equivalence relation]], so <math>x</math> and <math>y</math> lie in the same equivalence class and the two sums run over the same set of states <math>z</math>. Thus the [[detailed balance equations]] are satisfied, implying the chain is reversible and has invariant distribution <math>\left.g\right.</math>.

In practice, the index <math>\left.j\right.</math> is not chosen at random, and the chain cycles through the indexes in order.
In general this gives a non-stationary Markov process, but each individual step will still be reversible, and the overall process will still have the desired stationary distribution (as long as the chain can access all states under the fixed ordering).
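The two-step kernel and the detailed balance argument can be checked numerically on a small discrete state space. The sketch below (the target distribution and all names are illustrative assumptions, not taken from the article) builds the random-scan transition matrix <math>p_{xy}</math> explicitly and verifies that <math>g(x) p_{xy} = g(y) p_{yx}</math> holds, so <math>g</math> is invariant:

```python
import itertools
import numpy as np

# Illustrative setup: Theta = {0, 1, 2}^2, with an arbitrary positive
# (unnormalized) target weight g on each state.  Any positive g works.
d, k = 2, 3                                    # d coordinates, k values each
states = list(itertools.product(range(k), repeat=d))
g = {s: 1.0 + s[0] + 2 * s[1] for s in states}

def transition_matrix():
    """Random-scan Gibbs kernel: pick j uniformly from {0, ..., d-1},
    then resample coordinate j from g conditioned on the others
    (the two steps described in the text)."""
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for x in states:
        for j in range(d):
            # The equivalence class of x under ~_j: states agreeing
            # with x on every coordinate except possibly j.
            cls = [y for y in states
                   if all(y[i] == x[i] for i in range(d) if i != j)]
            norm = sum(g[y] for y in cls)      # sum_{z ~_j x} g(z)
            for y in cls:
                P[idx[x], idx[y]] += (1.0 / d) * g[y] / norm
    return P

P = transition_matrix()
pi = np.array([g[s] for s in states])
pi /= pi.sum()                                 # normalized target

flux = pi[:, None] * P                         # flux[x, y] = g(x) p_xy
assert np.allclose(flux, flux.T)               # detailed balance
assert np.allclose(pi @ P, pi)                 # hence pi is invariant
```

Because each coordinate update resamples within one equivalence class of <math>\sim_j</math>, the matrix of probability fluxes <code>g(x) p_xy</code> comes out exactly symmetric, which is the detailed balance condition proved above.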