===Finite state space===
If the state space is [[finite set|finite]], the transition probability distribution can be represented by a [[matrix (mathematics)|matrix]], called the transition matrix, with the (''i'', ''j'')th [[element (mathematics)|element]] of '''P''' equal to

:<math>p_{ij} = \Pr(X_{n+1}=j\mid X_n=i).</math>

Since each row of '''P''' sums to one and all elements are non-negative, '''P''' is a [[right stochastic matrix]].

====Stationary distribution relation to eigenvectors and simplices====
A stationary distribution {{pi}} is a (row) vector whose entries are non-negative and sum to 1, and which is unchanged by the operation of the transition matrix '''P''' on it, so it is defined by

:<math> \pi\mathbf{P} = \pi.</math>

By comparing this definition with that of an [[eigenvector]] we see that the two concepts are related and that

:<math>\pi=\frac{e}{\sum_i{e_i}}</math>

is a normalized (<math display="inline">\sum_i \pi_i=1</math>) multiple of a left eigenvector '''e''' of the transition matrix '''P''' with an [[eigenvalue]] of 1. If there is more than one unit eigenvector, then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.

The values <math display="inline">\pi_i</math> of a stationary distribution are associated with the state space of '''P''', and the eigenvectors of '''P''' have their relative proportions preserved. Since the components of {{pi}} are positive, and the constraint that their sum is unity can be rewritten as <math display="inline">\sum_i 1 \cdot \pi_i=1</math>, we see that the [[dot product]] of {{pi}} with a vector whose components are all 1 is unity and that {{pi}} lies on a [[standard simplex|simplex]].

====Time-homogeneous Markov chain with a finite state space====
If the Markov chain is time-homogeneous, then the transition matrix '''P''' is the same after each step, so the ''k''-step transition probability can be computed as the ''k''-th power of the transition matrix, '''P'''<sup>''k''</sup>.

If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution {{pi}}.<ref name="auto">{{Cite book |last=Serfozo |first=Richard |date=2009 |title=Basics of Applied Stochastic Processes |series=Probability and Its Applications |doi=10.1007/978-3-540-89332-5 |isbn=978-3-540-89331-8 |place=Berlin |publisher=Springer }}</ref> Additionally, in this case '''P'''<sup>''k''</sup> converges to a rank-one matrix in which each row is the stationary distribution {{pi}}:

:<math>\lim_{k\to\infty}\mathbf{P}^k=\mathbf{1}\pi</math>

where '''1''' is the column vector with all entries equal to 1. This is stated by the [[Perron–Frobenius theorem]]. If, by whatever means, <math display="inline">\lim_{k\to\infty}\mathbf{P}^k</math> is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.

For some stochastic matrices '''P''', the limit <math display="inline">\lim_{k\to\infty}\mathbf{P}^k</math> does not exist while the stationary distribution does, as shown by this example:

:<math>\mathbf P=\begin{pmatrix} 0& 1\\ 1& 0 \end{pmatrix} \qquad \mathbf P^{2k}=I \qquad \mathbf P^{2k+1}=\mathbf P</math>
:<math>\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\end{pmatrix}\begin{pmatrix} 0& 1\\ 1& 0 \end{pmatrix}=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\end{pmatrix}</math>

(This example illustrates a periodic Markov chain.)
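For chains that do converge, a minimal numerical sketch (assuming [[NumPy]] is available; the two-state matrix and all names below are illustrative, not taken from this article) of extracting {{pi}} as a left eigenvector and watching '''P'''<sup>''k''</sup> approach the rank-one limit:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative two-state chain (irreducible and aperiodic); an assumed
# example, not taken from the article.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvector of P for eigenvalue 1 (an eigenvector of P transposed),
# normalized so that its entries sum to 1.
w, v = np.linalg.eig(P.T)
e = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = e / e.sum()
print(pi)                             # [0.8333... 0.1666...]

# P^k converges to the rank-one matrix 1*pi: every row approaches pi.
print(np.linalg.matrix_power(P, 50))  # both rows ~ [0.8333 0.1667]
</syntaxhighlight>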
Because there are a number of different special cases to consider, the process of finding this limit, if it exists, can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let '''P''' be an ''n''×''n'' matrix, and define <math display="inline">\mathbf{Q} = \lim_{k\to\infty}\mathbf{P}^k.</math>

It is always true that

:<math>\mathbf{QP} = \mathbf{Q}.</math>

Subtracting '''Q''' from both sides and factoring then yields

:<math>\mathbf{Q}(\mathbf{P} - \mathbf{I}_{n}) = \mathbf{0}_{n,n} ,</math>

where '''I'''<sub>''n''</sub> is the [[identity matrix]] of size ''n'', and '''0'''<sub>''n'',''n''</sub> is the [[zero matrix]] of size ''n''×''n''. Multiplying together stochastic matrices always yields another stochastic matrix, so '''Q''' must be a [[stochastic matrix]] (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that '''Q''' is a stochastic matrix to solve for '''Q'''. Including the fact that each row of '''P''' sums to 1, there are ''n''+1 equations for determining the ''n'' unknowns in each row of '''Q''', so it is computationally easier if, on the one hand, one selects one column in ('''P''' − '''I'''<sub>''n''</sub>) and substitutes each of its elements by one, and on the other hand one substitutes the corresponding element (the one in the same column) in the zero row vector by one, and next multiplies this latter vector by the inverse of the transformed former matrix to find '''Q'''.

Here is one method for doing so: first, define the function ''f''('''A''') to return the matrix '''A''' with its right-most column replaced with all 1's. If [''f''('''P''' − '''I'''<sub>''n''</sub>)]<sup>−1</sup> exists, then<ref>{{cite web|url=https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter11.pdf|title=Chapter 11 "Markov Chains"|access-date=2017-06-02}}</ref><ref name="auto"/>

:<math>\mathbf{Q}=f(\mathbf{0}_{n,n})[f(\mathbf{P}-\mathbf{I}_n)]^{-1}.</math>

:To explain: the original matrix equation is equivalent to a [[system of linear equations|system of ''n''×''n'' linear equations]] in ''n''×''n'' variables, and the fact that '''Q''' is a right [[stochastic matrix]], each of whose rows sums to 1, supplies ''n'' more linear equations. Any ''n''×''n'' independent linear equations among these ''n''×''n''+''n'' equations suffice to solve for the ''n''×''n'' variables. In this method, the ''n'' equations from "'''Q''' multiplied by the right-most column of ('''P''' − '''I'''<sub>''n''</sub>)" have been replaced by the ''n'' stochastic ones.

One thing to notice is that if '''P''' has an element '''P'''<sub>''i'',''i''</sub> on its main diagonal that is equal to 1 and the ''i''th row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers '''P'''<sup>''k''</sup>. Hence, the ''i''th row or column of '''Q''' will have the 1 and the 0's in the same positions as in '''P'''.
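A short sketch of the ''f''-based formula above (assuming NumPy; the matrix is the same illustrative two-state example as before, and ''f'' is implemented exactly as defined):

<syntaxhighlight lang="python">
import numpy as np

def f(A):
    """Return a copy of A with its right-most column replaced by all 1's."""
    B = A.copy()
    B[:, -1] = 1.0
    return B

P = np.array([[0.9, 0.1],    # illustrative two-state transition matrix
              [0.5, 0.5]])
n = P.shape[0]

# Q = f(0_{n,n}) [f(P - I_n)]^{-1}, valid whenever the inverse exists.
Q = f(np.zeros((n, n))) @ np.linalg.inv(f(P - np.eye(n)))
print(Q)  # every row is the stationary distribution [0.8333 0.1667]
</syntaxhighlight>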
====Convergence speed to the stationary distribution====
As stated earlier, from the equation <math>\boldsymbol{\pi} = \boldsymbol{\pi} \mathbf{P}</math> (if it exists), the stationary (or steady-state) distribution '''{{pi}}''' is a left eigenvector of the row [[stochastic matrix]] '''P'''. Then, assuming that '''P''' is diagonalizable, or equivalently that '''P''' has ''n'' linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, that is, [[defective matrix|defective matrices]], one may start with the [[Jordan normal form]] of '''P''' and proceed with a somewhat more involved set of arguments in a similar way.<ref>{{cite journal |last1=Schmitt |first1=Florian |last2=Rothlauf |first2=Franz |title=On the Importance of the Second Largest Eigenvalue on the Convergence Rate of Genetic Algorithms |journal=Proceedings of the 14th Symposium on Reliable Distributed Systems |date=2001 |citeseerx=10.1.1.28.6191 }}</ref>)

Let '''U''' be the matrix of eigenvectors (each normalized to have an L2 norm equal to 1), where each column is a left eigenvector of '''P''', and let '''Σ''' be the diagonal matrix of left eigenvalues of '''P''', that is, '''Σ''' = diag(''λ''<sub>1</sub>, ''λ''<sub>2</sub>, ''λ''<sub>3</sub>, ..., ''λ''<sub>''n''</sub>). Then by [[eigendecomposition]]

:<math> \mathbf{P} = \mathbf{U\Sigma U}^{-1} .</math>

Let the eigenvalues be enumerated such that

:<math> 1 = |\lambda_1 |> |\lambda_2 | \geq |\lambda_3 | \geq \cdots \geq |\lambda_n|.</math>

Since '''P''' is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector are unique too (because there is no other '''{{pi}}''' which solves the stationary distribution equation above). Let '''u'''<sub>''i''</sub> be the ''i''-th column of the matrix '''U''', that is, '''u'''<sub>''i''</sub> is the left eigenvector of '''P''' corresponding to ''λ''<sub>''i''</sub>. Also let '''x''' be a length-''n'' row vector that represents a valid probability distribution; since the eigenvectors '''u'''<sub>''i''</sub> span <math>\R^n,</math> we can write

:<math> \mathbf{x}^\mathsf{T} = \sum_{i=1}^n a_i \mathbf{u}_i, \qquad a_i \in \R.</math>

If we multiply '''x''' by '''P''' from the right and repeat this operation with the results, in the end we get the stationary distribution '''{{pi}}'''. In other words, '''{{pi}}''' = ''a''<sub>1</sub>'''u'''<sub>1</sub> ← '''xPP'''...'''P''' = '''xP'''<sup>''k''</sup> as ''k'' → ∞. That means

:<math>\begin{align} \boldsymbol{\pi}^{(k)} &= \mathbf{x} \left (\mathbf{U\Sigma U}^{-1} \right ) \left (\mathbf{U\Sigma U}^{-1} \right )\cdots \left (\mathbf{U\Sigma U}^{-1} \right ) \\ &= \mathbf{xU\Sigma}^k \mathbf{U}^{-1} \\ &= \left (a_1\mathbf{u}_1^\mathsf{T} + a_2\mathbf{u}_2^\mathsf{T} + \cdots + a_n\mathbf{u}_n^\mathsf{T} \right )\mathbf{U\Sigma}^k\mathbf{U}^{-1} \\ &= a_1\lambda_1^k\mathbf{u}_1^\mathsf{T} + a_2\lambda_2^k\mathbf{u}_2^\mathsf{T} + \cdots + a_n\lambda_n^k\mathbf{u}_n^\mathsf{T} && u_i \bot u_j \text{ for } i\neq j \\ & = \lambda_1^k\left\{a_1\mathbf{u}_1^\mathsf{T} + a_2\left(\frac{\lambda_2}{\lambda_1}\right)^k\mathbf{u}_2^\mathsf{T} + a_3\left(\frac{\lambda_3}{\lambda_1}\right)^k\mathbf{u}_3^\mathsf{T} + \cdots + a_n\left(\frac{\lambda_n}{\lambda_1}\right)^k\mathbf{u}_n^\mathsf{T}\right\} \end{align}</math>

Since '''{{pi}}''' is parallel to '''u'''<sub>1</sub> (normalized by L2 norm) and '''{{pi}}'''<sup>(''k'')</sup> is a probability vector, '''{{pi}}'''<sup>(''k'')</sup> approaches ''a''<sub>1</sub>'''u'''<sub>1</sub> = '''{{pi}}''' as ''k'' → ∞ exponentially, at a speed on the order of ''λ''<sub>2</sub>/''λ''<sub>1</sub>. This follows because <math> |\lambda_2| \geq \cdots \geq |\lambda_n|,</math> so ''λ''<sub>2</sub>/''λ''<sub>1</sub> is the dominant term.
The smaller the ratio, the faster the convergence.<ref>{{Cite journal | volume = 37 | issue = 3| pages = 387–405| last = Rosenthal| first = Jeffrey S.| title = Convergence Rates for Markov Chains| journal = SIAM Review| access-date = 2021-05-31| date = 1995| doi = 10.1137/1037083| url = https://www.jstor.org/stable/2132659| jstor = 2132659}}</ref> Random noise in the state distribution '''{{pi}}''' can also speed up this convergence to the stationary distribution.<ref>{{cite journal|last=Franzke|first=Brandon|author2=Kosko, Bart|date=1 October 2011|title=Noise can speed convergence in Markov chains|journal=Physical Review E|volume=84|issue=4|pages=041112|bibcode=2011PhRvE..84d1112F|doi=10.1103/PhysRevE.84.041112|pmid=22181092}}</ref>
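As a numerical illustration, a brief sketch (assuming NumPy; the two-state matrix is the same illustrative example used above) of reading the convergence rate off the eigenvalue moduli:

<syntaxhighlight lang="python">
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Eigenvalue moduli sorted in decreasing order; the largest is 1.
moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
rate = moduli[1] / moduli[0]   # |lambda_2| / |lambda_1|
print(rate)                    # 0.4: the error decays roughly like 0.4**k
</syntaxhighlight>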