==Formal definition==

===Discrete-time Markov chain===
{{Main|Discrete-time Markov chain}}
A discrete-time Markov chain is a sequence of [[random variable]]s ''X''<sub>1</sub>, ''X''<sub>2</sub>, ''X''<sub>3</sub>, ... with the [[Markov property]], namely that the probability of moving to the next state depends only on the present state and not on the previous states:

:<math>\Pr(X_{n+1}=x\mid X_1=x_1, X_2=x_2, \ldots, X_n=x_n) = \Pr(X_{n+1}=x\mid X_n=x_n),</math>

if both [[conditional probability|conditional probabilities]] are well defined, that is, if <math>\Pr(X_1=x_1,\ldots,X_n=x_n)>0.</math>

The possible values of ''X''<sub>''i''</sub> form a [[countable set]] ''S'' called the state space of the chain.

====Variations====
*{{Anchor|homogeneous}}Time-homogeneous Markov chains are processes where <math display="block">\Pr(X_{n+1}=x\mid X_n=y) = \Pr(X_n = x \mid X_{n-1} = y)</math> for all ''n''. The probability of the transition is independent of ''n''.
*Stationary Markov chains are processes where <math display="block">\Pr(X_{0}=x_0, X_{1} = x_1, \ldots, X_{k} = x_k) = \Pr(X_{n}=x_0, X_{n+1} = x_1, \ldots, X_{n+k} = x_k)</math> for all ''n'' and ''k''. Every stationary chain can be proved to be time-homogeneous by Bayes' rule.{{pb}}A necessary and sufficient condition for a time-homogeneous Markov chain to be stationary is that the distribution of <math>X_0</math> is a stationary distribution of the Markov chain.
*A Markov chain with memory (or a Markov chain of order ''m''), where ''m'' is finite, is a process satisfying <math display="block"> \begin{align} {} &\Pr(X_n=x_n\mid X_{n-1}=x_{n-1}, X_{n-2}=x_{n-2}, \dots , X_1=x_1) \\ = &\Pr(X_n=x_n\mid X_{n-1}=x_{n-1}, X_{n-2}=x_{n-2}, \dots, X_{n-m}=x_{n-m}) \text{ for }n > m \end{align} </math> In other words, the future state depends on the past ''m'' states. It is possible to construct a chain <math>(Y_n)</math> from <math>(X_n)</math> which has the 'classical' Markov property by taking as state space the ordered ''m''-tuples of ''X'' values, i.e., <math>Y_n= \left( X_n,X_{n-1},\ldots,X_{n-m+1} \right)</math>.
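As a concrete illustration, the following Python sketch samples a path of a time-homogeneous discrete-time chain; the two states and the transition probabilities are invented for the example, not drawn from any cited source. Each step draws the next state from a distribution that depends only on the current state, which is exactly the Markov property.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical two-state chain; the states and matrix are invented
# purely for illustration.
states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],   # transition probabilities out of "sunny"
              [0.5, 0.5]])  # transition probabilities out of "rainy"

rng = np.random.default_rng(seed=0)

def sample_path(n_steps, start=0):
    """Sample X_1, ..., X_n; each step uses only the current state."""
    path = [start]
    for _ in range(n_steps - 1):
        path.append(rng.choice(len(states), p=P[path[-1]]))
    return [states[i] for i in path]

print(sample_path(10))
</syntaxhighlight>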
===Continuous-time Markov chain===
{{Main|Continuous-time Markov chain}}
A continuous-time Markov chain (''X''<sub>''t''</sub>)<sub>''t'' ≥ 0</sub> is defined by a finite or countable state space ''S'', a [[transition rate matrix]] ''Q'' with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For ''i'' ≠ ''j'', the elements ''q''<sub>''ij''</sub> are non-negative and describe the rate at which the process transitions from state ''i'' to state ''j''. The elements ''q''<sub>''ii''</sub> are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one.

There are three equivalent definitions of the process.<ref name="norris1">{{cite book|title=Markov Chains|year=1997|isbn=9780511810633|pages=60–107|chapter=Continuous-time Markov chains I|doi=10.1017/CBO9780511810633.004|last1=Norris|first1=J. R.|author-link1=James R. Norris}}</ref>

====Infinitesimal definition====
[[File:Intensities_vs_transition_probabilities.svg|thumb|The continuous time Markov chain is characterized by the transition rates, the derivatives with respect to time of the transition probabilities between states i and j.]]
Let <math>X_t</math> be the random variable describing the state of the process at time ''t'', and assume the process is in a state ''i'' at time ''t''. Then, knowing <math>X_t = i</math>, the event <math>X_{t+h}=j</math> is independent of previous values <math>\left( X_s : s < t \right)</math>, and as ''h'' → 0, for all ''j'' and for all ''t'',
<math display="block">\Pr(X(t+h) = j \mid X(t) = i) = \delta_{ij} + q_{ij}h + o(h),</math>
where <math>\delta_{ij}</math> is the [[Kronecker delta]], using the [[little-o notation]]. The <math>q_{ij}</math> can be seen as measuring how quickly the transition from ''i'' to ''j'' happens.

====Jump chain/holding time definition====
Define a discrete-time Markov chain ''Y''<sub>''n''</sub> to describe the ''n''th jump of the process and variables ''S''<sub>1</sub>, ''S''<sub>2</sub>, ''S''<sub>3</sub>, ... to describe holding times in each of the states, where ''S''<sub>''i''</sub> follows the [[exponential distribution]] with rate parameter −''q''<sub>''Y''<sub>''i''</sub>''Y''<sub>''i''</sub></sub>.

====Transition probability definition====
For any value ''n'' = 0, 1, 2, 3, ..., times indexed up to this value of ''n'': ''t''<sub>0</sub>, ''t''<sub>1</sub>, ''t''<sub>2</sub>, ..., and all states recorded at these times ''i''<sub>0</sub>, ''i''<sub>1</sub>, ''i''<sub>2</sub>, ''i''<sub>3</sub>, ..., it holds that

:<math>\Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_0} = i_0 , X_{t_1} = i_1 , \ldots, X_{t_n} = i_n ) = p_{i_n i_{n+1}}( t_{n+1} - t_n)</math>

where ''p''<sub>''ij''</sub> is the solution of the [[forward equation]] (a [[first-order differential equation]])

:<math>P'(t) = P(t) Q</math>

with initial condition ''P''(0) equal to the [[identity matrix]].
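The jump chain/holding time definition doubles as a simulation recipe. The following sketch alternates exponentially distributed holding times with jumps of the embedded discrete chain; the 3×3 rate matrix is invented for the example, and every state is assumed non-absorbing (each ''q''<sub>''ii''</sub> < 0).

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical transition rate matrix: each row sums to zero and the
# off-diagonal entries are the rates q_ij (invented for illustration).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  1.0, -2.0]])

rng = np.random.default_rng(seed=0)

def sample_ctmc(t_max, start=0):
    """Alternate Exp(-q_ii) holding times with jumps of the jump chain,
    whose transition probabilities are q_ij / (-q_ii) for j != i."""
    t, state = 0.0, start
    path = [(0.0, start)]
    while True:
        rate = -Q[state, state]           # assumes q_ii < 0 (no absorbing state)
        t += rng.exponential(1.0 / rate)  # holding time with rate -q_ii
        if t >= t_max:
            return path
        p = Q[state].copy()
        p[state] = 0.0
        state = rng.choice(len(Q), p=p / rate)
        path.append((t, state))

print(sample_ctmc(5.0))
</syntaxhighlight>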
===Finite state space===
If the state space is [[finite set|finite]], the transition probability distribution can be represented by a [[matrix (mathematics)|matrix]], called the transition matrix, with the (''i'', ''j'')th [[element (mathematics)|element]] of '''P''' equal to

:<math>p_{ij} = \Pr(X_{n+1}=j\mid X_n=i). </math>

Since each row of '''P''' sums to one and all elements are non-negative, '''P''' is a [[right stochastic matrix]].

====Stationary distribution relation to eigenvectors and simplices====
A stationary distribution {{pi}} is a (row) vector whose entries are non-negative and sum to 1, and which is unchanged by the operation of the transition matrix '''P''' on it; it is therefore defined by

:<math> \pi\mathbf{P} = \pi.</math>

By comparing this definition with that of an [[eigenvector]] we see that the two concepts are related and that

:<math>\pi=\frac{e}{\sum_i{e_i}}</math>

is a normalized (<math display="inline">\sum_i \pi_i=1</math>) multiple of a left eigenvector '''e''' of the transition matrix '''P''' with an [[eigenvalue]] of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.

The values of a stationary distribution <math> \textstyle \pi_i </math> are associated with the state space of '''P''', and the normalization preserves the relative proportions of the entries of the eigenvector. Since the components of {{pi}} are positive, and the constraint that their sum is unity can be rewritten as <math display="inline">\sum_i 1 \cdot \pi_i=1</math>, we see that the [[dot product]] of {{pi}} with a vector whose components are all 1 is unity and that {{pi}} lies on a [[standard simplex|simplex]].
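In practice, the left eigenvector with eigenvalue 1 can be computed numerically. The following NumPy sketch does this for an invented 3-state transition matrix (the numbers are for illustration only).

<syntaxhighlight lang="python">
import numpy as np

# Right stochastic matrix of a hypothetical 3-state chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Left eigenvectors of P are the (right) eigenvectors of P^T.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigenvalues - 1.0))  # index of the unit eigenvalue
e = np.real(eigenvectors[:, i])

pi = e / e.sum()  # normalize so the entries sum to 1
print(pi)         # the stationary distribution
print(pi @ P)     # equals pi, up to rounding error
</syntaxhighlight>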
====Time-homogeneous Markov chain with a finite state space====
If the Markov chain is time-homogeneous, then the transition matrix '''P''' is the same after each step, so the ''k''-step transition probability can be computed as the ''k''-th power of the transition matrix, '''P'''<sup>''k''</sup>.

If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution {{pi}}.<ref name="auto">{{Cite book |last=Serfozo |first=Richard |date=2009 |title=Basics of Applied Stochastic Processes |series=Probability and Its Applications |doi=10.1007/978-3-540-89332-5 |isbn=978-3-540-89331-8 |place=Berlin |publisher=Springer }}</ref> Additionally, in this case '''P'''<sup>''k''</sup> converges to a rank-one matrix in which each row is the stationary distribution {{pi}}:

:<math>\lim_{k\to\infty}\mathbf{P}^k=\mathbf{1}\pi</math>

where '''1''' is the column vector with all entries equal to 1. This is stated by the [[Perron–Frobenius theorem]]. If, by whatever means, <math display="inline">\lim_{k\to\infty}\mathbf{P}^k</math> is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.

For some stochastic matrices '''P''', the limit <math display="inline">\lim_{k\to\infty}\mathbf{P}^k</math> does not exist while the stationary distribution does, as shown by this example:

:<math>\mathbf P=\begin{pmatrix} 0& 1\\ 1& 0 \end{pmatrix} \qquad \mathbf P^{2k}=I \qquad \mathbf P^{2k+1}=\mathbf P</math>
:<math>\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\end{pmatrix}\begin{pmatrix} 0& 1\\ 1& 0 \end{pmatrix}=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\end{pmatrix}</math>

(This example illustrates a periodic Markov chain.)

Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let '''P''' be an ''n''×''n'' matrix, and define <math display="inline">\mathbf{Q} = \lim_{k\to\infty}\mathbf{P}^k.</math>

It is always true that

:<math>\mathbf{QP} = \mathbf{Q}.</math>

Subtracting '''Q''' from both sides and factoring then yields

:<math>\mathbf{Q}(\mathbf{P} - \mathbf{I}_{n}) = \mathbf{0}_{n,n} ,</math>

where '''I'''<sub>''n''</sub> is the [[identity matrix]] of size ''n'', and '''0'''<sub>''n'',''n''</sub> is the [[zero matrix]] of size ''n''×''n''. Multiplying together stochastic matrices always yields another stochastic matrix, so '''Q''' must be a [[stochastic matrix]] (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that '''Q''' is a stochastic matrix to solve for '''Q'''. Including the fact that the sum of each of the rows in '''P''' is 1, there are ''n''+1 equations for determining ''n'' unknowns, so it is computationally easier to replace one column of '''P''' − '''I'''<sub>''n''</sub> with a column of ones (encoding the constraint that each row of '''Q''' sums to one), make the same replacement in the zero matrix on the right-hand side, and then multiply the transformed zero matrix on the right by the inverse of the transformed '''P''' − '''I'''<sub>''n''</sub> to find '''Q'''.

Here is one method for doing so: first, define the function ''f''('''A''') to return the matrix '''A''' with its right-most column replaced with all 1's. If [''f''('''P''' − '''I'''<sub>''n''</sub>)]<sup>−1</sup> exists then<ref>{{cite web|url=https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter11.pdf|title=Chapter 11 "Markov Chains"|access-date=2017-06-02}}</ref><ref name="auto"/>

:<math>\mathbf{Q}=f(\mathbf{0}_{n,n})[f(\mathbf{P}-\mathbf{I}_n)]^{-1}.</math>

To explain: the original matrix equation is equivalent to a [[system of linear equations|system of ''n''×''n'' linear equations]] in ''n''×''n'' variables, and there are ''n'' more linear equations from the fact that '''Q''' is a right [[stochastic matrix]], each of whose rows sums to 1. Any ''n''×''n'' independent linear equations among these (''n''×''n''+''n'') equations suffice to solve for the ''n''×''n'' variables. Here, the ''n'' equations from "'''Q''' multiplied by the right-most column of ('''P'''−'''I'''<sub>''n''</sub>)" have been replaced by the ''n'' stochastic ones.

One thing to notice is that if '''P''' has an element '''P'''<sub>''i'',''i''</sub> on its main diagonal that is equal to 1 and the ''i''th row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers '''P'''<sup>''k''</sup>. Hence, the ''i''th row or column of '''Q''' will have the 1 and the 0's in the same positions as in '''P'''.
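A direct transcription of this construction into NumPy might look as follows; the example matrix is the same invented 3-state chain used above, which is irreducible and aperiodic, so the limit '''Q''' exists.

<syntaxhighlight lang="python">
import numpy as np

def f(A):
    """Return A with its right-most column replaced with all 1's."""
    B = A.copy()
    B[:, -1] = 1.0
    return B

# Invented irreducible, aperiodic 3-state example, so lim P^k exists.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
n = len(P)

# Q = f(0_{n,n}) [f(P - I_n)]^{-1}, as in the formula above.
Q = f(np.zeros((n, n))) @ np.linalg.inv(f(P - np.eye(n)))

print(Q)                              # each row is the stationary distribution
print(np.linalg.matrix_power(P, 50))  # agrees with Q up to rounding
</syntaxhighlight>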
====Convergence speed to the stationary distribution====
As stated earlier, from the equation <math>\boldsymbol{\pi} = \boldsymbol{\pi} \mathbf{P},</math> (if it exists) the stationary (or steady state) distribution '''{{pi}}''' is a left eigenvector of the row [[stochastic matrix]] '''P'''. Assuming that '''P''' is diagonalizable, or equivalently that '''P''' has ''n'' linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, that is, [[defective matrix|defective matrices]], one may start with the [[Jordan normal form]] of '''P''' and proceed with a somewhat more involved set of arguments in a similar way.<ref>{{cite journal |last1=Schmitt |first1=Florian |last2=Rothlauf |first2=Franz |title=On the Importance of the Second Largest Eigenvalue on the Convergence Rate of Genetic Algorithms |journal=Proceedings of the 14th Symposium on Reliable Distributed Systems |date=2001 |citeseerx=10.1.1.28.6191 }}</ref>)

Let '''U''' be the matrix of eigenvectors (each normalized to have an L2 norm equal to 1) where each column is a left eigenvector of '''P''', and let '''Σ''' be the diagonal matrix of left eigenvalues of '''P''', that is, '''Σ''' = diag(''λ''<sub>1</sub>,''λ''<sub>2</sub>,''λ''<sub>3</sub>,...,''λ''<sub>''n''</sub>). Then by [[eigendecomposition]]

:<math> \mathbf{P} = \mathbf{U\Sigma U}^{-1} .</math>

Let the eigenvalues be enumerated such that:

:<math> 1 = |\lambda_1 |> |\lambda_2 | \geq |\lambda_3 | \geq \cdots \geq |\lambda_n|.</math>

Since '''P''' is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector are unique too (because there is no other '''{{pi}}''' which solves the stationary distribution equation above). Let '''u'''<sub>''i''</sub> be the ''i''-th column of the matrix '''U''', that is, '''u'''<sub>''i''</sub> is the left eigenvector of '''P''' corresponding to ''λ''<sub>''i''</sub>. Also let '''x''' be a length ''n'' row vector that represents a valid probability distribution; since the eigenvectors '''u'''<sub>''i''</sub> span <math>\R^n,</math> we can write

:<math> \mathbf{x}^\mathsf{T} = \sum_{i=1}^n a_i \mathbf{u}_i, \qquad a_i \in \R.</math>

If we multiply '''x''' with '''P''' from the right and continue this operation with the results, in the end we get the stationary distribution '''{{pi}}'''. In other words, '''{{pi}}''' = '''a'''<sub>1</sub> '''u'''<sub>1</sub> ← '''xPP'''...'''P''' = '''xP'''<sup>''k''</sup> as ''k'' → ∞. That means

:<math>\begin{align} \boldsymbol{\pi}^{(k)} &= \mathbf{x} \left (\mathbf{U\Sigma U}^{-1} \right ) \left (\mathbf{U\Sigma U}^{-1} \right )\cdots \left (\mathbf{U\Sigma U}^{-1} \right ) \\ &= \mathbf{xU\Sigma}^k \mathbf{U}^{-1} \\ &= \left (a_1\mathbf{u}_1^\mathsf{T} + a_2\mathbf{u}_2^\mathsf{T} + \cdots + a_n\mathbf{u}_n^\mathsf{T} \right )\mathbf{U\Sigma}^k\mathbf{U}^{-1} \\ &= a_1\lambda_1^k\mathbf{u}_1^\mathsf{T} + a_2\lambda_2^k\mathbf{u}_2^\mathsf{T} + \cdots + a_n\lambda_n^k\mathbf{u}_n^\mathsf{T} && u_i \bot u_j \text{ for } i\neq j \\ & = \lambda_1^k\left\{a_1\mathbf{u}_1^\mathsf{T} + a_2\left(\frac{\lambda_2}{\lambda_1}\right)^k\mathbf{u}_2^\mathsf{T} + a_3\left(\frac{\lambda_3}{\lambda_1}\right)^k\mathbf{u}_3^\mathsf{T} + \cdots + a_n\left(\frac{\lambda_n}{\lambda_1}\right)^k\mathbf{u}_n^\mathsf{T}\right\} \end{align}</math>

Since '''{{pi}}''' is parallel to '''u'''<sub>1</sub> (normalized by L2 norm) and '''{{pi}}'''<sup>(''k'')</sup> is a probability vector, '''{{pi}}'''<sup>(''k'')</sup> approaches '''a'''<sub>1</sub> '''u'''<sub>1</sub> = '''{{pi}}''' exponentially as ''k'' → ∞, at a rate on the order of |''λ''<sub>2</sub>|/|''λ''<sub>1</sub>|. This follows because <math> |\lambda_2| \geq \cdots \geq |\lambda_n|,</math> hence ''λ''<sub>2</sub>/''λ''<sub>1</sub> is the dominant term. The smaller the ratio is, the faster the convergence is.<ref>{{Cite journal | volume = 37 | issue = 3| pages = 387–405| last = Rosenthal| first = Jeffrey S.| title = Convergence Rates for Markov Chains| journal = SIAM Review| access-date = 2021-05-31| date = 1995| doi = 10.1137/1037083| url = https://www.jstor.org/stable/2132659| jstor = 2132659}}</ref> Random noise in the state distribution '''{{pi}}''' can also speed up this convergence to the stationary distribution.<ref>{{cite journal|last=Franzke|first=Brandon|author2=Kosko, Bart|date=1 October 2011|title=Noise can speed convergence in Markov chains|journal=Physical Review E|volume=84|issue=4|pages=041112|bibcode=2011PhRvE..84d1112F|doi=10.1103/PhysRevE.84.041112|pmid=22181092}}</ref>
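The ratio |''λ''<sub>2</sub>|/|''λ''<sub>1</sub>| = |''λ''<sub>2</sub>| can be checked numerically. The short sketch below (using the same invented example matrix as above) compares the observed distance to stationarity with the predicted geometric decay.

<syntaxhighlight lang="python">
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Eigenvalue magnitudes in decreasing order: the largest is 1 and the
# second largest governs the convergence rate.
mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lam2 = mags[1]

x = np.array([1.0, 0.0, 0.0])           # an arbitrary starting distribution
pi = np.linalg.matrix_power(P, 100)[0]  # (numerical) stationary distribution
for k in (5, 10, 20):
    err = np.abs(x @ np.linalg.matrix_power(P, k) - pi).sum()
    print(k, err, lam2 ** k)            # err shrinks roughly like lam2**k
</syntaxhighlight>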
===General state space===
{{main|Markov chains on a measurable state space}}

====Harris chains====
Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through [[Harris chain]]s. The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space.

====Locally interacting Markov chains====
"Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See [[interacting particle system]] and [[stochastic cellular automata]] (probabilistic cellular automata). See for instance ''Interaction of Markov Processes''<ref>{{cite journal|last=Spitzer|first=Frank|year=1970|title=Interaction of Markov Processes|journal=[[Advances in Mathematics]]|volume=5|issue=2|pages=246–290|doi=10.1016/0001-8708(70)90034-4|doi-access=free}}</ref> or ''Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis''.<ref>{{cite book |url=https://books.google.com/books?id=0Wa7AAAAIAAJ&q=locally+interacting+markov+chains+toom+Dobrushin&pg=PA181 |title=Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis|last1=Dobrushin| first1=R. L.| authorlink1=Roland Dobrushin|last2=Kryukov| first2=V.I.|last3=Toom|first3=A. L.|year=1978|publisher=Manchester University Press|isbn=9780719022067|access-date=2016-03-04}}</ref>