===Discrete-time Markov chain===
{{Main|Discrete-time Markov chain}}

A discrete-time Markov chain is a sequence of [[random variable]]s ''X''<sub>1</sub>, ''X''<sub>2</sub>, ''X''<sub>3</sub>, ... with the [[Markov property]], namely that the probability of moving to the next state depends only on the present state and not on the previous states:

:<math>\Pr(X_{n+1}=x\mid X_1=x_1, X_2=x_2, \ldots, X_n=x_n) = \Pr(X_{n+1}=x\mid X_n=x_n),</math>

if both [[conditional probability|conditional probabilities]] are well defined, that is, if

:<math>\Pr(X_1=x_1,\ldots,X_n=x_n)>0.</math>

The possible values of ''X''<sub>''i''</sub> form a [[countable set]] ''S'' called the state space of the chain.

====Variations====
*{{Anchor|homogeneous}}Time-homogeneous Markov chains are processes where <math display="block">\Pr(X_{n+1}=x\mid X_n=y) = \Pr(X_n = x \mid X_{n-1} = y)</math> for all ''n''. The probability of the transition is independent of ''n''.
*Stationary Markov chains are processes where <math display="block">\Pr(X_{0}=x_0, X_{1} = x_1, \ldots, X_{k} = x_k) = \Pr(X_{n}=x_0, X_{n+1} = x_1, \ldots, X_{n+k} = x_k)</math> for all ''n'' and ''k''. Every stationary chain can be proved to be time-homogeneous by Bayes' rule.{{pb}}A necessary and sufficient condition for a time-homogeneous Markov chain to be stationary is that the distribution of <math>X_0</math> is a stationary distribution of the Markov chain.
*A Markov chain with memory (or a Markov chain of order ''m''), where ''m'' is finite, is a process satisfying <math display="block"> \begin{align} {} &\Pr(X_n=x_n\mid X_{n-1}=x_{n-1}, X_{n-2}=x_{n-2}, \dots , X_1=x_1) \\ = &\Pr(X_n=x_n\mid X_{n-1}=x_{n-1}, X_{n-2}=x_{n-2}, \dots, X_{n-m}=x_{n-m}) \text{ for }n > m. \end{align} </math> In other words, the future state depends on the past ''m'' states. It is possible to construct a chain <math>(Y_n)</math> from <math>(X_n)</math> which has the 'classical' Markov property by taking as state space the ordered ''m''-tuples of ''X'' values, i.e., <math>Y_n= \left( X_n,X_{n-1},\ldots,X_{n-m+1} \right)</math>, as illustrated in the sketch after this list.
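For illustration, the following is a minimal Python sketch of the two constructions above: simulating a time-homogeneous chain from its one-step transition probabilities, and recasting an order-''m'' chain as a first-order chain on ''m''-tuples. The weather states, the transition table <code>P</code>, and the helpers <code>simulate</code>, <code>lift_step</code>, and <code>kernel</code> are hypothetical examples chosen for this sketch, not part of the definitions.

<syntaxhighlight lang="python">
import random

# Hypothetical two-state weather chain; the numbers are illustrative only.
# P[s][t] = Pr(X_{n+1} = t | X_n = s); each row sums to 1 (time-homogeneous).
P = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state):
    """Draw the next state from the one-step transition distribution."""
    states = list(P[state])
    weights = [P[state][t] for t in states]
    return random.choices(states, weights=weights)[0]

def simulate(x0, n):
    """Simulate X_0, X_1, ..., X_n; each step uses only the current state
    (the Markov property)."""
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1]))
    return path

def lift_step(y, sample_next):
    """One step of the lifted first-order chain built from an order-m chain.

    y           -- composite state Y_n = (x_n, x_{n-1}, ..., x_{n-m+1})
    sample_next -- draws X_{n+1} given the last m values (the order-m kernel)
    """
    x_next = sample_next(y)
    return (x_next,) + y[:-1]   # shift the window: newest value in front

if __name__ == "__main__":
    print(simulate("sunny", 10))

    # Illustrative order-2 kernel: two rainy days in a row make rain likelier.
    def kernel(y):
        p_rain = 0.8 if y == ("rainy", "rainy") else 0.2
        return random.choices(["rainy", "sunny"], weights=[p_rain, 1 - p_rain])[0]

    y = ("sunny", "sunny")      # Y_1 = (X_1, X_0) for m = 2
    for _ in range(10):
        y = lift_step(y, kernel)
        print(y[0], end=" ")
    print()
</syntaxhighlight>

Note that in the lifted chain the next composite state is determined by the current tuple alone, so <math>(Y_n)</math> satisfies the classical Markov property even though <math>(X_n)</math> depends on ''m'' past values.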