Examples of Markov chains
== Discrete-time ==

=== Board games played with dice ===
A game of [[snakes and ladders]] or any other game whose moves are determined entirely by [[dice]] is a Markov chain; indeed, it is an [[absorbing Markov chain]]. This is in contrast to card games such as [[blackjack]], where the cards represent a 'memory' of the past moves. To see the difference, consider the probability for a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state and the next roll of the dice. It does not depend on how things got to their current state. In a game such as blackjack, a player can gain an advantage by remembering which cards have already been shown (and hence which cards are no longer in the deck), so the next state (or hand) of the game is not independent of the past states.

=== Random walk Markov chains ===
{{See also|Random walk}}

==== A center-biased random walk ====
Consider a [[random walk]] on the number line where, at each step, the position (call it ''x'') may change by +1 (to the right) or −1 (to the left) with probabilities:

: <math>P_{\mathrm{move~left}} = \dfrac{1}{2} + \dfrac{1}{2} \left( \dfrac{x}{c+|x|} \right) </math>
: <math>P_{\mathrm{move~right}} = 1 - P_{\mathrm{move~left}}</math>

(where ''c'' is a constant greater than 0)

For example, if the constant ''c'' equals 1, the probabilities of a move to the left at positions ''x'' = −2, −1, 0, 1, 2 are given by <math>\dfrac{1}{6},\dfrac{1}{4},\dfrac{1}{2},\dfrac{3}{4},\dfrac{5}{6}</math> respectively. The random walk has a centering effect that weakens as ''c'' increases. Since the probabilities depend only on the current position (value of ''x'') and not on any prior positions, this biased random walk satisfies the definition of a Markov chain.

==== Gambling ====
{{See also|Gambler's ruin}}
Suppose that one starts with $10 and repeatedly wagers $1 on a fair coin toss, either indefinitely or until all of the money is lost. If <math>X_n</math> represents the number of dollars one has after ''n'' tosses, with <math>X_0 = 10</math>, then the sequence <math>\{X_n : n \in \mathbb{N} \}</math> is a Markov process. If one knows that one has $12 now, then it would be expected that with even odds, one will either have $11 or $13 after the next toss. This guess is not improved by the added knowledge that one started with $10, then went up to $11, down to $10, up to $11, and then to $12. The fact that the guess is not improved by the knowledge of earlier tosses showcases the [[Markov property]], the memoryless property of a stochastic process.<ref name=":3">{{Cite book|title=Stochastic differential equations : an introduction with applications|last=Øksendal, B. K. (Bernt Karsten), 1945-|date=2003|publisher=Springer|isbn=3540047581|edition=6th|location=Berlin|oclc=52203046}}</ref>
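The gambling process above is easy to simulate. The following is a minimal Python sketch (the function name and parameters are illustrative and not part of the cited sources); it draws one sample path of the gambler's fortune, which at each step depends only on the current fortune:

<syntaxhighlight lang="python">
import random

def simulate_gambling(start=10, p_win=0.5, max_tosses=1000):
    """Simulate the fair coin-toss gambling Markov chain X_n.

    The next fortune depends only on the current fortune, never on
    how that fortune was reached (the Markov property).
    """
    x = start
    history = [x]
    for _ in range(max_tosses):
        if x == 0:          # ruin: the chain is absorbed at 0
            break
        x += 1 if random.random() < p_win else -1
        history.append(x)
    return history

# One sample path of the gambler's fortune (first 20 values)
print(simulate_gambling()[:20])
</syntaxhighlight>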
=== A model of language ===
This example came from Markov himself.<ref>Markov, A. A. "An example of statistical analysis of the text of ''Eugene Onegin'' illustrating the association of trials into a chain." ''Bulletin de l'Académie Impériale des Sciences de St. Petersburg'', ser. 6 (1913): 153–162.</ref> Markov chose 20,000 letters from Pushkin's ''[[Eugene Onegin]]'', classified them into vowels and consonants, and counted the transition probabilities.

<math display="block">\begin{array}{lll} & \text{vowel} & \text{consonant} \\ \text{vowel} & .128 & .872 \\ \text{consonant} & .663 & .337 \end{array}</math>

The stationary distribution is 43.2 percent vowels and 56.8 percent consonants, which is close to the actual count in the book.<ref>''[https://math.dartmouth.edu/~prob/prob/prob.pdf Grinstead and Snell's Introduction to Probability]'', page 465</ref>

=== A simple weather model ===
The probabilities of weather conditions (modeled as either rainy or sunny), given the weather on the preceding day, can be represented by a [[Stochastic matrix|transition matrix]]:

: <math> P = \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} </math>

The matrix ''P'' represents the weather model in which a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day. The columns can be labelled "sunny" and "rainy", and the rows can be labelled in the same order.

[[File:Markov Chain weather model matrix as a graph.png|thumbnail|The above matrix as a graph.]]

(''P'')<sub>''i j''</sub> is the probability that, if a given day is of type ''i'', it will be followed by a day of type ''j''. Notice that the rows of ''P'' sum to 1: this is because ''P'' is a [[stochastic matrix]].

==== Predicting the weather ====
The weather on day 0 (today) is known to be sunny. This is represented by an initial state vector in which the "sunny" entry is 100%, and the "rainy" entry is 0%:

: <math> \mathbf{x}^{(0)} = \begin{bmatrix} 1 & 0 \end{bmatrix} </math>

The weather on day 1 (tomorrow) can be predicted by multiplying the state vector from day 0 by the transition matrix:

: <math> \mathbf{x}^{(1)} = \mathbf{x}^{(0)} P = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} 0.9 & 0.1 \end{bmatrix} </math>

Thus, there is a 90% chance that day 1 will also be sunny. The weather on day 2 (the day after tomorrow) can be predicted in the same way, from the state vector we computed for day 1:

: <math> \mathbf{x}^{(2)} =\mathbf{x}^{(1)} P = \mathbf{x}^{(0)} P^2 = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix}^2 = \begin{bmatrix} 0.86 & 0.14 \end{bmatrix} </math>

or

: <math> \mathbf{x}^{(2)} =\mathbf{x}^{(1)} P = \begin{bmatrix} 0.9 & 0.1 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} 0.86 & 0.14 \end{bmatrix} </math>

General rules for day ''n'' are:

: <math> \mathbf{x}^{(n)} = \mathbf{x}^{(n-1)} P </math>
: <math> \mathbf{x}^{(n)} = \mathbf{x}^{(0)} P^n </math>
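The prediction rule <math>\mathbf{x}^{(n)} = \mathbf{x}^{(0)} P^n</math> can be checked numerically. The sketch below uses [[NumPy]] (the variable names are illustrative only):

<syntaxhighlight lang="python">
import numpy as np

# Transition matrix from the weather example; rows and columns ordered (sunny, rainy)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

x0 = np.array([1.0, 0.0])                 # day 0 is known to be sunny

x1 = x0 @ P                               # day 1 distribution -> [0.9, 0.1]
x2 = x0 @ np.linalg.matrix_power(P, 2)    # day 2 distribution -> [0.86, 0.14]

print(x1, x2)
</syntaxhighlight>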
==== Steady state of the weather ====
In this example, predictions for the weather on more distant days change less and less on each subsequent day and tend towards a [https://www.math.drexel.edu/~jwd25/LM_SPRING_07/lectures/Markov.html steady state vector]. This vector represents the probabilities of sunny and rainy weather on all days, and is independent of the initial weather. The steady state vector is defined as:

:<math> \mathbf{q} = \lim_{n \to \infty} \mathbf{x}^{(n)} </math>

but converges to a strictly positive vector only if ''P'' is a regular transition matrix (that is, there is at least one ''P''<sup>''n''</sup> with all non-zero entries).

Since '''q''' is independent of initial conditions, it must be unchanged when transformed by ''P''.<ref name=":1">{{Cite book |last=Van Kampen |first=N.G. |url=https://archive.org/details/stochasticproces00kamp_024 |title=Stochastic Processes in Physics and Chemistry |publisher=North Holland Elsevier |year=2007 |isbn=978-0-444-52965-7 |location=NL |pages=[https://archive.org/details/stochasticproces00kamp_024/page/n79 73]–95 |url-access=limited}}</ref> This makes it an [[eigenvector]] (with [[eigenvalue]] 1), and means it can be derived from ''P''. In layman's terms, the steady-state vector is the vector that, when multiplied by ''P'', gives back exactly the same vector.<ref>{{cite web |url=https://bloomingtontutors.com/blog/going-steady-state-with-markov-processes |title=Going steady (state) with Markov processes |publisher=Bloomington Tutors}}</ref> For the weather example, we can use this to set up a matrix equation:

:<math> \begin{align} P & = \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} \\ \mathbf{q} P & = \mathbf{q} & & \text{(} \mathbf{q} \text{ is unchanged by } P \text{.)} \\ & = \mathbf{q}I \\ \mathbf{q} (P - I) & = \mathbf{0} \\ \mathbf{q} \left( \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right) & = \mathbf{0} \\ \mathbf{q} \begin{bmatrix} -0.1 & 0.1 \\ 0.5 & -0.5 \end{bmatrix} & = \mathbf{0} \\ \begin{bmatrix} q_1 & q_2 \end{bmatrix} \begin{bmatrix} -0.1 & 0.1 \\ 0.5 & -0.5 \end{bmatrix} & = \begin{bmatrix} 0 & 0 \end{bmatrix} \\ -0.1 q_1 + 0.5 q_2 &= 0 \end{align} </math>

and since '''q''' is a probability vector we know that

:<math> q_1 + q_2 = 1. </math>

Solving this pair of simultaneous equations gives the steady state vector:

:<math> \begin{bmatrix} q_1 & q_2 \end{bmatrix} = \begin{bmatrix} 0.833 & 0.167 \end{bmatrix} </math>

In conclusion, in the long term about 83.3% of days are sunny. Not all Markov processes have a steady state vector. In particular, the transition matrix must be '''regular'''. Otherwise, the state vectors will oscillate over time without converging.
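As a rough numerical check of the derivation above (a sketch only, assuming ''P'' is regular), the steady state can be computed with NumPy either by solving the same pair of equations or by repeated multiplication by ''P'':

<syntaxhighlight lang="python">
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Solve q(P - I) = 0 together with q1 + q2 = 1, as in the derivation above:
# transpose the system and stack the normalisation condition onto it.
A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
q, *_ = np.linalg.lstsq(A, b, rcond=None)
print(q)                     # approximately [0.833, 0.167]

# Alternatively, iterating x <- x P converges to the same vector because P is regular.
x = np.array([1.0, 0.0])
for _ in range(50):
    x = x @ P
print(x)
</syntaxhighlight>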
=== Stock market ===
[[File:Finance_Markov_chain_example_state_space.svg|right|thumb|400x400px|A directed graph representing the probabilities of the possible states a hypothetical stock market can exhibit. The matrix on the left shows how the probabilities corresponding to different states can be arranged in matrix form.]]
A [[state diagram]] for a simple example is shown in the figure on the right, using a directed graph to picture the [[State transition|state transitions]]. The states represent whether a hypothetical stock market is exhibiting a [[bull market]], [[bear market]], or stagnant market trend during a given week. According to the figure, a bull week is followed by another bull week 90% of the time, a bear week 7.5% of the time, and a stagnant week the other 2.5% of the time. Labeling the state space {1 = bull, 2 = bear, 3 = stagnant}, the transition matrix for this example is

: <math>P = \begin{bmatrix} 0.9 & 0.075 & 0.025 \\ 0.15 & 0.8 & 0.05 \\ 0.25 & 0.25 & 0.5 \end{bmatrix}.</math>

The distribution over states can be written as a [[stochastic row vector]] {{mvar|x}} with the relation {{math|1=''x''<sup>(''n'' + 1)</sup> = ''x''<sup>(''n'')</sup>''P''}}. So if at time {{mvar|n}} the system is in state {{math|''x''<sup>(''n'')</sup>}}, then three time periods later, at time {{math|''n'' + 3}}, the distribution is

: <math>\begin{align} x^{(n+3)} &= x^{(n+2)} P = \left( x^{(n+1)} P \right) P \\ &= x^{(n+1)} P^2 = \left( x^{(n)} P \right) P^2 \\ &= x^{(n)} P^3 \end{align}</math>

In particular, if at time {{mvar|n}} the system is in state 2 (bear), then at time {{math|''n'' + 3}} the distribution is

: <math>\begin{align} x^{(n+3)} &= \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 0.9 & 0.075 & 0.025 \\ 0.15 & 0.8 & 0.05 \\ 0.25 & 0.25 & 0.5 \end{bmatrix}^3 \\[5pt] &= \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 0.7745 & 0.17875 & 0.04675 \\ 0.3575 & 0.56825 & 0.07425 \\ 0.4675 & 0.37125 & 0.16125 \\ \end{bmatrix} \\[5pt] & = \begin{bmatrix} 0.3575 & 0.56825 & 0.07425 \end{bmatrix}. \end{align}</math>

Using the transition matrix it is possible to calculate, for example, the long-term fraction of weeks during which the market is stagnant, or the average number of weeks it will take to go from a stagnant to a bull market. The steady-state probabilities indicate that 62.5% of weeks will be in a bull market, 31.25% of weeks will be in a bear market and 6.25% of weeks will be stagnant, since:

: <math>\lim_{N\to \infty } \, P^N= \begin{bmatrix} 0.625 & 0.3125 & 0.0625 \\ 0.625 & 0.3125 & 0.0625 \\ 0.625 & 0.3125 & 0.0625 \end{bmatrix}</math>

A thorough development and many examples can be found in the on-line monograph Meyn & Tweedie 2005.<ref name="MCSS">S. P. Meyn and R. L. Tweedie, 2005. [http://probability.ca/MT/BOOK.pdf Markov Chains and Stochastic Stability] {{webarchive|url=https://web.archive.org/web/20130903184125/http://probability.ca/MT/BOOK.pdf|date=2013-09-03}}</ref>

A [[finite-state machine]] can be used as a representation of a Markov chain. Assuming a sequence of [[Independent and identically distributed random variables|independent and identically distributed]] input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state ''y'' at time ''n'', then the probability that it moves to state ''x'' at time ''n'' + 1 depends only on the current state.
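As an illustrative sketch (not part of the cited sources), the three-step bear-market distribution and the limit of ''P''<sup>''N''</sup> in the stock market example above can be reproduced numerically:

<syntaxhighlight lang="python">
import numpy as np

# Stock market transition matrix; states ordered (bull, bear, stagnant)
P = np.array([[0.9,  0.075, 0.025],
              [0.15, 0.8,   0.05 ],
              [0.25, 0.25,  0.5  ]])

bear = np.array([0.0, 1.0, 0.0])            # system currently in the bear state

# Distribution three weeks later: bear @ P^3 -> [0.3575, 0.56825, 0.07425]
print(bear @ np.linalg.matrix_power(P, 3))

# Raising P to a large power approximates the long-run behaviour:
# every row tends to the steady-state vector [0.625, 0.3125, 0.0625].
print(np.linalg.matrix_power(P, 1000))
</syntaxhighlight>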