==Examples==
{{Main|Examples of Markov chains}}
*[[Mark V. Shaney]] is a third-order Markov chain program and a [[Markov text]] generator. It ingests sample text (the [[Tao Te Ching]], or the posts of a [[Usenet]] group) and creates a list of every sequence of three successive words (triplet) that occurs in the text. It then chooses two words at random and looks for a word that follows those two in one of the triplets in its list. If there is more than one, it picks one at random (identical triplets count separately, so a sequence that occurs twice is twice as likely to be picked as one that occurs only once). It then adds that word to the generated text. Next, in the same way, it picks a triplet that starts with the second and third words of the generated text, which gives a fourth word. It adds the fourth word, then repeats with the third and fourth words, and so on; a sketch of this scheme appears after this list.<ref name="curious">{{cite web |last1=Subramanian |first1=Devika |title=The curious case of Mark V. Shaney |url=https://www.cs.rice.edu/~devika/comp140/Shaney.pdf |work=Comp 140 course notes, Fall 2008 |publisher=William Marsh Rice University |department=Computer Science |date=Fall 2008 |access-date=30 November 2024}}</ref>
*[[Random walk]]s based on integers and the [[gambler's ruin]] problem are examples of Markov processes.<ref name="Florescu2014page3732">{{cite book|url=https://books.google.com/books?id=Z5xEBQAAQBAJ&pg=PR22|title=Probability and Stochastic Processes|author=Ionut Florescu|date=7 November 2014|publisher=John Wiley & Sons|isbn=978-1-118-59320-2|pages=373 and 374}}</ref><ref name="KarlinTaylor2012page492">{{cite book|url=https://books.google.com/books?id=dSDxjX9nmmMC|title=A First Course in Stochastic Processes|author1=Samuel Karlin|author2=Howard E. Taylor|date=2 December 2012|publisher=Academic Press|isbn=978-0-08-057041-9|page=49}}</ref> Some variations of these processes were studied hundreds of years earlier in the context of independent variables.<ref name="Weiss2006page12">{{cite book|title=Encyclopedia of Statistical Sciences|last1=Weiss|first1=George H.|year=2006|isbn=978-0471667193|page=1|chapter=Random Walks|doi=10.1002/0471667196.ess2180.pub2}}</ref><ref name="Shlesinger1985page82">{{cite book|url=https://books.google.com/books?id=p6fvAAAAMAAJ|title=The Wonderful world of stochastics: a tribute to Elliott W. Montroll|author=Michael F. Shlesinger|publisher=North-Holland|year=1985|isbn=978-0-444-86937-1|pages=8–10}}</ref> Two important examples of Markov processes are the [[Wiener process]], also known as the [[Brownian motion]] process, and the [[Poisson process]],<ref name="Ross1996page235and3583" /> which are considered the most important and central stochastic processes in the theory of stochastic processes.<ref name="Parzen19992">{{cite book|url=https://books.google.com/books?id=0mB2CQAAQBAJ|title=Stochastic Processes|author=Emanuel Parzen|date=17 June 2015|publisher=Courier Dover Publications|isbn=978-0-486-79688-8|pages=7, 8}}</ref><ref name="doob1953stochasticP46to472">{{cite book|url=https://books.google.com/books?id=7Bu8jgEACAAJ|title=Stochastic processes|author=Joseph L. Doob|publisher=Wiley|year=1990|pages=46, 47}}</ref><ref>{{cite book|url=https://books.google.com/books?id=c_3UBwAAQBAJ|title=Random Point Processes in Time and Space|author1=Donald L. Snyder|author2=Michael I. Miller|date=6 December 2012|publisher=Springer Science & Business Media|isbn=978-1-4612-3166-0|page=32}}</ref> These two processes are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time.<ref name="Florescu2014page3732" /><ref name="KarlinTaylor2012page492" />
*A famous Markov chain is the so-called "drunkard's walk", a random walk on the [[number line]] where, at each step, the position changes by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and from 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6; a simulation sketch appears after this list.
*A series of independent states (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next state depends on the current one.
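The triplet scheme used by Mark V. Shaney can be illustrated with a short program. The following Python sketch is only an illustration of the idea described above, not the original program; the corpus file name and the output length are placeholder choices.

<syntaxhighlight lang="python">
import random
from collections import defaultdict

def build_triplets(words):
    """Map each pair of successive words to all words that follow
    that pair anywhere in the text. Duplicates are kept, so a triplet
    occurring twice is twice as likely to be chosen later."""
    follows = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        follows[(a, b)].append(c)
    return follows

def generate(words, length=50):
    follows = build_triplets(words)
    # Start from a randomly chosen adjacent pair in the source text.
    i = random.randrange(len(words) - 2)
    a, b = words[i], words[i + 1]
    output = [a, b]
    for _ in range(length):
        candidates = follows.get((a, b))
        if not candidates:  # the pair occurs only at the very end
            break
        c = random.choice(candidates)
        output.append(c)
        a, b = b, c  # slide the two-word window forward
    return " ".join(output)

# "corpus.txt" is a placeholder for any sample text.
sample = open("corpus.txt").read().split()
print(generate(sample))
</syntaxhighlight>

Because duplicates are kept in each successor list, <code>random.choice</code> automatically weights words by their frequency in the source text, matching the behavior described above.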
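The drunkard's walk can likewise be simulated directly. This minimal sketch, our own illustration, makes the Markov property visible in code: the next position is computed from the current position alone.

<syntaxhighlight lang="python">
import random

def drunkards_walk(steps, start=0):
    """Random walk on the integers: from any position, step to the
    next or previous integer with probability 0.5 each."""
    position = start
    path = [position]
    for _ in range(steps):
        # The transition depends only on the current position,
        # never on how that position was reached.
        position += random.choice((1, -1))
        path.append(position)
    return path

print(drunkards_walk(20))
</syntaxhighlight>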
===A non-Markov example===
Suppose that there is a coin purse containing five coins worth 25¢, five coins worth 10¢, and five coins worth 5¢, and that, one by one, coins are randomly drawn from the purse and set on a table. If <math>X_n</math> represents the total value of the coins on the table after {{mvar|n}} draws, with <math>X_0 = 0</math>, then the sequence <math>\{X_n : n\in\mathbb{N}\}</math> is ''not'' a Markov process.

To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn, so that <math>X_6 = \$0.50</math>. If we know not just <math>X_6</math> but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that <math>X_7 \geq \$0.60</math> with probability 1. But if we do not know the earlier values, then based only on the value <math>X_6</math> we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about <math>X_7</math> are affected by our knowledge of values prior to <math>X_6</math>.

However, it is possible to model this scenario as a Markov process. Instead of defining <math>X_n</math> to represent the ''total value'' of the coins on the table, we could define <math>X_n</math> to represent the ''count'' of each coin type on the table. For instance, <math>X_6 = (1,0,5)</math> could represent the state in which one quarter, zero dimes, and five nickels lie on the table after six one-by-one draws. This new model can be represented by <math>6\times 6\times 6=216</math> possible states, where each state records the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within six draws.) Suppose that the first draw results in state <math>X_1 = (0,1,0)</math>. The probability of reaching a given <math>X_2</math> now depends on <math>X_1</math>; for example, the state <math>X_2 = (1,0,1)</math> is not possible. Because the counts on the table determine exactly which coins remain in the purse, each successive draw depends only on the current state. In this way, the likelihood of the state <math>X_n = (i,j,k)</math> depends exclusively on the outcome of the state <math>X_{n-1} = (\ell,m,p)</math>; a sketch of the transition probabilities appears below.
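The count-state model can be sketched in a few lines of Python. This is our own illustration, assuming the state is written as a tuple (quarters, dimes, nickels) on the table; the function name and representation are illustrative choices, not standard notation.

<syntaxhighlight lang="python">
from fractions import Fraction

def transitions(state):
    """Given the coins on the table as a tuple
    (quarters, dimes, nickels), return a map from each possible next
    state to its probability. The purse started with five coins of
    each type, so its remaining contents are determined by the state."""
    remaining = [5 - count for count in state]
    total = sum(remaining)
    probs = {}
    for i, r in enumerate(remaining):
        if r > 0:
            nxt = list(state)
            nxt[i] += 1  # one more coin of type i lands on the table
            probs[tuple(nxt)] = Fraction(r, total)
    return probs

# The example above: one quarter and all five nickels drawn so far.
# The next coin cannot be a nickel:
print(transitions((1, 0, 5)))
# {(2, 0, 5): Fraction(4, 9), (1, 1, 5): Fraction(5, 9)}
</syntaxhighlight>

Because the purse's remaining contents are fully determined by the counts on the table, these transition probabilities depend only on the current state, which is exactly the Markov property that the total-value formulation lacks.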