===Markov processes and chains===
{{Main|Markov chain}}

Markov processes are stochastic processes, traditionally in [[Discrete time and continuous time|discrete or continuous time]], that have the Markov property, which means that the next value of the process depends on the current value but is conditionally independent of all previous values of the process. In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process.<ref name="Serfozo2009page2">{{cite book|author=Richard Serfozo|title=Basics of Applied Stochastic Processes|url=https://books.google.com/books?id=JBBRiuxTN0QC|year=2009|publisher=Springer Science & Business Media|isbn=978-3-540-89332-5|page=2}}</ref><ref name="Rozanov2012page58">{{cite book|author=Y.A. Rozanov|title=Markov Random Fields|url=https://books.google.com/books?id=wGUECAAAQBAJ|year=2012|publisher=Springer Science & Business Media|isbn=978-1-4613-8190-7|page=58}}</ref> The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes<ref name="Ross1996page235and358">{{cite book|author=Sheldon M. Ross|title=Stochastic processes|url=https://books.google.com/books?id=ImUPAQAAMAAJ|year=1996|publisher=Wiley|isbn=978-0-471-12062-9|pages=235, 358}}</ref> in continuous time, while [[random walk]]s on the integers and the [[gambler's ruin]] problem are examples of Markov processes in discrete time.<ref name="Florescu2014page373">{{cite book|author=Ionut Florescu|title=Probability and Stochastic Processes|url=https://books.google.com/books?id=Z5xEBQAAQBAJ&pg=PR22|year=2014|publisher=John Wiley & Sons|isbn=978-1-118-59320-2|pages=373, 374}}</ref><ref name="KarlinTaylor2012page49">{{cite book|author1=Samuel Karlin|author2=Howard E. Taylor|title=A First Course in Stochastic Processes|url=https://books.google.com/books?id=dSDxjX9nmmMC|year=2012|publisher=Academic Press|isbn=978-0-08-057041-9|page=49}}</ref>
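
For a process in discrete time with a countable state space, one common formulation of the Markov property is: for every <math>n</math> and all states <math>x_0, x_1, \ldots, x_{n+1}</math>,

:<math>\Pr(X_{n+1}=x_{n+1} \mid X_0=x_0, X_1=x_1, \ldots, X_n=x_n) = \Pr(X_{n+1}=x_{n+1} \mid X_n=x_n),</math>

whenever the conditioning event has positive probability.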

A Markov chain is a type of Markov process that has either a discrete [[state space]] or a discrete index set (often representing time), but the precise definition of a Markov chain varies.<ref name="Asmussen2003page7">{{cite book|url=https://books.google.com/books?id=BeYaTxesKy0C|title=Applied Probability and Queues|year=2003|publisher=Springer Science & Business Media|isbn=978-0-387-00211-8|page=7|author=Søren Asmussen}}</ref> For example, it is common to define a Markov chain as a Markov process in either [[Continuous and discrete variables|discrete or continuous time]] with a countable state space (thus regardless of the nature of time),<ref name="Parzen1999page188">{{cite book|url=https://books.google.com/books?id=0mB2CQAAQBAJ|title=Stochastic Processes|year=2015|publisher=Courier Dover Publications|isbn=978-0-486-79688-8|page=188|author=Emanuel Parzen}}</ref><ref name="KarlinTaylor2012page29">{{cite book|url=https://books.google.com/books?id=dSDxjX9nmmMC|title=A First Course in Stochastic Processes|year=2012|publisher=Academic Press|isbn=978-0-08-057041-9|pages=29, 30|author1=Samuel Karlin|author2=Howard E. Taylor}}</ref><ref name="Lamperti1977chap6">{{cite book|url=https://books.google.com/books?id=Pd4cvgAACAAJ|title=Stochastic processes: a survey of the mathematical theory|publisher=Springer-Verlag|year=1977|isbn=978-3-540-90275-1|pages=106–121|author=John Lamperti}}</ref><ref name="Ross1996page174and231">{{cite book|url=https://books.google.com/books?id=ImUPAQAAMAAJ|title=Stochastic processes|publisher=Wiley|year=1996|isbn=978-0-471-12062-9|pages=174, 231|author=Sheldon M. Ross}}</ref> but it has also been common to define a Markov chain as having discrete time with either a countable or a continuous state space (thus regardless of the nature of the state space).<ref name="Asmussen2003page7" /> It has been argued that the discrete-time definition now tends to be used, even though researchers such as [[Joseph Doob]] and [[Kai Lai Chung]] used the countable-state-space definition.<ref name="MeynTweedie2009">{{cite book|author1=Sean Meyn|author2=Richard L. Tweedie|title=Markov Chains and Stochastic Stability|url=https://books.google.com/books?id=Md7RnYEPkJwC|year=2009|publisher=Cambridge University Press|isbn=978-0-521-73182-9|page=19}}</ref> A minimal simulation of a discrete-time chain is sketched at the end of this section.

Markov processes form an important class of stochastic processes and have applications in many areas.<ref name="LatoucheRamaswami1999"/><ref name="KarlinTaylor2012page47">{{cite book|author1=Samuel Karlin|author2=Howard E. Taylor|title=A First Course in Stochastic Processes|url=https://books.google.com/books?id=dSDxjX9nmmMC|year=2012|publisher=Academic Press|isbn=978-0-08-057041-9|page=47}}</ref> For example, they are the basis for a general stochastic simulation method known as [[Markov chain Monte Carlo]], which is used for simulating random objects with specific probability distributions and has found application in [[Bayesian statistics]].<ref name="RubinsteinKroese2011page225">{{cite book|author1=Reuven Y. Rubinstein|author2=Dirk P. Kroese|title=Simulation and the Monte Carlo Method|url=https://books.google.com/books?id=yWcvT80gQK4C|year=2011|publisher=John Wiley & Sons|isbn=978-1-118-21052-9|page=225}}</ref><ref name="GamermanLopes2006">{{cite book|author1=Dani Gamerman|author2=Hedibert F. Lopes|title=Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Second Edition|url=https://books.google.com/books?id=yPvECi_L3bwC|year=2006|publisher=CRC Press|isbn=978-1-58488-587-0}}</ref> A short sketch of one such method is also given below.

The concept of the Markov property was originally formulated for stochastic processes in continuous and discrete time, but the property has been adapted for other index sets, such as <math>n</math>-dimensional Euclidean space, which results in collections of random variables known as Markov random fields.<ref name="Rozanov2012page61">{{cite book|author=Y.A. Rozanov|title=Markov Random Fields|url=https://books.google.com/books?id=wGUECAAAQBAJ|year=2012|publisher=Springer Science & Business Media|isbn=978-1-4613-8190-7|page=61}}</ref><ref>{{cite book|author1=Donald L. Snyder|author2=Michael I. Miller|title=Random Point Processes in Time and Space|url=https://books.google.com/books?id=c_3UBwAAQBAJ|year=2012|publisher=Springer Science & Business Media|isbn=978-1-4612-3166-0|page=27}}</ref><ref name="Bremaud2013page253">{{cite book|author=Pierre Bremaud|title=Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues|url=https://books.google.com/books?id=jrPVBwAAQBAJ|year=2013|publisher=Springer Science & Business Media|isbn=978-1-4757-3124-8|page=253}}</ref>
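
The following is a minimal sketch of simulating a discrete-time Markov chain on a countable state space; the two "weather" states and their transition probabilities are illustrative assumptions, not drawn from the cited sources.

<syntaxhighlight lang="python">
import random

# Hypothetical two-state chain; the states and probabilities are
# illustrative assumptions chosen for this sketch.
TRANSITION = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state: str) -> str:
    """Sample the next state. By the Markov property it depends
    only on the current state, not on the earlier history."""
    r, cumulative = random.random(), 0.0
    for nxt, p in TRANSITION[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

def simulate(start: str, n_steps: int) -> list[str]:
    """Generate a sample path of length n_steps + 1."""
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1]))
    return path

print(simulate("sunny", 10))
</syntaxhighlight>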
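A similar sketch illustrates Markov chain Monte Carlo, here a random-walk Metropolis sampler with a symmetric proposal; the target density and step size are again illustrative assumptions. The successive draws form a Markov chain whose stationary distribution is the target.

<syntaxhighlight lang="python">
import math
import random

def target(x: float) -> float:
    """Unnormalised target density (proportional to a standard
    normal); an illustrative assumption for this sketch."""
    return math.exp(-0.5 * x * x)

def metropolis(n_samples: int, step_size: float = 1.0) -> list[float]:
    """Random-walk Metropolis: each draw depends only on the current
    state, so the samples form a Markov chain whose stationary
    distribution is the target."""
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.uniform(-step_size, step_size)
        # Accept with probability min(1, target(proposal) / target(x)).
        if random.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)  # rejected proposals repeat the current state
    return samples

draws = metropolis(10_000)
print(sum(draws) / len(draws))  # sample mean; should be near 0
</syntaxhighlight>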