{{short description|Random process independent of past history}}
[[File:Markovkate_01.svg|right|thumb|upright=1.2|A diagram representing a two-state Markov process. The numbers are the probability of changing from one state to another state.]]
{{Probability fundamentals}}
In probability theory and statistics, a '''Markov chain''' or '''Markov process''' is a [[stochastic process]] describing a [[sequence]] of possible events in which the [[probability]] of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs ''now''." A [[countably infinite]] sequence, in which the chain moves state at discrete time steps, gives a [[discrete-time Markov chain]] (DTMC). A [[continuous-time]] process is called a [[continuous-time Markov chain]] (CTMC). Markov processes are named in honor of the [[Russia]]n mathematician [[Andrey Markov]].

Markov chains have many applications as [[statistical model]]s of real-world processes.<ref name="MeynTweedie2009page3">{{cite book|url=https://books.google.com/books?id=Md7RnYEPkJwC|title=Markov Chains and Stochastic Stability|date=2 April 2009|publisher=Cambridge University Press|isbn=978-0-521-73182-9|page=3|author1=Sean Meyn|author2=Richard L. Tweedie}}</ref> They provide the basis for general stochastic simulation methods known as [[Markov chain Monte Carlo]], which are used for simulating sampling from complex [[probability distribution]]s, and have found application in areas including [[Bayesian statistics]], [[biology]], [[chemistry]], [[economics]], [[finance]], [[information theory]], [[physics]], [[signal processing]], and [[speech processing]].<ref name="MeynTweedie2009page3" /><ref name="RubinsteinKroese2011page225">{{cite book|url=https://books.google.com/books?id=yWcvT80gQK4C|title=Simulation and the Monte Carlo Method|date=20 September 2011|publisher=John Wiley & Sons|isbn=978-1-118-21052-9|page=225|author1=Reuven Y. Rubinstein|author2=Dirk P. Kroese}}</ref><ref name="GamermanLopes2006">{{cite book|url=https://books.google.com/books?id=yPvECi_L3bwC|title=Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Second Edition|date=10 May 2006|publisher=CRC Press|isbn=978-1-58488-587-0|author1=Dani Gamerman|author2=Hedibert F. Lopes}}</ref> The adjectives ''Markovian'' and ''Markov'' are used to describe something that is related to a Markov process.<ref name="OxfordMarkovian">{{cite OED|Markovian}}</ref>
{{Toclimit|3}}
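The discrete-time case described above can be sketched in a few lines of code: at each step, the next state is drawn from a distribution that depends only on the current state. The two transition probabilities below are illustrative placeholders, not the values from the article's diagram.

```python
import random

def simulate_markov_chain(transition, start, steps, seed=0):
    """Simulate a finite-state discrete-time Markov chain.

    transition: dict mapping each state to a list of (next_state, probability)
    pairs; the probabilities for each state must sum to 1. Only the current
    state is consulted at each step -- the Markov property.
    """
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(steps):
        r = rng.random()
        cumulative = 0.0
        for next_state, p in transition[state]:
            cumulative += p
            if r < cumulative:
                state = next_state
                break
        path.append(state)
    return path

# Hypothetical two-state chain with states "A" and "E"; the probabilities
# are made up for illustration.
two_state = {
    "A": [("A", 0.6), ("E", 0.4)],
    "E": [("A", 0.7), ("E", 0.3)],
}
path = simulate_markov_chain(two_state, "A", 10)
```

Note that `simulate_markov_chain` never inspects earlier entries of `path`: the history is recorded only for output, which is exactly the "what happens next depends only on the state of affairs now" reading of the Markov property.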