Markov chain
==Principles==
[[File:AAMarkov.jpg|right|thumb|286x286px|Russian mathematician [[Andrey Markov]]]]

===Definition===
A Markov process is a [[stochastic process]] that satisfies the [[Markov property]] (sometimes characterized as "[[memorylessness]]"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history.<ref name=":3">{{Cite book|title=Stochastic differential equations: an introduction with applications|author=Øksendal, B. K. (Bernt Karsten)|date=2003|publisher=Springer|isbn=3540047581|edition=6th|location=Berlin|oclc=52203046}}</ref> In other words, [[conditional probability|conditional]] on the present state of the system, its future and past states are [[independence (probability theory)|independent]].

A Markov chain is a type of Markov process that has either a discrete [[state space]] or a discrete index set (often representing time), but the precise definition of a Markov chain varies.<ref name="Asmussen2003page73">{{cite book|url=https://books.google.com/books?id=BeYaTxesKy0C|title=Applied Probability and Queues|date=15 May 2003|publisher=Springer Science & Business Media|isbn=978-0-387-00211-8|page=7|author=Søren Asmussen}}</ref> For example, it is common to define a Markov chain as a Markov process in either [[continuous or discrete variable|discrete or continuous time]] with a countable state space (thus regardless of the nature of time),<ref name="Parzen1999page1882">{{cite book|url=https://books.google.com/books?id=0mB2CQAAQBAJ|title=Stochastic Processes|date=17 June 2015|publisher=Courier Dover Publications|isbn=978-0-486-79688-8|page=188|author=Emanuel Parzen}}</ref><ref name="KarlinTaylor2012page292">{{cite book|url=https://books.google.com/books?id=dSDxjX9nmmMC|title=A First Course in Stochastic Processes|date=2 December 2012|publisher=Academic Press|isbn=978-0-08-057041-9|pages=29 and 30|author1=Samuel Karlin|author2=Howard E. Taylor}}</ref><ref name="Lamperti1977chap62">{{cite book|url=https://books.google.com/books?id=Pd4cvgAACAAJ|title=Stochastic processes: a survey of the mathematical theory|publisher=Springer-Verlag|year=1977|isbn=978-3-540-90275-1|pages=106–121|author=John Lamperti}}</ref><ref name="Ross1996page174and2312">{{cite book|url=https://books.google.com/books?id=ImUPAQAAMAAJ|title=Stochastic processes|publisher=Wiley|year=1996|isbn=978-0-471-12062-9|pages=174 and 231|author=Sheldon M. Ross}}</ref> but it is also common to define a Markov chain as having discrete time in either a countable or continuous state space (thus regardless of the state space).<ref name="Asmussen2003page73" />

===Types of Markov chains===
The system's [[state space]] and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time vs. continuous time:

{| class="wikitable" style="width: 60%;"
! scope="col" |
! scope="col" |Countable state space
! scope="col" |Continuous or general state space
|-
! scope="row" |Discrete-time
|(discrete-time) Markov chain on a countable or finite state space
|[[Markov chains on a measurable state space|Markov chain on a measurable state space]] (for example, [[Harris chain]])
|-
! scope="row" style="width: 10%;" |Continuous-time
| style="width: 25%;" |Continuous-time Markov process or Markov jump process
| style="width: 25%;" |Any [[continuous stochastic process]] with the Markov property (for example, the [[Wiener process]])
|}

Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a '''discrete-time Markov chain (DTMC)''',<ref name="Everitt, B.S. 2002">Everitt, B.S. (2002) ''The Cambridge Dictionary of Statistics''. CUP. {{ISBN|0-521-81099-X}}</ref> but a few authors use the term "Markov process" to refer to a '''continuous-time Markov chain (CTMC)''' without explicit mention.<ref>Parzen, E. (1962) ''Stochastic Processes'', Holden-Day. {{ISBN|0-8162-6664-6}} (Table 6.1)</ref><ref>Dodge, Y. (2003) ''The Oxford Dictionary of Statistical Terms'', OUP. {{ISBN|0-19-920613-9}} (entry for "Markov chain")</ref> In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see [[Markov model]]). Moreover, the time index need not be real-valued; as with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is so general that it has no designated term.

While the time parameter is usually discrete, the [[state space]] of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.<ref>Meyn, Sean P. and Richard L. Tweedie. (2009) ''Markov Chains and Stochastic Stability''. Cambridge University Press. (Preface, p. iii)</ref> However, many applications of Markov chains employ finite or [[countable set|countably infinite]] state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see [[#Variations|Variations]]). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.

===Transitions===
The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities.
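As a concrete illustration (not part of the article text): transition probabilities for a finite chain are naturally arranged as a matrix in which row ''i'' lists the probabilities of moving from state ''i'' to every state, so each row sums to 1. The two-state "weather" chain below is hypothetical.

```python
# Hypothetical two-state weather chain: states "sunny" and "rainy".
# Row i holds the transition probabilities out of state i, so each
# row must sum to 1 (the matrix is row-stochastic).
states = ["sunny", "rainy"]
P = [
    [0.9, 0.1],  # sunny -> sunny 0.9, sunny -> rainy 0.1
    [0.5, 0.5],  # rainy -> sunny 0.5, rainy -> rainy 0.5
]

# Sanity check of the row-stochastic property.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12
```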
The process is characterized by a state space, a [[stochastic matrix|transition matrix]] describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.

A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the [[integers]] or [[natural numbers]], and the random process is a mapping of these to states. The Markov property states that the [[conditional probability distribution]] for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.

Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.
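The contrast between unpredictable individual trajectories and predictable statistical behavior can be sketched in a few lines of Python. The two-state chain below is hypothetical, and the function names are illustrative only: one routine samples a random path step by step, while another deterministically pushes an initial distribution through the transition matrix ''n'' times (i.e., computes the row vector times the ''n''-th matrix power).

```python
import random

# Hypothetical two-state chain used for illustration.
P = [
    [0.9, 0.1],  # transition probabilities out of state 0
    [0.5, 0.5],  # transition probabilities out of state 1
]

def step(state, rng):
    """Draw the next state using the current state's row of P."""
    return rng.choices(range(len(P)), weights=P[state])[0]

def simulate(start, n_steps, seed=0):
    """One random trajectory: individual paths cannot be predicted."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path

def distribution(initial, n_steps):
    """Statistical prediction: propagate the distribution through P.

    After n steps the state distribution is the row vector
    initial * P^n; no randomness is involved in this computation.
    """
    dist = list(initial)
    for _ in range(n_steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

print(simulate(0, 10))               # one sample path
print(distribution([1.0, 0.0], 50))  # approaches the stationary distribution
```

For this particular matrix the 50-step distribution is already very close to the chain's stationary distribution, illustrating that long-run statistical properties are computable even though any single path remains random.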