=== Definition ===

A Markov process is a [[stochastic process]] that satisfies the [[Markov property]] (sometimes characterized as "[[memorylessness]]"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history.<ref name=":3">{{Cite book|title=Stochastic differential equations: an introduction with applications|author=Øksendal, B. K. (Bernt Karsten)|date=2003|publisher=Springer|isbn=3540047581|edition=6th|location=Berlin|oclc=52203046}}</ref> In other words, [[conditional probability|conditional]] on the present state of the system, its future and past states are [[independence (probability theory)|independent]].

A Markov chain is a type of Markov process that has either a discrete [[state space]] or a discrete index set (often representing time), but the precise definition of a Markov chain varies.<ref name="Asmussen2003page73">{{cite book|url=https://books.google.com/books?id=BeYaTxesKy0C|title=Applied Probability and Queues|date=15 May 2003|publisher=Springer Science & Business Media|isbn=978-0-387-00211-8|page=7|author=Søren Asmussen}}</ref> For example, it is common to define a Markov chain as a Markov process in either [[continuous or discrete variable|discrete or continuous time]] with a countable state space (thus regardless of the nature of time),<ref name="Parzen1999page1882">{{cite book|url=https://books.google.com/books?id=0mB2CQAAQBAJ|title=Stochastic Processes|date=17 June 2015|publisher=Courier Dover Publications|isbn=978-0-486-79688-8|page=188|author=Emanuel Parzen}}</ref><ref name="KarlinTaylor2012page292">{{cite book|url=https://books.google.com/books?id=dSDxjX9nmmMC|title=A First Course in Stochastic Processes|date=2 December 2012|publisher=Academic Press|isbn=978-0-08-057041-9|pages=29 and 30|author1=Samuel Karlin|author2=Howard E. Taylor}}</ref><ref name="Lamperti1977chap62">{{cite book|url=https://books.google.com/books?id=Pd4cvgAACAAJ|title=Stochastic processes: a survey of the mathematical theory|publisher=Springer-Verlag|year=1977|isbn=978-3-540-90275-1|pages=106–121|author=John Lamperti}}</ref><ref name="Ross1996page174and2312">{{cite book|url=https://books.google.com/books?id=ImUPAQAAMAAJ|title=Stochastic processes|publisher=Wiley|year=1996|isbn=978-0-471-12062-9|pages=174 and 231|author=Sheldon M. Ross}}</ref> but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).<ref name="Asmussen2003page73" />
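The verbal definition above can be stated formally. For the discrete-time, countable-state-space case (one standard formulation; the notation <math>X_n</math> for the state at step <math>n</math> is introduced here for illustration), the Markov property reads:

<math display="block">\Pr(X_{n+1} = x \mid X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \Pr(X_{n+1} = x \mid X_n = x_n),</math>

that is, conditioning on the entire history of the process gives the same prediction for the next state as conditioning on the present state alone.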