==Examples==
Assume that an urn contains two red balls and one green ball. One ball was drawn yesterday, one ball was drawn today, and the final ball will be drawn tomorrow. All of the draws are "without replacement".

Suppose you know that today's ball was red, but you have no information about yesterday's ball. The chance that tomorrow's ball will be red is 1/2. That is because the only two remaining outcomes for this random experiment are:

{| class="wikitable"
|-
! align="center" | Day !! align="center" | Outcome 1 !! align="center" | Outcome 2
|-
| align="center" | Yesterday || align="center" | Red || align="center" | Green
|-
| align="center" | Today || align="center" | Red || align="center" | Red
|-
| align="center" | Tomorrow || align="center" | Green || align="center" | Red
|}

On the other hand, if you know that both today's and yesterday's balls were red, then you are guaranteed to get a green ball tomorrow. This discrepancy shows that the probability distribution for tomorrow's color depends not only on the present value, but is also affected by information about the past. This stochastic process of observed colors does not have the Markov property. Using the same experiment above, if sampling "without replacement" is changed to sampling "with replacement", the process of observed colors will have the Markov property.<ref>{{cite web |url=https://math.stackexchange.com/q/89394 |title=Example of a stochastic process which does not have the Markov property |publisher=[[Stack Exchange]] |access-date=2020-07-07}}</ref>

An application of the Markov property in a generalized form is in [[Markov chain Monte Carlo]] computations in the context of [[Bayesian statistics]].
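The two conditional probabilities above (1/2 given only today's color, 0 given both past colors) can be checked by exhaustively enumerating the equally likely orderings of the three balls. The following is a minimal Python sketch of that enumeration; the helper `cond_prob` is introduced here for illustration and is not part of any standard library.

```python
from itertools import permutations
from fractions import Fraction

# All equally likely orderings of two red balls and one green ball,
# drawn without replacement over three days:
# index 0 = yesterday, 1 = today, 2 = tomorrow.
orders = list(permutations(["red", "red", "green"]))

def cond_prob(event, given):
    """Exact conditional probability over the equally likely orderings."""
    matching = [o for o in orders if given(o)]
    return Fraction(sum(event(o) for o in matching), len(matching))

# Knowing only that today's ball was red:
p_today_only = cond_prob(lambda o: o[2] == "red",
                         lambda o: o[1] == "red")

# Knowing that both yesterday's and today's balls were red:
p_both_days = cond_prob(lambda o: o[2] == "red",
                        lambda o: o[0] == "red" and o[1] == "red")

print(p_today_only)  # 1/2
print(p_both_days)   # 0
```

Using `Fraction` keeps the probabilities exact rather than floating-point, so the discrepancy between the two conditionals is visible directly: conditioning on extra past information changes the distribution, which is precisely what the Markov property forbids.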