==Applications== Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends,<ref name="SLS">{{cite journal |first1=E.G. |last1=de Souza e Silva |first2=L.F.L. |last2=Legey |first3=E.A. |last3=de Souza e Silva |url=https://www.sciencedirect.com/science/article/pii/S0140988310001271 |title=Forecasting oil price trends using wavelets and hidden Markov models |journal=Energy Economics |volume=32 |year=2010|issue=6 |page=1507 |doi=10.1016/j.eneco.2010.08.006 |bibcode=2010EneEc..32.1507D }}</ref> wind power,<ref name="CGLT">{{cite journal |first1=A |last1=Carpinone |first2=M |last2=Giorgio |first3=R. |last3=Langella |first4=A. |last4=Testa |title=Markov chain modeling for very-short-term wind power forecasting |journal=Electric Power Systems Research |volume=122 |pages=152–158 |year=2015|doi=10.1016/j.epsr.2014.12.025 |doi-access=free |bibcode=2015EPSR..122..152C }}</ref> [[stochastic terrorism]],<ref name="Woo2002">{{Cite journal |last=Woo |first=Gordon |date=2002-04-01 |title=Quantitative Terrorism Risk Assessment |url=https://www.emerald.com/insight/content/doi/10.1108/eb022949/full/html |journal=The Journal of Risk Finance |volume=4 |issue=1 |pages=7–14 |doi=10.1108/eb022949 |access-date=5 October 2023 }}</ref><ref name="Woo2003">{{cite journal |last1=Woo |first1=Gordon |date=December 2003 |title=Insuring Against Al-Quaeda |url=https://conference.nber.org/confer/2003/insurance03/woo.pdf |journal=Cambridge: National Bureau of Economic Research |access-date=26 March 2024 |ref=Woo2003}}</ref> and [[solar irradiance]].<ref name="MMW">{{cite journal |first1=J. |last1=Munkhammar |first2=D.W. |last2=van der Meer |first3=J. 
|last3=Widén |title=Probabilistic forecasting of high-resolution clear-sky index time-series using a Markov-chain mixture distribution model |journal= Solar Energy |volume=184 |pages=688–695 |year=2019|doi=10.1016/j.solener.2019.04.014 |bibcode=2019SoEn..184..688M }}</ref> Markov chain forecasting models use a variety of settings, from discretizing the time series,<ref name="CGLT" /> to hidden Markov models combined with wavelets,<ref name="SLS" /> and the Markov chain mixture distribution model (MCM).<ref name="MMW" /> ===Physics=== Markovian systems appear extensively in [[thermodynamics]] and [[statistical mechanics]], whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description.<ref>{{cite web|title=Thermodynamics and Statistical Mechanics |first=Richard |last=Fitzpatrick |url=https://farside.ph.utexas.edu/teaching/sm1/Thermal.pdf |access-date=2017-06-02 }}</ref><ref name="auto1">{{Cite journal|last1=van Ravenzwaaij|first1=Don|last2=Cassey|first2=Pete|last3=Brown|first3=Scott D.|date=2016-03-11|title=A simple introduction to Markov Chain Monte–Carlo sampling |journal=Psychonomic Bulletin & Review |volume=25 |issue=1 |pages=143–154 |doi=10.3758/s13423-016-1015-8 |pmid=26968853 |pmc=5862921 }}</ref> For example, a thermodynamic state is governed by a probability distribution that is difficult or expensive to obtain directly.
Therefore, the Markov chain Monte Carlo method can be used to draw samples randomly from a black box to approximate the probability distribution of attributes over a range of objects.<ref name="auto1"/> Markov chains are used in [[lattice QCD]] simulations.<ref>{{cite book |last1= Gattringer |first1= Christof |last2= Lang |first2= Christian B |title= Quantum Chromodynamics on the Lattice |volume= 788 |doi= 10.1007/978-3-642-01850-3 |url= https://www.springer.com/gb/book/9783642018497 |publisher= Springer-Verlag Berlin Heidelberg |year= 2010 |series= Lecture Notes in Physics |isbn= 978-3-642-01849-7}}</ref> ===Chemistry=== {{Image frame|content=<chem>{E} + \underset{Substrate\atop binding}{S <=> E}\overset{Catalytic\atop step}{S -> E} + P</chem> |align=left|width=200|caption=[[Michaelis-Menten kinetics]]. The enzyme (E) binds a substrate (S) and produces a product (P). Each reaction is a state transition in a Markov chain.}}A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous-time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain.<ref>{{Citation|last1=Anderson|first1=David F.|title=Continuous Time Markov Chain Models for Chemical Reaction Networks|date=2011|work=Design and Analysis of Biomolecular Circuits|pages=3–42|publisher=Springer New York|isbn=9781441967657|last2=Kurtz|first2=Thomas G.|doi=10.1007/978-1-4419-6766-4_1}}</ref> Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number ''n'' of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded.
The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at any given time is ''n'' times the probability that a given molecule is in that state. The classical model of enzyme activity, [[Michaelis–Menten kinetics]], can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis–Menten kinetics is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.<ref>{{Cite journal|last1=Du|first1=Chao|last2=Kou|first2=S. C.|date=September 2012|title=Correlation analysis of enzymatic reaction of a single protein molecule|journal=The Annals of Applied Statistics|volume=6|issue=3|pages=950–976|doi=10.1214/12-aoas541|pmid=23408514|pmc=3568780|bibcode=2012arXiv1209.6210D|arxiv=1209.6210}}</ref> An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals [[in silico]] towards a desired class of compounds such as drugs or natural products.<ref>{{cite journal|title=FOG: Fragment Optimized Growth Algorithm for the de Novo Generation of Molecules occupying Druglike Chemical |last=Kutchukian |first=Peter |author2=Lou, David |author3=Shakhnovich, Eugene |journal=Journal of Chemical Information and Modeling |year=2009 |volume=49 |pages=1630–1642|doi=10.1021/ci9000458|pmid=19527020|issue=7}}</ref> As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. The fragment is not aware of its past (that is, of what is already bonded to it). It then transitions to the next state when a fragment is attached to it.
The transition probabilities are trained on databases of authentic classes of compounds.<ref>{{Cite journal |last1=Kutchukian|first1=P.S.|last2=Lou |first2=D.|last3=Shakhnovich |first3=Eugene I.|date=2009-06-15 |title=FOG: Fragment Optimized Growth Algorithm for the de Novo Generation of Molecules Occupying Druglike Chemical Space |journal=Journal of Chemical Information and Modeling |volume=49|issue=7|pages=1630–1642 |doi=10.1021/ci9000458|pmid=19527020 }}</ref> Also, the growth (and composition) of [[copolymer]]s may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to [[steric effects]], second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxial [[superlattice]] oxide materials can be accurately described by Markov chains.<ref>{{cite journal |last1= Kopp |first1= V. S. |last2= Kaganer |first2= V. M. |last3= Schwarzkopf |first3= J. |last4= Waidick |first4= F. |last5= Remmele |first5= T. |last6= Kwasniewski |first6= A. |last7= Schmidbauer |first7= M. |title= X-ray diffraction from nonperiodic layered structures with correlations: Analytical calculation and experiment on mixed Aurivillius films |doi= 10.1107/S0108767311044874 |journal= Acta Crystallographica Section A |volume= 68 |issue= Pt 1 |pages= 148–155 |year= 2011 |pmid= 22186291 |bibcode= 2012AcCrA..68..148K}}</ref> ===Biology=== Markov chains are used in various areas of biology. Notable examples include: * [[Phylogenetics]] and [[bioinformatics]], where most [[models of DNA evolution]] use continuous-time Markov chains to describe the [[nucleotide]] present at a given site in the [[genome]]. 
* [[Population dynamics]], where Markov chains are in particular a central tool in the theoretical study of [[matrix population models]]. * [[Neurobiology]], where Markov chains have been used, e.g., to simulate the mammalian neocortex.<ref>{{cite journal |last1=George |first1=Dileep |first2=Jeff |last2=Hawkins |year=2009 |title=Towards a Mathematical Theory of Cortical Micro-circuits|journal=PLOS Comput Biol |volume=5|issue=10|pages=e1000532|doi=10.1371/journal.pcbi.1000532 |editor1-last=Friston |editor1-first=Karl J.|pmid=19816557|pmc=2749218|bibcode=2009PLSCB...5E0532G |doi-access=free }}</ref> * [[Systems biology]], for instance with the modeling of viral infection of single cells.<ref>{{cite journal|last1=Gupta|first1=Ankur|last2=Rawlings|first2=James B.|date=April 2014|title=Comparison of Parameter Estimation Methods in Stochastic Chemical Kinetic Models: Examples in Systems Biology |journal=AIChE Journal|volume=60|issue=4|pages=1253–1268|doi=10.1002/aic.14409 |pmc=4946376|pmid=27429455|bibcode=2014AIChE..60.1253G }}</ref> * [[Compartmental models in epidemiology|Compartmental models]] for disease outbreak and epidemic modeling. ===Testing=== Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "[[Markov blanket]]", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing.{{cn|date=June 2024}} ===Solar irradiance variability=== [[Solar irradiance]] variability assessments are useful for [[solar power]] applications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. 
The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains,<ref>{{cite journal|title=Simple procedure for generating sequences of daily radiation values using a library of Markov transition matrices |last1=Aguiar |first1=R. J. | last2 = Collares-Pereira | first2 = M. | last3 = Conde | first3 = J. P. | journal=Solar Energy |year=1988 |volume=40 |issue=3 |pages=269–279|doi=10.1016/0038-092X(88)90049-7|bibcode=1988SoEn...40..269A }}</ref><ref>{{cite journal|title=Synthetic generation of high temporal resolution solar radiation data using Markov models |last1=Ngoko |first1=B. O. | last2 = Sugihara | first2= H. | last3= Funaki | first3 = T. |journal=Solar Energy |year=2014 |volume=103 |pages=160–170|doi=10.1016/j.solener.2014.02.026|bibcode=2014SoEn..103..160N }}</ref><ref>{{cite journal|title=Stochastic generation of synthetic minutely irradiance time series derived from mean hourly weather observation data |last1=Bright |first1=J. M. | last2 = Smith | first2= C. I. | last3= Taylor | first3 = P. G. | last4= Crook | first4 = R. |journal=Solar Energy |year=2015 |volume=115 |pages=229–242|doi=10.1016/j.solener.2015.02.032|bibcode=2015SoEn..115..229B |doi-access=free }}</ref><ref>{{cite journal|title=An N-state Markov-chain mixture distribution model of the clear-sky index |last1=Munkhammar |first1=J. | last2 = Widén | first2= J. | journal=Solar Energy |year=2018 |volume=173 |pages=487–495|doi=10.1016/j.solener.2018.07.056|bibcode=2018SoEn..173..487M }}</ref> also including modeling the two states of clear and cloudiness as a two-state Markov chain.<ref>{{cite journal|title=The stochastic two-state solar irradiance model (STSIM) |last=Morf |first=H. |journal=Solar Energy |year=1998 |volume=62 |issue=2 |pages=101–112|doi=10.1016/S0038-092X(98)00004-8 |bibcode=1998SoEn...62..101M}}</ref><ref>{{cite journal|title=A Markov-chain probability distribution mixture approach to the clear-sky index|last1=Munkhammar |first1=J. 
|last2 = Widén| first2 = J. | journal=Solar Energy |year=2018 |volume=170 |pages=174–183|doi=10.1016/j.solener.2018.05.055|bibcode=2018SoEn..170..174M }}</ref> ===Speech recognition=== [[Hidden Markov model]]s have been used in [[speech recognition#Hidden Markov models|automatic speech recognition]] systems.<ref>{{Cite journal |last1=Mor |first1=Bhavya |last2=Garhwal |first2=Sunita |last3=Kumar |first3=Ajay |date=May 2021 |title=A Systematic Review of Hidden Markov Models and Their Applications |url=https://link.springer.com/10.1007/s11831-020-09422-4 |journal=Archives of Computational Methods in Engineering |language=en |volume=28 |issue=3 |pages=1429–1448 |doi=10.1007/s11831-020-09422-4 |issn=1134-3060}}</ref> ===Information theory=== Markov chains are used throughout information processing. [[Claude Shannon]]'s famous 1948 paper ''[[A Mathematical Theory of Communication]]'', which in a single step created the field of [[information theory]], opens by introducing the concept of [[information entropy|entropy]] by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters.<ref>{{ Citation | last = Thomsen | first = Samuel W. | date = 2009 | title = Some evidence concerning the genesis of Shannon's information theory | journal = Studies in History and Philosophy of Science | volume = 40 | issue = 1 | pages = 81–91 | doi = 10.1016/j.shpsa.2008.12.011 | bibcode = 2009SHPSA..40...81T }} </ref> Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective [[data compression]] through [[entropy encoding]] techniques such as [[arithmetic coding]]. They also allow effective [[state estimation]] and [[pattern recognition]]. Markov chains also play an important role in [[reinforcement learning]]. 
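Shannon's letter-level modeling idea can be illustrated concretely: a first-order chain is fitted to sample text by counting letter pairs, and the entropy rate of the fitted chain (in bits per character) bounds how far such a source can be compressed by entropy coding. This is a minimal sketch; the sample string below is purely illustrative.

```python
from collections import Counter, defaultdict
from math import log2

text = "the theory of markov chains models the structure of english text"

# Count letter-pair (bigram) occurrences to estimate transition probabilities.
pair_counts = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    pair_counts[a][b] += 1

# Empirical state frequencies (every letter that has a successor).
state_counts = Counter(text[:-1])
total = sum(state_counts.values())

# Entropy rate: H = sum_i p(i) * sum_j -p(j|i) * log2 p(j|i)
entropy_rate = 0.0
for state, succs in pair_counts.items():
    n = sum(succs.values())
    h_state = -sum((c / n) * log2(c / n) for c in succs.values())
    entropy_rate += (state_counts[state] / total) * h_state

print(f"entropy rate of fitted chain: {entropy_rate:.2f} bits per character")
```

A realistic estimate would use a large corpus; the point here is only that the chain's transition probabilities directly yield the entropy rate used in compression bounds.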
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the [[Viterbi algorithm]] for error correction), speech recognition and [[bioinformatics]] (such as in rearrangements detection<ref name="rearrang">{{cite journal|last=Pratas|first=D|author2=Silva, R|author3= Pinho, A|author4= Ferreira, P|title=An alignment-free method to find and visualise rearrangements between pairs of DNA sequences|journal=Scientific Reports|date=May 18, 2015|volume=5|number=10203|pmid=25984837|doi=10.1038/srep10203|page=10203|pmc=4434998|bibcode=2015NatSR...510203P}}</ref>). The [[Lempel–Ziv–Markov chain algorithm|LZMA]] lossless data compression algorithm combines Markov chains with [[LZ77 and LZ78|Lempel-Ziv compression]] to achieve very high compression ratios. ===Queueing theory=== {{Main|Queueing theory}}Markov chains are the basis for the analytical treatment of queues ([[queueing theory]]). [[Agner Krarup Erlang]] initiated the subject in 1917.<ref name="MacTutor|id=Erlang">{{MacTutor|id=Erlang}}</ref> This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).<ref name="CTCN">S. P. Meyn, 2007. [http://www.meyn.ece.ufl.edu/archive/spm_files/CTCN/MonographTocBib.pdf Control Techniques for Complex Networks] {{webarchive|url=https://web.archive.org/web/20150513155013/http://www.meyn.ece.ufl.edu/archive/spm_files/CTCN/MonographTocBib.pdf |date=2015-05-13}}, Cambridge University Press, 2007.</ref> Numerous queueing models use continuous-time Markov chains. 
For example, an [[M/M/1 queue]] is a CTMC on the non-negative integers where upward transitions from ''i'' to ''i'' + 1 occur at rate ''λ'' according to a [[Poisson process]] and describe job arrivals, while transitions from ''i'' to ''i'' – 1 (for ''i'' ≥ 1) occur at rate ''μ'' (job service times are exponentially distributed) and describe completed services (departures) from the queue. ===Internet applications=== [[File:PageRank_with_Markov_Chain.png|right|thumb|A state diagram representing the PageRank algorithm with transition probability <math>\frac{\alpha}{k_i} + \frac{1-\alpha}{N}</math>.]] The [[PageRank]] of a webpage as used by [[Google]] is defined by a Markov chain.<ref>{{US patent|6285999}}</ref><ref name="BrijP.2016">{{cite book|url=https://books.google.com/books?id=Ctk6DAAAQBAJ&pg=PA448|title=Handbook of Research on Modern Cryptographic Solutions for Computer and Cyber Security|author1=Gupta, Brij|author2=Agrawal, Dharma P.|author3=Yamaguchi, Shingo|date=16 May 2016|publisher=IGI Global|isbn=978-1-5225-0106-0|pages=448–}}</ref><ref name="LangvilleMeyer2006">{{cite journal|last1=Langville|first1=Amy N.|last2=Meyer|first2=Carl D.|year=2006|title=A Reordering for the PageRank Problem|url=http://meyer.math.ncsu.edu/Meyer/PS_Files/ReorderingPageRank.pdf |journal=SIAM Journal on Scientific Computing|volume=27|issue=6|pages=2112–2113|citeseerx=10.1.1.58.8652|doi=10.1137/040607551 |bibcode=2006SJSC...27.2112L }}</ref> It is the probability of being at page <math>i</math> in the stationary distribution of the following Markov chain on all (known) webpages. If <math>N</math> is the number of known webpages, and a page <math>i</math> has <math>k_i</math> outgoing links, then it has transition probability <math>\frac{\alpha}{k_i} + \frac{1-\alpha}{N}</math> for all pages that it links to and <math>\frac{1-\alpha}{N}</math> for all pages that it does not link to.
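The stationary distribution of this chain can be approximated by power iteration. The following is a minimal sketch on a hypothetical four-page link graph, using a damping factor of 0.85 (the value reported by Brin and Page); the graph itself is purely illustrative.

```python
# Minimal PageRank sketch on a toy 4-page web graph (links are hypothetical).
# Each page i with k_i outgoing links sends probability alpha/k_i to each page
# it links to, plus (1 - alpha)/N to every page (the "teleport" term).

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # page -> pages it links to
N = len(links)
alpha = 0.85  # damping factor; Brin and Page used about 0.85

rank = [1.0 / N] * N
for _ in range(100):  # power iteration toward the stationary distribution
    new = [(1 - alpha) / N] * N
    for i, outs in links.items():
        for j in outs:
            new[j] += alpha * rank[i] / len(outs)
    rank = new

print([round(r, 3) for r in rank])  # the ranks sum to 1
```

Note that page 3, which no page links to, keeps only the teleport mass <math>(1-\alpha)/N</math>, so it receives the lowest rank.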
The parameter <math>\alpha</math> (the damping factor) is taken to be about 0.85.<ref name="pagerank">{{cite tech report |author1= Page, Lawrence |author2=Brin, Sergey |author3=Motwani, Rajeev |author4=Winograd, Terry |title= The PageRank Citation Ranking: Bringing Order to the Web |year= 1999 |citeseerx=10.1.1.31.1768}}</ref> Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.{{cn|date=January 2025}} ===Statistics=== Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called [[Markov chain Monte Carlo]] (MCMC). In recent years this has revolutionized the practicability of [[Bayesian inference]] methods, allowing a wide range of [[posterior distribution]]s to be simulated and their parameters found numerically.{{cn|date=June 2024}} ===Conflict and combat=== In 1971 a [[Naval Postgraduate School]] Master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and [[Lanchester's laws]].<ref name="dtic1">{{cite news |url=https://apps.dtic.mil/sti/citations/AD0736113 |title=A Finite Markov Chain Model of the Combat Process |work=Naval Postgraduate School |date=September 1971 |last1=Reese |first1=Thomas Fred }}</ref> In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict."<ref name="duncan75">{{cite journal
|doi=10.2307/2600315|jstor=2600315 |title=Markov Chain Models for Conflict Analysis: Results from Sino-Indian Relations, 1959-1964 |last1=Duncan |first1=George T. |last2=Siverson |first2=Randolph M. |journal=International Studies Quarterly |date=1975 |volume=19 |issue=3 |pages=344–374 }}</ref> ===Economics and finance=== Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. [[D. G. Champernowne]] built a Markov chain model of the distribution of income in 1953.<ref>{{cite journal| title=A model of income distribution | last=Champernowne | first=D | journal=The Economic Journal | volume=63 | year=1953 | issue=250 | pages=318–51 |doi=10.2307/2227127| jstor=2227127 }}</ref> [[Herbert A. Simon]] and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes.<ref>{{cite journal | title=The size distribution of business firms | last=Simon | first=Herbert | author2=C Bonini | journal=Am. Econ. Rev. | year=1958 | volume=42 | pages=425–40}}</ref> [[Louis Bachelier]] was the first to observe that stock prices followed a random walk.<ref>{{cite journal | title=Théorie de la spéculation | last=Bachelier | first=Louis | journal=Annales Scientifiques de l'École Normale Supérieure | year=1900 | volume=3 | pages=21–86| doi=10.24033/asens.476 | hdl=2027/coo.31924001082803 | hdl-access=free }}</ref> The random walk was later seen as evidence in favor of the [[efficient-market hypothesis]] and random walk models were popular in the literature of the 1960s.<ref>e.g.{{cite journal | title=The behavior of stock market prices | last=Fama | first=E | journal=Journal of Business | year=1965 | volume=38}}</ref> Regime-switching models of business cycles were popularized by [[James D. 
Hamilton]] (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions).<ref>{{cite journal|title=A new approach to the economic analysis of nonstationary time series and the business cycle |last=Hamilton |first=James |journal=Econometrica |year=1989 |volume=57 |pages=357–84|doi=10.2307/1912559|jstor=1912559|issue=2 |citeseerx=10.1.1.397.3582}}</ref> A more recent example is the [[Markov switching multifractal]] model of [[Laurent E. Calvet]] and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models.<ref>{{cite journal |title= Forecasting Multifractal Volatility|first1=Laurent E. |last1=Calvet |first2= Adlai J. |last2=Fisher |year=2001 |journal=[[Journal of Econometrics]] |volume=105 |issue=1 |pages=27–58 |doi=10.1016/S0304-4076(01)00069-0 |url=http://archive.nyu.edu/handle/2451/26894 }}</ref><ref>{{cite journal|title=How to Forecast long-run volatility: regime-switching and the estimation of multifractal processes |last=Calvet |first=Laurent |author2=Adlai Fisher |journal=Journal of Financial Econometrics |year=2004 |volume=2 |pages=49–83|doi=10.1093/jjfinec/nbh003 |citeseerx=10.1.1.536.8334 }}</ref> It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns. Dynamic macroeconomics makes heavy use of Markov chains. 
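A regime-switching chain of the kind Hamilton popularized can be sketched with two persistent states, one for expansion and one for recession. The transition probabilities below are illustrative, not Hamilton's estimates.

```python
import random

# Two-state regime chain in the spirit of Hamilton (1989): state 0 is
# expansion, state 1 is recession. P[i][j] is the probability of moving
# from regime i to regime j; the values are illustrative only.
P = [[0.95, 0.05],
     [0.20, 0.80]]

random.seed(0)
state, path = 0, []
for _ in range(200):  # simulate 200 periods of regime history
    path.append(state)
    state = 0 if random.random() < P[state][0] else 1

frac_recession = sum(path) / len(path)
print(f"fraction of periods in recession: {frac_recession:.2f}")
```

With these probabilities the stationary fraction of time in recession is 0.05 / (0.05 + 0.20) = 0.2, so a long simulated path spends roughly a fifth of its periods in the recession regime.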
An example is using Markov chains to exogenously model prices of equity (stock) in a [[general equilibrium]] setting.<ref>{{cite web |last1=Brennan |first1=Michael |first2=Yihong |last2=Xiab |title=Stock Price Volatility and the Equity Premium |website=Department of Finance, the Anderson School of Management, UCLA |url=http://bbs.cenet.org.cn/uploadImages/200352118122167693.pdf |archive-url=https://web.archive.org/web/20081228200849/http://bbs.cenet.org.cn/uploadImages/200352118122167693.pdf |url-status=dead |archive-date=2008-12-28}}</ref> [[Credit rating agency|Credit rating agencies]] produce annual tables of the transition probabilities for bonds of different credit ratings.<ref>{{Cite web|website=Columbia University|url=http://www.columbia.edu/~ww2040/4106S11/MC_BondRating.pdf|archive-url=https://web.archive.org/web/20160324112501/http://www.columbia.edu/~ww2040/4106S11/MC_BondRating.pdf|url-status=dead |title=A Markov Chain Example in Credit Risk Modelling |archive-date=March 24, 2016}}</ref> ===Social sciences=== Markov chains are generally used in describing [[path-dependent]] arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to [[Karl Marx]]'s {{lang|de|[[Das Kapital]]}}, tying [[economic development]] to the rise of [[capitalism]]. 
In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of the [[middle class]], the ratio of urban to rural residence, the rate of [[political]] mobilization, etc., will generate a higher probability of transitioning from an [[authoritarian]] to a [[democratic regime]].<ref>{{cite journal |last= Acemoglu |first= Daron |author2=Georgy Egorov |author3=Konstantin Sonin |title= Political model of social evolution |journal= Proceedings of the National Academy of Sciences |year= 2011 |volume= 108 |issue= Suppl 4 |pages= 21292–21296 |doi= 10.1073/pnas.1019454108 |pmid= 22198760 |pmc= 3271566 |citeseerx= 10.1.1.225.6090 |bibcode= 2011PNAS..10821292A|doi-access= free }}</ref> ===Music=== Markov chains are employed in [[algorithmic composition|algorithmic music composition]], particularly in [[software]] such as [[Csound]], [[Max (software)|Max]], and [[SuperCollider]]. In a first-order chain, the states of the system become note or pitch values, and a [[probability vector]] for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be [[MIDI]] note values, frequency ([[Hertz|Hz]]), or any other desirable metric.<ref>{{cite journal |title=Making Music with Algorithms: A Case-Study System |author1=K McAlpine |author2=E Miranda |author3=S Hoggar |journal=Computer Music Journal |issue=2 |year=1999 |volume=23 |doi=10.1162/014892699559733 |pages=19–30 }}</ref>
{| class="wikitable" style="float: left"
|+ 1st-order matrix
! Note !! A !! C{{music|sharp}} !! E{{music|flat}}
|-
! A
| 0.1 || 0.6 || 0.3
|-
! C{{music|sharp}}
| 0.25 || 0.05 || 0.7
|-
! E{{music|flat}}
| 0.7 || 0.3 || 0
|}
{| class="wikitable" style="float: left; margin-left: 1em"
|+ 2nd-order matrix
! Notes !! A !! D !! G
|-
! AA
| 0.18 || 0.6 || 0.22
|-
! AD
| 0.5 || 0.5 || 0
|-
! AG
| 0.15 || 0.75 || 0.1
|-
! DD
| 0 || 0 || 1
|-
! DA
| 0.25 || 0 || 0.75
|-
! DG
| 0.9 || 0.1 || 0
|-
! GG
| 0.4 || 0.4 || 0.2
|-
! GA
| 0.5 || 0.25 || 0.25
|-
! GD
| 1 || 0 || 0
|}
{{Clear}} A second-order Markov chain can be introduced by considering the current state ''and'' also the previous state, as indicated in the second table. Higher, ''n''th-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of [[phrase (music)|phrasal]] structure, rather than the 'aimless wandering' produced by a first-order system.<ref name="Roads">{{cite book|editor=Curtis Roads |title=The Computer Music Tutorial |year=1996|publisher=MIT Press|isbn= 978-0-262-18158-7}}</ref> Markov chains can be used structurally, as in Xenakis's ''Analogique A and B''.<ref>Xenakis, Iannis; Kanach, Sharon (1992) ''Formalized Music: Mathematics and Thought in Composition'', Pendragon Press. {{ISBN|1576470792}}</ref> Markov models are also used in systems that react interactively to music input.<ref>{{Cite web|url=http://www.csl.sony.fr/~pachet/|archive-url=https://web.archive.org/web/20120713235933/http://www.csl.sony.fr/~pachet/|url-status=dead |title=Continuator|archive-date=July 13, 2012}}</ref> Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.<ref>Pachet, F.; Roy, P.; Barbieri, G.
(2011) [http://www.csl.sony.fr/downloads/papers/2011/pachet-11b.pdf "Finite-Length Markov Processes with Constraints"] {{webarchive|url=https://web.archive.org/web/20120414183247/http://www.csl.sony.fr/downloads/papers/2011/pachet-11b.pdf |date=2012-04-14}}, ''Proceedings of the 22nd International Joint Conference on Artificial Intelligence'', IJCAI, pages 635–642, Barcelona, Spain, July 2011</ref> ===Games and sports=== Markov chains can be used to model many games of chance. The children's games [[Snakes and Ladders]] and "[[Hi Ho! Cherry-O]]", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).{{cn|date=January 2025}} Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game can be modeled as a Markov chain whose states record the number of outs and the positions of the runners. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for individual players as well as for a team.<ref>{{cite web |last=Pankin |first=Mark D. |title=MARKOV CHAIN MODELS: THEORETICAL BACKGROUND |url=http://www.pankin.com/markov/theory.htm |access-date=2007-11-26 |url-status=usurped |archive-url=https://web.archive.org/web/20071209122054/http://www.pankin.com/markov/theory.htm |archive-date=2007-12-09 }}</ref> He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as [[bunt (baseball)|bunting]] and [[base stealing]] and differences when playing on grass vs. [[AstroTurf]].<ref>{{cite web |last=Pankin |first=Mark D.
|title=BASEBALL AS A MARKOV CHAIN |url=http://www.pankin.com/markov/intro.htm |archive-url=https://web.archive.org/web/20010513164045/http://www.pankin.com/markov/intro.htm |url-status=usurped |archive-date=May 13, 2001 |access-date=2009-04-24 }}</ref> ===Markov text generators=== Markov processes can also be used to [[natural language generation|generate superficially real-looking text]] given a sample document. Markov processes are used in a variety of recreational "[[parody generator]]" software (see [[dissociated press]], Jeff Harrison,<ref>{{Cite web|url=http://www.fieralingue.it/modules.php?name=Content&pa=list_pages_categories&cid=111|archive-url=https://web.archive.org/web/20101206043430/http://www.fieralingue.it/modules.php?name=Content&pa=list_pages_categories&cid=111|url-status=dead |title=Poet's Corner – Fieralingue|archive-date=December 6, 2010}}</ref> [[Mark V. Shaney]],<ref name="Travesty">{{cite journal |last1= Kenner |first1= Hugh |last2= O'Rourke |first2= Joseph |author-link2= Joseph O'Rourke (professor) |title= A Travesty Generator for Micros |date= November 1984 |journal= BYTE |volume= 9 |issue= 12 |pages= 129–131, 449–469 }} </ref><ref name="Hartman">{{cite book|title=Virtual Muse: Experiments in Computer Poetry|last=Hartman|first=Charles|publisher=Wesleyan University Press|year=1996|isbn=978-0-8195-2239-9|place=Hanover, NH|url-access=registration|url=https://archive.org/details/virtualmuseexper00hart}}</ref> and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
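A minimal word-level generator of this kind can be sketched as follows: successor words are collected from a sample document, and the chain is then walked from a chosen start word. The training sentence and variable names here are purely illustrative.

```python
import random
from collections import defaultdict

# Toy Markov text generator of the "parody generator" kind: learn word
# transitions from a sample document, then walk the chain.
sample = ("markov processes can generate superficially real-looking text "
          "markov processes are used in parody generator software")

successors = defaultdict(list)
words = sample.split()
for a, b in zip(words, words[1:]):
    successors[a].append(b)  # repeated entries encode transition frequencies

random.seed(1)
word, out = "markov", []
for _ in range(8):
    out.append(word)
    choices = successors.get(word)
    if not choices:          # dead end: this word was never followed by another
        break
    word = random.choice(choices)

print(" ".join(out))
```

Because each step depends only on the current word, the output mimics local word order in the sample while wandering globally, which is exactly the "superficially real-looking" quality described above.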