Monte Carlo method
== Application ==
Monte Carlo methods are often used in [[physics|physical]] and [[mathematics|mathematical]] problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes:<ref>{{cite journal|author-last1=Kroese |author-first1=D. P. |author-last2=Brereton |author-first2=T. |author-last3=Taimre |author-first3=T. |author-last4=Botev |author-first4=Z. I. |year=2014 |title=Why the Monte Carlo method is so important today |journal = WIREs Comput Stat |volume=6 |issue=6 |pages=386–392 |doi=10.1002/wics.1314 |s2cid=18521840 }}</ref> [[optimization]], [[numerical integration]], and generating draws from a [[probability distribution]]. In physics-related problems, Monte Carlo methods are useful for simulating systems with many [[coupling (physics)|coupled]] [[degrees of freedom]], such as fluids, disordered materials, strongly coupled solids, and cellular structures (see [[cellular Potts model]], [[interacting particle systems]], [[McKean–Vlasov process]]es, [[kinetic theory of gases|kinetic models of gases]]). Other examples include modeling phenomena with significant [[uncertainty]] in inputs such as the calculation of [[risk]] in business and, in mathematics, evaluation of multidimensional [[Integral|definite integral]]s with complicated [[boundary conditions]]. In application to systems engineering problems (space, [[oil exploration]], aircraft design, etc.), Monte Carlo–based predictions of failure, [[cost overrun]]s and schedule overruns are routinely better than human intuition or alternative "soft" methods.<ref>{{cite journal|author-last1=Hubbard |author-first1=Douglas |author-last2=Samuelson |author-first2=Douglas A. |date=October 2009 |title=Modeling Without Measurements |url=http://viewer.zmags.com/publication/357348e6#/357348e6/28 |journal=OR/MS Today |pages=28–33}}</ref> In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation.
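The use of Monte Carlo sampling for numerical integration can be illustrated with a minimal sketch (not drawn from the cited sources; the function, interval, and sample size are illustrative assumptions): the integral of a function over an interval is estimated as the interval length times the average of the function at uniformly drawn random points.

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the definite integral of f over [a, b] by averaging f at
    n uniformly drawn points; by the law of large numbers the empirical
    mean converges to the expected value of f under the uniform law."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of sqrt(1 - x^2) over [0, 1] equals pi/4 ~ 0.7854
estimate = mc_integrate(lambda x: (1.0 - x * x) ** 0.5, 0.0, 1.0)
```

The error of such an estimator decreases as the inverse square root of the number of samples, independently of the dimension of the domain, which is why the approach remains practical for multidimensional integrals with complicated boundary conditions.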
By the [[law of large numbers]], integrals described by the [[expected value]] of some random variable can be approximated by taking the [[Sample mean and sample covariance|empirical mean]] ({{a.k.a.}} the 'sample mean') of independent samples of the variable. When the [[probability distribution]] of the variable is parameterized, mathematicians often use a [[Markov chain Monte Carlo]] (MCMC) sampler.<ref>{{cite journal|title=Equation of State Calculations by Fast Computing Machines |journal=The Journal of Chemical Physics |date=June 1, 1953 |issn=0021-9606 |pages=1087–1092|volume=21|issue=6|doi=10.1063/1.1699114 |author-first1=Nicholas |author-last1=Metropolis |author-first2=Arianna W. |author-last2=Rosenbluth |author-first3=Marshall N. |author-last3=Rosenbluth |author-first4=Augusta H. |author-last4=Teller |author-first5=Edward |author-last5=Teller |bibcode=1953JChPh..21.1087M |osti=4390578 |s2cid=1046577 }}</ref><ref>{{cite journal|title=Monte Carlo sampling methods using Markov chains and their applications |journal=Biometrika |date=April 1, 1970 |issn=0006-3444 |pages=97–109 |volume=57 |issue=1 |doi=10.1093/biomet/57.1.97 |author-first=W. K. |author-last=Hastings |bibcode=1970Bimka..57...97H |s2cid=21204149 }}</ref><ref>{{cite journal|title=The Multiple-Try Method and Local Optimization in Metropolis Sampling |journal=Journal of the American Statistical Association |date=March 1, 2000 |issn=0162-1459 |pages=121–134 |volume=95 |issue=449 |doi=10.1080/01621459.2000.10473908 |author-first1=Jun S. |author-last1=Liu |author-first2=Faming |author-last2=Liang |author-first3=Wing Hung |author-last3=Wong |s2cid=123468109 }}</ref> The central idea is to design a judicious [[Markov chain]] model with a prescribed [[stationary probability distribution]]. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution.<ref>{{cite journal |author-last1=Spall |author-first1=J. C. 
|year=2003 |title=Estimation via Markov Chain Monte Carlo |doi=10.1109/MCS.2003.1188770 |journal=IEEE Control Systems Magazine |volume=23 |issue=2 |pages=34–45 }}</ref><ref>{{cite journal |doi=10.1109/MCS.2018.2876959 |title=Stationarity and Convergence of the Metropolis-Hastings Algorithm: Insights into Theoretical Aspects |journal=IEEE Control Systems Magazine |volume=39 |pages=56–67 |year=2019 |author-last1=Hill |author-first1=Stacy D. |author-last2=Spall |author-first2=James C. |s2cid=58672766}}</ref> By the [[ergodic theorem]], the stationary distribution is approximated by the [[empirical measure]]s of the random states of the MCMC sampler. In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a [[Markov process]] whose transition probabilities depend on the distributions of the current random states (see [[McKean–Vlasov process]]es, [[particle filter|nonlinear filtering equation]]).<ref name="kol10">{{cite book|author-last=Kolokoltsov |author-first=Vassili |title=Nonlinear Markov processes |year=2010 |publisher=[[Cambridge University Press]] |pages=375}}</ref><ref name="dp13">{{cite book|author-last=Del Moral |author-first=Pierre |title=Mean field simulation for Monte Carlo integration |year=2013 |publisher=Chapman & Hall/[[CRC Press]] |quote=Monographs on Statistics & Applied Probability |url=http://www.crcpress.com/product/isbn/9781466504059 |pages=626}}</ref> In other instances, a flow of probability distributions with an increasing level of sampling complexity arises (path space models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others).
These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain.<ref name="dp13" /><ref>{{cite journal|title=Sequential Monte Carlo samplers |author-last1=Del Moral |author-first1=P. |author-last2=Doucet |author-first2=A. |author-last3=Jasra |author-first3=A. |year=2006 |doi=10.1111/j.1467-9868.2006.00553.x |volume=68 |issue=3 |journal=Journal of the Royal Statistical Society, Series B |pages=411–436 |arxiv=cond-mat/0212648 |s2cid=12074789 }}</ref> A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled [[empirical measure]]s. In contrast with traditional Monte Carlo and MCMC methodologies, these [[mean-field particle methods|mean-field particle]] techniques rely on sequential interacting samples. The terminology ''mean field'' reflects the fact that each of the ''samples'' ({{a.k.a.}} particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
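The MCMC idea described above — a Markov chain designed so that a prescribed target is its stationary distribution — can be sketched with a minimal random-walk Metropolis sampler (an illustrative sketch, not taken from the cited sources; the Gaussian target and tuning constants are assumptions):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis sampler: propose x' = x + step * N(0, 1)
    and accept with probability min(1, target(x') / target(x)).  This
    acceptance rule makes the target the chain's stationary distribution,
    so in the limit the chain's states are samples from the target."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step * rng.gauss(0.0, 1.0)
        delta = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, delta)):
            x = proposal  # accept; otherwise the chain stays put
        samples.append(x)
    return samples

# Target: standard normal density, known only up to its normalizing constant
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

Note that the sampler only needs the target density up to a multiplicative constant, since the constant cancels in the acceptance ratio; by the ergodic theorem, empirical averages over the chain (such as `mean` and `var` above) approximate expectations under the target.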
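The mean-field particle idea — replacing the unknown distribution in the evolution equation by the empirical measure of the sampled copies — can be sketched for a simple McKean–Vlasov-type diffusion (an illustrative sketch, not from the cited sources; the drift toward the empirical mean, the noise level, and the step sizes are assumptions):

```python
import random

def mean_field_step(particles, dt, sigma, rng):
    """One Euler step of the N-particle approximation of the nonlinear
    diffusion dX = -(X - E[X]) dt + sigma dW: the unknown law of X enters
    only through its mean, which is replaced by the empirical mean of the
    particle system, so every particle interacts with all the others."""
    m = sum(particles) / len(particles)  # mean of the empirical measure
    return [x - (x - m) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            for x in particles]

rng = random.Random(1)
particles = [rng.uniform(-5.0, 5.0) for _ in range(1000)]
for _ in range(200):  # simulate up to time 10 with step dt = 0.05
    particles = mean_field_step(particles, dt=0.05, sigma=0.5, rng=rng)
```

As the number of particles grows, the empirical measure converges to the deterministic law of the nonlinear process and the coupling between individual particles vanishes; in this example the initially widely spread particles contract toward their common empirical mean.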