===Applied statistics===
The standards for Monte Carlo experiments in statistics were set by Sawilowsky.<ref>{{cite journal |author-last1=Cassey |author-last2=Smith |year=2014 |title=Simulating confidence for the Ellison-Glaeser Index |journal=Journal of Urban Economics |volume=81 |page=93 |doi=10.1016/j.jue.2014.02.005}}</ref> In applied statistics, Monte Carlo methods may be used for at least four purposes:
# To compare competing statistics for small samples under realistic data conditions. Although [[type I error]] and power properties of statistics can be calculated for data drawn from classical theoretical distributions (''e.g.'', the [[normal curve]] or the [[Cauchy distribution]]) under [[asymptotic]] conditions (''i.e.'', infinite sample size and an infinitesimally small treatment effect), real data often do not follow such distributions.<ref>{{harvnb|Sawilowsky|Fahoome|2003}}</ref>
# To provide implementations of [[Statistical hypothesis testing|hypothesis tests]] that are more efficient than exact tests such as [[permutation tests]] (which are often impossible to compute) while being more accurate than critical values for [[asymptotic distribution]]s.
# To provide a random sample from the posterior distribution in [[Bayesian inference]]. This sample then approximates and summarizes all the essential features of the posterior.
# To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the [[Fisher information]] matrix.<ref>{{cite journal |doi=10.1198/106186005X78800 |title=Monte Carlo Computation of the Fisher Information Matrix in Nonstandard Settings |journal=Journal of Computational and Graphical Statistics |volume=14 |issue=4 |pages=889–909 |year=2005 |author-last1=Spall |author-first1=James C. |citeseerx=10.1.1.142.738 |s2cid=16090098}}</ref><ref>{{cite journal |doi=10.1016/j.csda.2009.09.018 |title=Efficient Monte Carlo computation of Fisher information matrix using prior information |journal=Computational Statistics & Data Analysis |volume=54 |issue=2 |pages=272–289 |year=2010 |author-last1=Das |author-first1=Sonjoy |author-last2=Spall |author-first2=James C. |author-last3=Ghanem |author-first3=Roger}}</ref>

Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate [[randomization test]] is based on a specified subset of all permutations (which entails potentially enormous housekeeping to track which permutations have been considered). The Monte Carlo approach instead draws a specified number of permutations at random, exchanging a minor loss in precision when a permutation is drawn twice or more for the efficiency of not having to track which permutations have already been selected.
{{anchor|Monte Carlo tree search}}
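The trade-off described above can be sketched in a few lines of Python. This is a minimal illustration only, not a reference implementation: the function name, the choice of an absolute difference in means as the test statistic, and the default number of draws are all assumptions made for the example. Random relabelings of the pooled data are drawn with replacement, so a permutation may recur, but no bookkeeping of previously seen permutations is needed.

```python
import random

def monte_carlo_permutation_test(x, y, num_draws=10_000, seed=0):
    """Two-sample Monte Carlo permutation test (illustrative sketch).

    Rather than enumerating every permutation as an exact test would,
    draw `num_draws` random relabelings of the pooled data; duplicates
    may occur, trading a minor loss in precision for not having to
    track which permutations have already been selected.
    """
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    n_x, n_y = len(x), len(y)
    # Test statistic: absolute difference in sample means (an assumption
    # of this example; any statistic computable from a relabeling works).
    observed = abs(sum(x) / n_x - sum(y) / n_y)
    hits = 0
    for _ in range(num_draws):
        rng.shuffle(pooled)  # random relabeling of the pooled sample
        diff = abs(sum(pooled[:n_x]) / n_x - sum(pooled[n_x:]) / n_y)
        if diff >= observed:
            hits += 1
    # Add one to numerator and denominator so the estimated p-value
    # is never exactly zero (a common convention).
    return (hits + 1) / (num_draws + 1)
```

With identical samples the observed statistic is zero, so every draw is at least as extreme and the estimated p-value is 1; with well-separated samples only the rare relabelings that reproduce the split are counted, giving a small p-value.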