==Applications and examples==

A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution. Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample [[statistic]]s to the normal distribution in controlled experiments.

{{multiple image
 |total_width=830
 |align=center
 |image1=Dice sum central limit theorem.svg
 |caption1=Comparison of probability density functions {{math|''p''(''k'')}} for the sum of {{mvar|n}} fair 6-sided dice, showing their convergence to a normal distribution with increasing {{mvar|n}}, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).
 |image2=Empirical CLT - Figure - 040711.jpg
 |caption2=This figure demonstrates the central limit theorem. The sample means are generated using a random number generator, which draws numbers between 0 and 100 from a uniform probability distribution. It illustrates that increasing sample sizes result in the 500 measured sample means being more closely distributed about the population mean (50 in this case). It also compares the observed distributions with the distributions that would be expected for a normalized Gaussian distribution, and shows the [[Pearson's chi-squared test|chi-squared]] values that quantify the goodness of the fit (the fit is good if the reduced [[Pearson's chi-squared test|chi-squared]] value is less than or approximately equal to one). The input into the normalized Gaussian function is the mean of sample means (~50) and the mean sample standard deviation divided by the square root of the sample size (~28.87/{{math|{{sqrt|''n''}}}}), which is called the standard deviation of the mean (since it refers to the spread of sample means).
}}

[[File:Mean-of-the-outcomes-of-rolling-a-fair-coin-n-times.svg|center|thumb|820px|Another simulation, using the binomial distribution. Random 0s and 1s were generated, and their means calculated for sample sizes ranging from 1 to 2048. As the sample size increases, the tails become thinner and the distribution becomes more concentrated around the mean.]]

===Regression===

[[Regression analysis]], and in particular [[ordinary least squares]], models a [[dependent variable]] as a function of one or more [[independent variable]]s, together with an additive [[Errors and residuals in statistics|error term]]. Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of many independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be well approximated by a normal distribution.

===Other illustrations===
{{Main|Illustration of the central limit theorem}}

Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem.<ref name="Marasinghe">{{cite conference |last1=Marasinghe |first1=M. |last2=Meeker |first2=W. |last3=Cook |first3=D. |last4=Shin |first4=T. S. |date=Aug 1994 |title=Using graphics and simulation to teach statistical concepts |conference=Annual meeting of the American Statistical Association, Toronto, Canada}}</ref>
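
The convergence can also be checked with a few lines of simulation. The following sketch (not taken from the cited reference; it assumes the NumPy library, and the function name <code>dice_means</code> is purely illustrative) compares the spread of the sample mean of {{mvar|n}} fair dice with the standard deviation predicted by the central limit theorem, echoing the dice example above.

<syntaxhighlight lang="python">
# Minimal simulation sketch: empirical spread of the mean of n fair dice
# versus the spread predicted by the central limit theorem.
# Assumes NumPy; dice_means is an illustrative helper, not a standard function.
import numpy as np

rng = np.random.default_rng(seed=0)

def dice_means(n_dice, n_trials=100_000):
    """Return n_trials sample means, each the mean of n_dice fair dice rolls."""
    rolls = rng.integers(1, 7, size=(n_trials, n_dice))
    return rolls.mean(axis=1)

# A single die has mean 3.5 and variance 35/12, so the CLT predicts that the
# mean of n dice is approximately Normal(3.5, sqrt(35 / (12 n))).
for n in (1, 2, 10, 50):
    means = dice_means(n)
    predicted_sd = np.sqrt(35 / 12 / n)
    print(f"n={n:3d}  empirical sd={means.std():.4f}  CLT sd={predicted_sd:.4f}")
</syntaxhighlight>

As {{mvar|n}} grows, the empirical standard deviation tracks the {{math|1/{{sqrt|''n''}}}} scaling predicted by the theorem, and a histogram of the sample means becomes increasingly bell-shaped.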
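
A similar sketch illustrates the regression argument above: if each observation's error is the sum of many independent, non-normal disturbances, the combined error is approximately normal. This is a minimal illustration under assumed choices (NumPy, uniform component disturbances, and the sample sizes shown), not a prescribed procedure.

<syntaxhighlight lang="python">
# Minimal sketch: each error is the sum of 100 independent uniform
# disturbances (clearly non-normal components), yet the summed error is
# approximately normal, as the central limit theorem predicts.
# Assumes NumPy; the sizes below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(seed=1)

n_obs, n_components = 10_000, 100
components = rng.uniform(-0.5, 0.5, size=(n_obs, n_components))
errors = components.sum(axis=1)

# Each component has variance 1/12, so the CLT predicts the summed error is
# roughly Normal(0, sqrt(n_components / 12)).
print("empirical sd:", errors.std())
print("CLT sd:      ", np.sqrt(n_components / 12.0))

# Skewness and excess kurtosis near 0 indicate approximate normality.
z = (errors - errors.mean()) / errors.std()
print("skewness:", (z**3).mean(), " excess kurtosis:", (z**4).mean() - 3)
</syntaxhighlight>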