Sampling distribution
{{Short description|Probability distribution of the possible sample outcomes}}
{{Use dmy dates|date=September 2015}}
In [[statistics]], a '''sampling distribution''' or '''finite-sample distribution''' is the [[probability distribution]] of a given [[random sample|random-sample]]-based [[statistic]]. For an arbitrarily large number of samples, where each sample (a set of observations) is used to compute one value of a statistic (for example, the [[sample mean]] or sample [[variance]]), the sampling distribution is the probability distribution of the values that the statistic takes on. In many contexts, only one sample is observed, but the sampling distribution can be found theoretically.

Sampling distributions are important in statistics because they provide a major simplification en route to [[statistical inference]]. More specifically, they allow analytical considerations to be based on the probability distribution of a statistic, rather than on the [[joint probability distribution]] of all the individual sample values.

==Introduction==
The '''sampling distribution''' of a statistic is the [[probability distribution|distribution]] of that statistic, considered as a [[random variable]], when derived from a [[random sample]] of size <math>n</math>. It may be considered as the distribution of the statistic for ''all possible samples from the same population'' of a given sample size. The sampling distribution depends on the underlying [[probability distribution|distribution]] of the population, the statistic being considered, the sampling procedure employed, and the sample size used. There is often considerable interest in whether the sampling distribution can be approximated by an [[asymptotic distribution]], which corresponds to the limiting case either as the number of random samples of finite size drawn from an infinite population tends to infinity, or as the size of a single sample drawn from that population tends to infinity.

For example, consider a [[normal distribution|normal]] population with mean <math>\mu</math> and variance <math>\sigma^2</math>. Assume we repeatedly take samples of a given size from this population and calculate the [[arithmetic mean]] <math>\bar x</math> for each sample – this statistic is called the [[sample mean]]. The distribution of these means is called the "sampling distribution of the sample mean". This distribution is normal, <math>\mathcal{N}(\mu, \sigma^2/n)</math> (''n'' is the sample size), since the underlying population is normal, although sampling distributions may also often be close to normal even when the population distribution is not (see [[central limit theorem]]). An alternative to the sample mean is the sample [[median]]. When calculated from the same population, it has a different sampling distribution from that of the mean and is generally not normal (but it may be close for large sample sizes).

The mean of a sample from a population having a normal distribution is an example of a simple statistic taken from one of the simplest [[statistical population]]s. For other statistics and other populations the formulas are more complicated, and often they do not exist in [[Closed-form expression|closed form]]. In such cases the sampling distributions may be approximated through [[Monte-Carlo simulation]]s,<ref>{{cite book|last=Mooney|first=Christopher Z.|title=Monte Carlo simulation|year=1999|publisher=Sage|location=Thousand Oaks, Calif.|isbn=9780803959439|url=https://books.google.com/books?id=xQRgh4z_5acC|page=2}}</ref> [[Bootstrapping (statistics)|bootstrap]] methods, or [[asymptotic distribution]] theory.
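The behaviour described above can be illustrated with a minimal Monte Carlo sketch (Python with NumPy is assumed here; the population parameters, sample size, seed and number of replications are arbitrary choices for illustration, not values from the article):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)

mu, sigma = 10.0, 2.0   # population mean and standard deviation (arbitrary choices)
n = 25                  # sample size
reps = 100_000          # number of repeated samples

# Draw `reps` independent samples of size n and record each sample mean.
sample_means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

# The empirical distribution of the sample means approximates N(mu, sigma^2 / n).
print(sample_means.mean())       # close to mu = 10
print(sample_means.std(ddof=1))  # close to sigma / sqrt(n) = 0.4
</syntaxhighlight>

The same approach applies to statistics whose sampling distribution has no closed form, such as the sample median: replacing <code>.mean(axis=1)</code> with <code>np.median(..., axis=1)</code> approximates the sampling distribution of the median instead.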
==Standard error==
The [[standard deviation]] of the sampling distribution of a [[statistic]] is referred to as the [[standard error (statistics)|standard error]] of the statistic. For the case where the statistic is the sample mean, and samples are uncorrelated, the standard error is:
<math display="block">\sigma_{\bar x} = \frac{\sigma}{\sqrt{n}}</math>
where <math>\sigma</math> is the standard deviation of the population distribution of that quantity and <math>n</math> is the sample size (number of items in the sample).

An important implication of this formula is that the sample size must be quadrupled (multiplied by 4) to achieve half (1/2) the measurement error. When designing statistical studies where cost is a factor, this may have a role in understanding cost–benefit tradeoffs.

For the case where the statistic is the sample total, and samples are uncorrelated, the standard error is:
<math display="block">\sigma_{\Sigma x} = \sigma\sqrt{n}</math>
where, again, <math>\sigma</math> is the standard deviation of the population distribution of that quantity and <math>n</math> is the sample size (number of items in the sample).
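As a rough numerical check of the quadrupling claim above, the following sketch (again Python with NumPy; the value of <math>\sigma</math>, the seed and the two sample sizes are arbitrary) compares the simulated standard error of the mean for <math>n = 25</math> and <math>n = 100</math> with <math>\sigma/\sqrt{n}</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=1)

sigma = 2.0     # population standard deviation (arbitrary choice)
reps = 50_000   # number of simulated samples for each sample size

for n in (25, 100):  # quadrupling the sample size from 25 to 100...
    means = rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)
    print(n, means.std(ddof=1), sigma / np.sqrt(n))
# ...halves the standard error: roughly 0.4 for n = 25 versus 0.2 for n = 100.
</syntaxhighlight>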
==Examples==
[[File:Sampling distribution.png|thumb|Sampling distribution of the sample mean of normally distributed random numbers. With increasing sample size, the sampling distribution becomes increasingly concentrated around the population mean.]]
{| class="wikitable"
|-
! Population || Statistic || Sampling distribution
|-
| [[Normal distribution|Normal]]: <math>\mathcal{N}(\mu, \sigma^2)</math>
| Sample mean <math>\bar X</math> from samples of size ''n''
| <math>\bar X \sim \mathcal{N}\Big(\mu,\, \frac{\sigma^2}{n} \Big)</math>. <small>If the standard deviation <math>\sigma</math> is not known, one can consider <math>T = \left(\bar{X} - \mu\right) \frac{\sqrt{n}}{S}</math>, which follows the [[Student's t-distribution]] with <math>\nu = n - 1</math> degrees of freedom. Here <math>S^2</math> is the sample variance, and <math>T</math> is a [[pivotal quantity]], whose distribution does not depend on <math>\sigma</math>.</small>
|-
| [[Bernoulli distribution|Bernoulli]]: <math>\operatorname{Bernoulli}(p)</math>
| Sample proportion of "successful trials" <math>\bar X</math>
| [[Binomial distribution|<math>n \bar X \sim \operatorname{Binomial}(n, p)</math>]]
|-
| Two independent normal populations:<br /><math>\mathcal{N}(\mu_1, \sigma_1^2)</math> and <math>\mathcal{N}(\mu_2, \sigma_2^2)</math>
| Difference between sample means, <math>\bar X_1 - \bar X_2</math>
| <math>\bar X_1 - \bar X_2 \sim \mathcal{N}\!\left(\mu_1 - \mu_2,\, \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2} \right)</math>
|-
| Any absolutely continuous distribution ''F'' with density ''f''
| [[Median]] <math>X_{(k)}</math> from a sample of size ''n'' = 2''k'' − 1, where the sample is ordered <math>X_{(1)}</math> to <math>X_{(n)}</math>
| <math>f_{X_{(k)}}(x) = \frac{(2k-1)!}{(k-1)!^2}f(x)\Big(F(x)(1-F(x))\Big)^{k-1}</math>
|-
| Any distribution with distribution function ''F''
| [[Maximum]] <math>M=\max\ X_k</math> from a random sample of size ''n''
| <math>F_M(x) = P(M\le x) = \prod P(X_k\le x)= \left(F(x)\right)^n</math>
|}

==References==
{{reflist}}
* Merberg, A. and S. J. Miller (2008). [https://web.williams.edu/Mathematics/sjmiller/public_html/BrownClasses/162/Handouts/MedianThm04.pdf "The Sample Distribution of the Median"]. ''Course Notes for Math 162: Mathematical Statistics'', pp. 1–9.

==External links==
* [http://demonstrations.wolfram.com/StatisticsAssociatedWithNormalSamples/ ''Mathematica'' demonstration showing the sampling distribution of various statistics (e.g. Σ''x''²) for a normal population]

{{Statistics|inference}}
{{Authority control}}

[[Category:Statistical inference]]
[[Category:Sampling (statistics)]]