Margin of error
{{Short description|Statistic expressing the amount of random sampling error in a survey's results}} {{about|the statistical precision of estimates from sample surveys|observational errors|Observational error|safety margins in engineering|Factor of safety|tolerance in engineering|Engineering tolerance|the eponymous movie|Margin for error (film)}} {{More footnotes needed|date=November 2021}} [[File:Margin-of-error-95.svg|thumb|upright=1.4|[[Probability density function|Probability densities]] of polls of different sizes, each color-coded to its 95% [[confidence interval]] (below), margin of error (left), and sample size (right). Each interval reflects the range within which one may have 95% confidence that the ''true'' percentage may be found, given a reported percentage of 50%. The ''margin of error'' is half the confidence interval (also, the ''radius'' of the interval). The larger the sample, the smaller the margin of error. Also, the further from 50% the reported percentage, the smaller the margin of error.]] The '''margin of error''' is a statistic expressing the amount of random [[sampling error]] in the results of a [[Statistical survey|survey]]. The larger the margin of error, the less confidence one should have that a poll result would reflect the result of a simultaneous census of the entire [[Statistical population|population]]. The margin of error will be positive whenever a population is incompletely sampled and the outcome measure has positive [[variance]], which is to say, whenever the measure ''varies''. The term ''margin of error'' is often used in non-survey contexts to indicate [[observational error]] in reporting measured quantities. == Concept == Consider a simple ''yes/no'' poll <math>P</math> as a sample of <math>n</math> respondents drawn from a population <math>N \text{, }(n \ll N)</math> reporting the percentage <math>p</math> of ''yes'' responses. 
We would like to know how close <math>p</math> is to the true result of a survey of the entire population <math>N</math>, without having to conduct one. If, hypothetically, we were to conduct a poll <math>P</math> over subsequent samples of <math>n</math> respondents (newly drawn from <math>N</math>), we would expect those subsequent results <math>p_1,p_2,\ldots</math> to be normally distributed about <math>\overline{p}</math>, the true but unknown percentage of the population. The ''margin of error'' describes the distance within which a specified percentage of these results is expected to vary from <math>\overline{p}</math>. By the [[central limit theorem]], the distribution of sample means (here, the percentage of ''yes'' responses) approaches a normal distribution as the sample size increases; this justifies the normal approximation when the sampling is unbiased, but says nothing about the underlying distribution of the data itself.<ref>{{Cite web |last=Siegfried |first=Tom |date=2014-07-03 |title=Scientists' grasp of confidence intervals doesn't inspire confidence {{!}} Science News |url=https://www.sciencenews.org/blog/context/scientists-grasp-confidence-intervals-doesnt-inspire-confidence |access-date=2024-08-06 |website=Science News |language=en-US}}</ref> According to the [[68–95–99.7 rule]], we would expect that 95% of the results <math>p_1,p_2,\ldots</math> will fall within ''about'' two [[standard deviation]]s (<math>\plusmn2\sigma_{P}</math>) either side of the true mean <math>\overline{p}</math>. This interval is called the [[confidence interval]], and the ''radius'' (half the interval) is called the ''margin of error'', corresponding to a 95% ''confidence level''.
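This behaviour can be illustrated with a short simulation, a sketch using only Python's standard library. The true percentage (0.5), the sample size, and the trial count are arbitrary illustrative choices, not figures from the article:

```python
import math
import random

# Repeatedly poll n respondents from a population whose true "yes" share
# is 0.5, and count how often the observed percentage falls within about
# two standard errors (the 95% margin of error) of the truth.
random.seed(42)          # fixed seed so the sketch is reproducible

p_true = 0.5             # true (normally unknown) population percentage
n = 1000                 # respondents per hypothetical poll
trials = 2000            # number of repeated polls

se = math.sqrt(p_true * (1 - p_true) / n)  # standard error of the sample percentage
moe = 1.96 * se                            # 95% margin of error

inside = 0
for _ in range(trials):
    yes = sum(random.random() < p_true for _ in range(n))
    if abs(yes / n - p_true) <= moe:
        inside += 1

# The observed fraction should be close to 0.95.
print(f"fraction of polls within the margin of error: {inside / trials:.3f}")
```

With other seeds the observed fraction fluctuates around 0.95, as the 68–95–99.7 rule predicts.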
Generally, at a confidence level <math>\gamma</math>, a sample of size <math>n</math> of a population having expected standard deviation <math>\sigma</math> has a margin of error :<math>MOE_\gamma = z_\gamma \times \sqrt{\frac{\sigma^2}{n}}</math> where <math>z_\gamma </math> denotes the ''quantile'' (also, commonly, a ''[[z-score]]''), and <math>\sqrt{\frac{\sigma^2}{n}}</math> is the [[standard error]]. == Standard deviation and standard error == We would expect the average of normally distributed values <math>p_1,p_2,\ldots</math> to have a standard deviation which shrinks as <math>n</math> grows: the smaller <math>n</math>, the wider the margin. This is called the standard error <math>\sigma_\overline{p}</math>. For the single result from our survey, we ''assume'' that <math>p = \overline{p}</math>, and that ''all'' subsequent results <math>p_1,p_2,\ldots</math> together would have a variance <math>\sigma_{P}^2=P(1-P)</math>. : <math> \text{Standard error} = \sigma_\overline{p} \approx \sqrt{\frac{\sigma_{P}^2}{n}} \approx \sqrt{\frac{p(1-p)}{n}}</math> Note that <math>p(1-p)</math> corresponds to the variance of a [[Bernoulli distribution]]. == Maximum margin of error at different confidence levels == [[File:Empirical Rule.PNG|thumb|350px]]For a confidence ''level'' <math>\gamma</math>, there is a corresponding confidence ''interval'' about the mean <math>\mu\plusmn z_\gamma\sigma</math>, that is, the interval <math>[\mu-z_\gamma\sigma,\mu+z_\gamma\sigma]</math> within which values of <math>P</math> should fall with probability <math>\gamma</math>. Precise values of <math>z_\gamma</math> are given by the [[Normal distribution#Quantile function|quantile function of the normal distribution]] (which the 68–95–99.7 rule approximates). Note that <math>z_\gamma</math> is undefined for <math>|\gamma| \ge 1</math>, that is, <math>z_{1.00}</math> is undefined, as is <math>z_{1.10}</math>. {| class="wikitable" style="text-align:left;margin-left:24pt;border:none;" |- !
<math>\gamma</math> ! <math>z_{\gamma}</math> | rowspan="8" style="border:none;" | ! <math>\gamma</math> ! <math>z_{\gamma}</math> |- | 0.84 | {{val|0.994457883210}} | 0.9995 | {{val|3.290526731492}} |- | 0.95 | {{val|1.644853626951}} | 0.99995 | {{val|3.890591886413}} |- | 0.975 | [[1.96|1.959963984540]] | 0.999995 | {{val|4.417173413469}} |- | 0.99 | {{val|2.326347874041}} | 0.9999995 | {{val|4.891638475699}} |- | 0.995 | {{val|2.575829303549}} | 0.99999995 | {{val|5.326723886384}} |- | 0.9975 | {{val|2.807033768344}} | 0.999999995 | {{val|5.730728868236}} |- | 0.9985 | {{val|2.967737925342}} | 0.9999999995 | {{val|6.109410204869}} |} [[File:Margin of error vs sample size and confidence level.svg|thumb|250px|Log-log graphs of <math>MOE_{\gamma}(0.5)</math> vs sample size ''n'' and confidence level ''Ξ³''. The arrows show that the maximum margin error for a sample size of 1000 is Β±3.1% at 95% confidence level, and Β±4.1% at 99%.<br/>The inset parabola <math>\sigma_p^2 = p-p^2</math> illustrates the relationship between <math>\sigma_p^2</math> at <math>p=0.71</math> and <math>\sigma^2_{max}</math> at <math>p=0.5</math>. In the example, ''MOE''<sub>95</sub>(0.71) β {{nowrap|0.9 Γ Β±3.1%}} β Β±2.8%.]] Since <math>\max \sigma_P^2 = \max P(1-P) = 0.25</math> at <math>p = 0.5</math>, we can arbitrarily set <math>p=\overline{p} = 0.5</math>, calculate <math>\sigma_{P}</math>, <math>\sigma_\overline{p}</math>, and <math>z_\gamma\sigma_\overline{p}</math> to obtain the ''maximum'' margin of error for <math>P</math> at a given confidence level <math>\gamma</math> and sample size <math>n</math>, even before having actual results. 
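The quantiles in the table and the maximum margin of error can be reproduced with the standard library's <code>statistics.NormalDist</code> (a sketch; note that the familiar 1.96 used for a two-sided 95% confidence level is the 0.975 quantile, since 2.5% of probability lies in each tail):

```python
from math import sqrt
from statistics import NormalDist

def z_quantile(gamma: float) -> float:
    """Quantile z_gamma of the standard normal distribution."""
    return NormalDist().inv_cdf(gamma)

def max_margin_of_error(n: int, z: float) -> float:
    """Maximum margin of error: set p = 0.5, so that sigma_P^2 = 0.25."""
    return z * sqrt(0.25 / n)

print(z_quantile(0.975))                # ~1.95996 (matches the table value)
print(z_quantile(0.995))                # ~2.57583
print(max_margin_of_error(1013, 1.96))  # ~0.031, i.e. about 3.1% for n = 1013
```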
With <math>p=0.5,n=1013</math> :<math>MOE_{95}(0.5) = z_{0.95}\sigma_\overline{p} \approx z_{0.95}\sqrt{\frac{\sigma_{P}^2}{n}} = 1.96\sqrt{\frac{.25}{n}} = 0.98/\sqrt{n}=\plusmn3.1%</math> :<math>MOE_{99}(0.5) = z_{0.99}\sigma_\overline{p} \approx z_{0.99}\sqrt{\frac{\sigma_{P}^2}{n}} = 2.58\sqrt{\frac{.25}{n}} = 1.29/\sqrt{n}=\plusmn4.1%</math> Also, usefully, for any reported <math>MOE_{95}</math> :<math>MOE_{99} = \frac{z_{0.99}}{z_{0.95}}MOE_{95} \approx 1.3 \times MOE_{95}</math> == Specific margins of error == If a poll has multiple percentage results (for example, a poll measuring a single multiple-choice preference), the result closest to 50% will have the highest margin of error. Typically, it is this number that is reported as the margin of error for the entire poll. Imagine poll <math>P</math> reports <math>p_{a},p_{b},p_{c}</math> as <math>71%, 27%, 2%, n=1013</math> :<math>MOE_{95}(P_{a}) = z_{0.95}\sigma_\overline{p_{a}} \approx 1.96\sqrt{\frac{p_{a}(1-p_{a})}{n}} = 0.89/\sqrt{n}=\plusmn2.8%</math> (as in the figure above) :<math>MOE_{95}(P_{b}) = z_{0.95}\sigma_\overline{p_{b}} \approx 1.96\sqrt{\frac{p_{b}(1-p_{b})}{n}} = 0.87/\sqrt{n}=\plusmn2.7%</math> :<math>MOE_{95}(P_{c}) = z_{0.95}\sigma_\overline{p_{c}} \approx 1.96\sqrt{\frac{p_{c}(1-p_{c})}{n}} = 0.27/\sqrt{n}=\plusmn0.8%</math> As a given percentage approaches the extremes of 0% or 100%, its margin of error approaches Β±0%. == Comparing percentages == Imagine multiple-choice poll <math>P</math> reports <math>p_{a},p_{b},p_{c}</math> as <math>46%, 42%, 12%, n=1013</math>. As described above, the margin of error reported for the poll would typically be <math>MOE_{95}(P_{a})</math>, as <math>p_{a}</math> is closest to 50%. The popular notion of ''statistical tie'' or ''statistical dead heat,'' however, concerns itself not with the accuracy of the individual results, but with that of the ''ranking'' of the results. Which is in first? 
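The specific margins worked out above can be checked numerically (a sketch; 1.96 stands in for the exact 95% quantile):

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a reported share p from a sample of n respondents."""
    return z * sqrt(p * (1 - p) / n)

n = 1013
for p in (0.71, 0.27, 0.02):
    print(f"p = {p:.0%}: MOE_95 = +/-{margin_of_error(p, n):.2%}")
# The result closest to 50% carries the largest margin of error.
```

With exact arithmetic the last value is about ±0.86%; the figure of ±0.8% above comes from rounding the coefficient to 0.27 before dividing by <math>\sqrt{n}</math>.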
If, hypothetically, we were to conduct a poll <math>P</math> over subsequent samples of <math>n</math> respondents (newly drawn from <math>N</math>), and report the result <math>p_{w} = p_{a} - p_{b}</math>, we could use the ''standard error of difference'' to understand how <math> p_{w_{1}},p_{w_{2}},p_{w_{3}},\ldots</math> is expected to fall about <math> \overline{p_w}</math>. For this, we need to apply the ''sum of variances'' to obtain a new variance, <math> \sigma_{P_{w}}^2 </math>, :<math> \sigma_{P_{w}}^2=\sigma_{P_{a}- P_{b}}^2 = \sigma_{P_{a}}^2 + \sigma_{P_{b}}^2-2\sigma_{P_{a},P_{b}} = p_{a}(1-p_{a}) + p_{b}(1-p_{b}) + 2p_{a}p_{b} </math> where <math>\sigma_{P_{a},P_{b}} = -P_{a}P_{b}</math> is the [[covariance]] of <math>P_{a}</math> and <math>P_{b}</math>. Thus (after simplifying), :<math> \text{Standard error of difference} = \sigma_{\overline{w}} \approx \sqrt{\frac{\sigma_{P_{w}}^2}{n}} = \sqrt{\frac{p_{a}+p_{b}-(p_{a}-p_{b})^2}{n}} = 0.029, P_{w}=P_{a}-P_{b}</math> :<math> MOE_{95}(P_{a}) = z_{0.95}\sigma_{\overline{p_{a}}} \approx \plusmn{3.1%}</math> :<math> MOE_{95}(P_{w}) = z_{0.95}\sigma_{\overline{w}} \approx \plusmn{5.8%}</math> Note that this assumes that <math>P_{c}</math> is close to constant, that is, respondents choosing either A or B would almost never choose C (making <math>P_{a}</math> and <math>P_{b}</math> close to ''perfectly negatively correlated''). With three or more choices in closer contention, choosing a correct formula for <math> \sigma_{P_{w}}^2</math> becomes more complicated. == Effect of finite population size == The formulae above for the margin of error assume that there is an infinitely large population and thus do not depend on the size of population <math>N</math>, but only on the sample size <math>n</math>. According to [[Sampling (statistics)|sampling theory]], this assumption is reasonable when the [[sampling fraction]] is small. 
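The standard error of difference derived above can be verified in a few lines (a sketch using the same example figures, <math>p_{a} = 46\%</math>, <math>p_{b} = 42\%</math>, <math>n = 1013</math>):

```python
from math import sqrt

p_a, p_b, n, z = 0.46, 0.42, 1013, 1.96

# Variance of the difference P_a - P_b, with cov(P_a, P_b) = -p_a * p_b:
var_w = p_a * (1 - p_a) + p_b * (1 - p_b) + 2 * p_a * p_b
# ...which simplifies to p_a + p_b - (p_a - p_b)**2.

se_w = sqrt(var_w / n)                        # standard error of difference
print(f"SE of difference: {se_w:.3f}")        # ~0.029
print(f"MOE_95(P_w): +/-{z * se_w:.1%}")      # ~+/-5.8%
```

So a 4-point lead falls inside the ±5.8% margin for the ''difference'', the sense in which such a race is a "statistical tie", even though each individual share carries a margin of only about ±3.1%.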
The margin of error for a particular sampling method is essentially the same regardless of whether the population of interest is the size of a school, city, state, or country, as long as the sampling ''fraction'' is small. In cases where the sampling fraction is larger (in practice, greater than 5%), analysts might adjust the margin of error using a [[finite population correction]] to account for the added precision gained by sampling a much larger percentage of the population. FPC can be calculated using the formula<ref>{{cite journal|last=Isserlis|first=L.|year=1918|title=On the value of a mean as calculated from a sample|url=https://zenodo.org/record/1449486|journal=Journal of the Royal Statistical Society|publisher=Blackwell Publishing|volume=81|issue=1|pages=75β81|doi=10.2307/2340569|jstor=2340569}} (Equation 1)</ref> :<math>\operatorname{FPC} = \sqrt{\frac{N-n}{N-1}}</math> ...and so, if poll <math>P</math> were conducted over 24% of, say, an electorate of 300,000 voters, :<math>MOE_{95}(0.5) = z_{0.95}\sigma_\overline{p} \approx \frac{0.98}{\sqrt{72,000}}=\plusmn0.4%</math> :<math>MOE_{95_{FPC}}(0.5) = z_{0.95}\sigma_\overline{p}\sqrt{\frac{N-n}{N-1}}\approx \frac{0.98}{\sqrt{72,000}}\sqrt{\frac{300,000-72,000}{300,000-1}}=\plusmn0.3%</math> Intuitively, for appropriately large <math>N</math>, :<math>\lim_{n \to 0} \sqrt{\frac{N-n}{N-1}}\approx 1</math> :<math>\lim_{n \to N} \sqrt{\frac{N-n}{N-1}} = 0</math> In the former case, <math>n</math> is so small as to require no correction. In the latter case, the poll effectively becomes a census and sampling error becomes moot. == See also == * [[Engineering tolerance]] * [[Key relevance]] * [[Measurement uncertainty]] * [[Random error]] == References == {{Reflist}} == Sources == * Sudman, Seymour and Bradburn, Norman (1982). ''Asking Questions: A Practical Guide to Questionnaire Design''. San Francisco: Jossey Bass. 
{{ISBN|0-87589-546-8}} * {{cite book |title=Introductory Statistics| edition=5th| author1=Wonnacott, T.H. |author2=R.J. Wonnacott| publisher=Wiley |date=1990 |isbn=0-471-61518-8 }} == External links == {{Wikibooks}} * {{springer|title=Errors, theory of|id=p/e036240}} * {{mathworld | urlname = MarginofError | title = Margin of Error}} [[Category:Error]] [[Category:Measurement]] [[Category:Sampling (statistics)]] [[Category:Statistical deviation and dispersion]] [[Category:Statistical intervals]]