=== Examples of how naïve interpretation of confidence intervals can be problematic ===

==== Confidence procedure for uniform location ====

[[File:Welch and Bayes intervals.svg|thumb|Ten examples of the 50% Welch and Bayesian intervals are shown in contrasting white and gray rows. The examples are sorted top-to-bottom by decreasing distance between <math>X_1</math> and <math>X_2</math>.]]

Welch<ref>{{cite journal |last=Welch |first=B. L. |date=1939 |title=On Confidence Limits and Sufficiency, with Particular Reference to Parameters of Location |jstor=2235987 |journal=The Annals of Mathematical Statistics |volume=10 |issue=1 |pages=58–69 |doi=10.1214/aoms/1177732246 |doi-access=free}}</ref> presented an example which clearly shows the difference between the theory of confidence intervals and other theories of interval estimation (including Fisher's [[Fiducial inference|fiducial]] intervals and objective [[Bayesian inference|Bayesian]] intervals). Robinson<ref>{{cite journal |last=Robinson |first=G. K. |date=1975 |title=Some Counterexamples to the Theory of Confidence Intervals |jstor=2334498 |journal=Biometrika |volume=62 |issue=1 |pages=155–161 |doi=10.2307/2334498}}</ref> called this example "[p]ossibly the best known counterexample for Neyman's version of confidence interval theory." To Welch, it showed the superiority of confidence interval theory; to critics of the theory, it shows a deficiency. Here we present a simplified version.

Suppose that <math>X_1,X_2</math> are independent observations from a [[Continuous uniform distribution|uniform]] <math>(\theta - 1/2, \theta + 1/2)</math> distribution. Then the optimal 50% confidence procedure for <math>\theta</math> is<ref>{{cite journal |last=Pratt |first=J. W. |date=1961 |title=Book Review: Testing Statistical Hypotheses. by E. L. Lehmann |jstor=2282344 |journal=Journal of the American Statistical Association |volume=56 |issue=293 |pages=163–167 |doi=10.1080/01621459.1961.10482103}}</ref>

: <math>\bar{X} \pm \begin{cases} \dfrac{|X_1-X_2|}{2} & \text{if } |X_1-X_2| < 1/2 \\[8pt] \dfrac{1-|X_1-X_2|}{2} & \text{if } |X_1-X_2| \geq 1/2. \end{cases}</math>

A fiducial or objective Bayesian argument can be used to derive the interval estimate

: <math>\bar{X} \pm \frac{1-|X_1-X_2|}{4},</math>

which is also a 50% confidence procedure. Welch showed that the first confidence procedure dominates the second, according to desiderata from confidence interval theory: for every <math>\theta_1\neq\theta</math>, the probability that the first procedure contains <math>\theta_1</math> is ''less than or equal to'' the probability that the second procedure contains <math>\theta_1</math>, and the average width of the intervals from the first procedure is less than that of the second. Hence, the first procedure is preferred under classical confidence interval theory.

However, when <math>|X_1-X_2| \geq 1/2</math>, intervals from the first procedure are ''guaranteed'' to contain the true value <math>\theta</math>; the nominal 50% confidence coefficient is therefore unrelated to the uncertainty we should have that a specific interval contains the true value. The second procedure does not have this property. Moreover, when the first procedure generates a very short interval, this indicates only that <math>X_1,X_2</math> are very close together, and hence that the data offer no more information than a single data point; yet the first interval will exclude almost all reasonable values of the parameter due to its short width. The second procedure does not have this property either. The two counter-intuitive properties of the first procedure – 100% [[Coverage probability|coverage]] when <math>X_1,X_2</math> are far apart and almost 0% coverage when <math>X_1,X_2</math> are close together – balance out to yield 50% coverage on average.
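The coverage claims above are easy to check by simulation. The following is a minimal sketch in Python (standard library only; the true value <math>\theta</math>, the seed, and the number of replications are arbitrary choices for illustration):

```python
import random

random.seed(1)
theta = 0.0  # true location; coverage does not depend on its value
N = 100_000

covered1 = covered2 = 0          # coverage counts for the two procedures
far_apart = far_covered1 = 0     # samples with |X1 - X2| >= 1/2

for _ in range(N):
    x1 = random.uniform(theta - 0.5, theta + 0.5)
    x2 = random.uniform(theta - 0.5, theta + 0.5)
    xbar = (x1 + x2) / 2
    d = abs(x1 - x2)

    # First (optimal) procedure: half-width depends on |X1 - X2|
    h1 = d / 2 if d < 0.5 else (1 - d) / 2
    # Second (fiducial / objective Bayesian) procedure
    h2 = (1 - d) / 4

    if abs(xbar - theta) <= h1:
        covered1 += 1
    if abs(xbar - theta) <= h2:
        covered2 += 1
    if d >= 0.5:
        far_apart += 1
        if abs(xbar - theta) <= h1:
            far_covered1 += 1

print(covered1 / N)              # close to 0.50
print(covered2 / N)              # close to 0.50
print(far_covered1 / far_apart)  # exactly 1.0: guaranteed coverage
```

Both procedures cover the true value in about half of the replications, but the first procedure covers it in ''every'' replication with <math>|X_1-X_2| \geq 1/2</math>, illustrating that its nominal 50% confidence holds only on average over samples.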
However, despite the first procedure being optimal, its intervals offer neither an assessment of the precision of the estimate nor an assessment of the uncertainty one should have that the interval contains the true value. This example is used to argue against naïve interpretations of confidence intervals: if a confidence procedure is asserted to have properties beyond its nominal coverage (such as a relation to precision, or a relationship with Bayesian inference), those properties must be proved; they do not follow from the fact that a procedure is a confidence procedure.

==== Confidence procedure for ''ω''<sup>2</sup> ====

Steiger<ref>{{cite journal |last=Steiger |first=J. H. |date=2004 |title=Beyond the F test: Effect size confidence intervals and tests of close fit in the analysis of variance and contrast analysis |journal=Psychological Methods |volume=9 |issue=2 |pages=164–182 |doi=10.1037/1082-989x.9.2.164 |pmid=15137887}}</ref> suggested a number of confidence procedures for common [[Effect size#Omega-squared (ω2)|effect size]] measures in [[Analysis of variance|ANOVA]]. Morey et al.<ref name=Morey /> point out that several of these confidence procedures, including the one for ''ω''<sup>2</sup>, have the property that as the ''F'' statistic becomes increasingly small, indicating misfit with all possible values of ''ω''<sup>2</sup>, the confidence interval shrinks and can even contain only the single value ''ω''<sup>2</sup> = 0; that is, the CI becomes infinitesimally narrow (this occurs when <math>p\geq1-\alpha/2</math> for a <math>100(1-\alpha)\%</math> CI).

This behavior is consistent with the relationship between the confidence procedure and [[Statistical hypothesis testing|significance testing]]: when ''F'' becomes so small that the group means are much closer together than we would expect by chance, a significance test might indicate rejection for most or all values of ''ω''<sup>2</sup>. Hence the interval will be very narrow or even empty (or, by a convention suggested by Steiger, contain only 0). However, this does ''not'' indicate that the estimate of ''ω''<sup>2</sup> is very precise; in a sense, it indicates the opposite: that the trustworthiness of the results themselves may be in doubt. This is contrary to the common interpretation of confidence intervals as revealing the precision of the estimate.
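The degenerate interval can be reproduced by numerically inverting the noncentral ''F'' distribution: one finds the noncentrality parameters <math>\lambda</math> at which the observed ''F'' sits at the <math>1-\alpha/2</math> and <math>\alpha/2</math> quantiles, truncating at zero when no such <math>\lambda \geq 0</math> exists. The sketch below (Python with SciPy) reports the interval for <math>\lambda</math>, which maps monotonically to ''ω''<sup>2</sup> for a fixed design; the degrees of freedom and observed ''F'' values are hypothetical choices for illustration:

```python
# Steiger-style CI for the noncentrality parameter lambda of the F statistic.
# df1, df2, and the observed F values below are hypothetical illustrations.
from scipy import stats
from scipy.optimize import brentq

def lambda_ci(f_obs, df1, df2, alpha=0.05, lam_max=1000.0):
    """Invert the noncentral-F CDF for a 100(1-alpha)% CI on lambda."""
    def cdf(lam):
        # lam = 0 reduces to the central F distribution
        if lam > 0:
            return stats.ncf.cdf(f_obs, df1, df2, lam)
        return stats.f.cdf(f_obs, df1, df2)

    # Lower limit: lambda at which f_obs is the (1 - alpha/2) quantile;
    # 0 if even lambda = 0 puts f_obs below that quantile.
    lo = 0.0
    if cdf(0.0) > 1 - alpha / 2:
        lo = brentq(lambda lam: cdf(lam) - (1 - alpha / 2), 0.0, lam_max)
    # Upper limit: lambda at which f_obs is the alpha/2 quantile;
    # 0 if cdf(0) < alpha/2, i.e. p >= 1 - alpha/2: the CI collapses to {0}.
    hi = 0.0
    if cdf(0.0) > alpha / 2:
        hi = brentq(lambda lam: cdf(lam) - alpha / 2, 0.0, lam_max)
    return lo, hi

df1, df2 = 3, 36  # e.g. a hypothetical one-way ANOVA: 4 groups, n = 10 each

lo, hi = lambda_ci(5.0, df1, df2)    # moderate F: ordinary two-sided CI
print(lo, hi)
lo0, hi0 = lambda_ci(0.05, df1, df2)  # tiny F, p >= 1 - alpha/2
print(lo0, hi0)                       # (0.0, 0.0): the CI contains only 0
```

For the tiny observed ''F'', the two-sided p-value condition <math>p\geq1-\alpha/2</math> holds and both limits are zero, matching the collapse described above, even though such an ''F'' casts doubt on the model rather than pinpointing ''ω''<sup>2</sup> precisely.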