===Case of simple hypotheses===
{{Main|Neyman–Pearson lemma}}

A simple-vs.-simple hypothesis test has completely specified models under both the null hypothesis and the alternative hypothesis, which for convenience are written in terms of fixed values of a notional parameter <math>\theta</math>:

:<math>
\begin{align}
H_0 &:& \theta=\theta_0 ,\\
H_1 &:& \theta=\theta_1 .
\end{align}
</math>

In this case, under either hypothesis, the distribution of the data is fully specified: there are no unknown parameters to estimate. For this case, a variant of the likelihood-ratio test is available:<ref>{{cite book |last1=Mood |first1=A.M. |last2=Graybill |first2=F.A. |first3=D.C. |last3=Boes |year=1974 |title=Introduction to the Theory of Statistics |edition=3rd |publisher=[[McGraw-Hill]] |at=§9.2}}</ref><ref name="Stuart et al. 20.10–20.13">{{citation |last1=Stuart |first1=A. |last2=Ord |first2=K. |last3=Arnold |first3=S. |year=1999 |title=Kendall's Advanced Theory of Statistics |volume=2A |publisher=[[Edward Arnold (publisher)|Arnold]] |at=§§20.10–20.13}}</ref>

:<math>
\Lambda(x) = \frac{~\mathcal{L}(\theta_0\mid x) ~}{~\mathcal{L}(\theta_1\mid x) ~}.
</math>

Some older references may use the reciprocal of the function above as the definition.<ref>{{citation |author1-last=Cox |author1-first=D. R. |author1-link=David Cox (statistician) |author2-last=Hinkley |author2-first=D. V. |author2-link=David Hinkley |title=Theoretical Statistics |publisher=[[Chapman & Hall]] |year=1974 |isbn=0-412-12420-3 |page=92}}</ref> Thus, the likelihood ratio is small if the alternative model is better than the null model.

The likelihood-ratio test provides the decision rule as follows:

:If <math>~\Lambda > c ~</math>, do not reject <math>H_0</math>;
:If <math>~\Lambda < c ~</math>, reject <math>H_0</math>;
:If <math>~\Lambda = c ~</math>, reject <math>H_0</math> with probability <math>~q~</math>.

The values <math>c</math> and <math>q</math> are usually chosen to obtain a specified [[significance level]] <math>\alpha</math>, via the relation

:<math>~q~\operatorname{P}(\Lambda=c \mid H_0)~+~\operatorname{P}(\Lambda < c \mid H_0)~=~\alpha~.</math>

The [[Neyman–Pearson lemma]] states that this likelihood-ratio test is the [[Statistical power|most powerful]] among all level <math>\alpha</math> tests for this case.<ref name="NeymanPearson1933"/><ref name="Stuart et al. 20.10–20.13"/>
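For illustration, the following sketch applies this decision rule under an assumed setting (not taken from the sources above): i.i.d. normal observations with known unit variance and the hypothetical values <math>\theta_0 = 0</math>, <math>\theta_1 = 1</math>. Because <math>\Lambda</math> is continuous here, <math>\operatorname{P}(\Lambda=c \mid H_0)=0</math>, so the randomization probability <math>q</math> plays no role and <math>c</math> is fixed by <math>\operatorname{P}(\Lambda < c \mid H_0)=\alpha</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Hypothetical setup: X_1, ..., X_n i.i.d. N(theta, 1),
# testing H0: theta = theta_0 = 0 against H1: theta = theta_1 = 1.
rng = np.random.default_rng(0)
theta_0, theta_1 = 0.0, 1.0
x = rng.normal(theta_0, 1.0, size=20)  # data simulated under H0

# Likelihood ratio Lambda(x) = L(theta_0 | x) / L(theta_1 | x),
# computed on the log scale for numerical stability.
log_lambda = (norm.logpdf(x, loc=theta_0, scale=1).sum()
              - norm.logpdf(x, loc=theta_1, scale=1).sum())

# For this model, Lambda(x) < c is equivalent to the sample mean exceeding
# a cutoff, so choosing c to achieve level alpha amounts to rejecting H0
# when mean(x) > theta_0 + z_{1-alpha} / sqrt(n).  Since Lambda is
# continuous, P(Lambda = c | H0) = 0 and the randomization step is unused.
alpha = 0.05
n = len(x)
cutoff = theta_0 + norm.ppf(1 - alpha) / np.sqrt(n)
reject_H0 = x.mean() > cutoff

print(f"log Lambda(x) = {log_lambda:.3f}, reject H0: {reject_H0}")
</syntaxhighlight>

In this normal-mean example the likelihood-ratio rule reduces to a one-sided test on the sample mean, which is the classical consequence of the Neyman–Pearson lemma for monotone likelihood ratios.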