{{Short description|Statistical test that compares goodness of fit}}
{{About|the statistical test that compares goodness of fit|a general description of the likelihood ratio|Likelihood ratio|the use of likelihood ratios in interpreting diagnostic tests|Likelihood ratios in diagnostic testing}}

In [[statistics]], the '''likelihood-ratio test''' is a [[hypothesis test]] that involves comparing the [[goodness of fit]] of two competing [[statistical model]]s, typically one found by [[Mathematical optimization|maximization]] over the entire [[parameter space]] and another found after imposing some [[Constraint (mathematics)|constraint]], based on the ratio of their [[likelihood function|likelihoods]]. If the more constrained model (i.e., the [[null hypothesis]]) is supported by the [[Realization (probability)|observed data]], the two likelihoods should not differ by more than [[sampling error]].<ref>{{cite book |first=Gary |last=King |author-link=Gary King (political scientist) |title=Unifying Political Methodology: The Likelihood Theory of Statistical Inference |location=New York |publisher=Cambridge University Press |year=1989 |isbn=0-521-36697-6 |page=84 |url=https://books.google.com/books?id=cligOwrd7XoC&pg=PA84 }}</ref> The likelihood-ratio test therefore assesses whether this ratio is [[Statistical significance|significantly different]] from one, or equivalently whether its [[natural logarithm]] is significantly different from zero; a standard formulation of this statistic is given below.

The likelihood-ratio test, also known as the '''Wilks test''',<ref>{{cite book |first1=Bing |last1=Li |first2=G. Jogesh |last2=Babu |title=A Graduate Course on Statistical Inference |publisher=Springer |year=2019 |page=331 |isbn=978-1-4939-9759-6 }}</ref> is the oldest of the three classical approaches to hypothesis testing, together with the [[Lagrange multiplier test]] and the [[Wald test]].<ref>{{cite book |first1=G. S. |last1=Maddala |author-link=G. S. Maddala |first2=Kajal |last2=Lahiri |title=Introduction to Econometrics |location=New York |publisher=Wiley |edition=Fourth |year=2010 |page=200 }}</ref> In fact, the latter two can be conceptualized as approximations to the likelihood-ratio test, and are asymptotically equivalent.<ref>{{cite journal |first=A. |last=Buse |title=The Likelihood Ratio, Wald, and Lagrange Multiplier Tests: An Expository Note |journal=[[The American Statistician]] |volume=36 |issue=3a |year=1982 |pages=153–157 |doi=10.1080/00031305.1982.10482817 }}</ref><ref>{{cite book |first=Andrew |last=Pickles |title=An Introduction to Likelihood Analysis |location=Norwich |publisher=W. H. Hutchins & Sons |year=1985 |isbn=0-86094-190-6 |pages=[https://archive.org/details/introductiontoli0000pick/page/24 24–27] |url=https://archive.org/details/introductiontoli0000pick/page/24 }}</ref><ref>{{cite book |first=Thomas A. |last=Severini |title=Likelihood Methods in Statistics |location=New York |publisher=Oxford University Press |year=2000 |isbn=0-19-850650-3 |pages=120–121 }}</ref>

In the case of comparing two models each of which has no unknown [[statistical parameters|parameters]], use of the likelihood-ratio test can be justified by the [[Neyman–Pearson lemma]]. The lemma demonstrates that the test has the highest [[statistical power|power]] among all competitors.<ref name="NeymanPearson1933">{{citation |last1=Neyman |first1=J. |author-link1=Jerzy Neyman |last2=Pearson |first2=E. S. |author-link2=Egon Pearson |doi=10.1098/rsta.1933.0009 |title=On the problem of the most efficient tests of statistical hypotheses |journal=[[Philosophical Transactions of the Royal Society of London A]] |volume=231 |issue=694–706 |pages=289–337 |year=1933 |jstor=91247 |bibcode=1933RSPTA.231..289N |url=http://www.stats.org.uk/statistical-inference/NeymanPearson1933.pdf |doi-access=free }}</ref>
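The following sketch makes the logarithmic form referred to above explicit; the notation (<math>\Theta_0</math> for the constrained parameter set, <math>\Theta</math> for the full parameter space, and <math>\mathcal{L}</math> for the likelihood function) is introduced here for illustration rather than taken from the text above. In the standard formulation, the test statistic is minus twice the natural logarithm of the likelihood ratio:

<math display="block">\lambda_\text{LR} = -2 \ln\left[ \frac{\sup_{\theta \in \Theta_0} \mathcal{L}(\theta)}{\sup_{\theta \in \Theta} \mathcal{L}(\theta)} \right]</math>

Because the numerator maximizes over a subset of the parameter space, the ratio is at most one and <math>\lambda_\text{LR}</math> is nonnegative; values close to zero indicate that the constrained model fits the observed data nearly as well as the unconstrained one, which is the sense in which the two likelihoods "should not differ by more than sampling error".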