===Relative likelihood function===
{{See also|Relative likelihood}}

Since the actual value of the likelihood function depends on the sample, it is often convenient to work with a standardized measure. Suppose that the [[maximum likelihood estimate]] for the parameter {{mvar|θ}} is <math display="inline">\hat{\theta}</math>. Relative plausibilities of other {{mvar|θ}} values may be found by comparing the likelihoods of those other values with the likelihood of <math display="inline">\hat{\theta}</math>. The ''relative likelihood'' of {{mvar|θ}} is defined to be<ref name='Kalbfleisch'>{{citation | author-link= James G. Kalbfleisch | last= Kalbfleisch | first= J. G. | year= 1985 | title= Probability and Statistical Inference | publisher= Springer}} (§9.3).</ref><ref>{{citation| last= Azzalini | first= A. | title= Statistical Inference—Based on the likelihood | year= 1996 | publisher= [[Chapman & Hall]] | url= https://books.google.com/books?id=hyN6gXHvSo0C | isbn= 9780412606502 }} (§1.4.2).</ref><ref name='Sprott'>Sprott, D. A. (2000), ''Statistical Inference in Science'', Springer (chap. 2).</ref><ref>Davison, A. C. (2008), ''Statistical Models'', [[Cambridge University Press]] (§4.1.2).</ref><ref>{{citation|first1= L. | last1= Held | first2= D. S. | last2= Sabanés Bové | title= Applied Statistical Inference—Likelihood and Bayes | year= 2014 | publisher= Springer}} (§2.1).</ref>
<math display="block">R(\theta) = \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat{\theta} \mid x)}.</math>
Thus, the relative likelihood is the likelihood ratio (discussed above) with the fixed denominator <math display="inline">\mathcal{L}(\hat{\theta})</math>. This corresponds to standardizing the likelihood to have a maximum of 1.

====Likelihood region====
A ''likelihood region'' is the set of all values of {{mvar|θ}} whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a ''{{mvar|p}}% likelihood region'' for {{mvar|θ}} is defined to be<ref name='Kalbfleisch'/><ref name='Sprott'/><ref name="Rossi2018">{{citation | last= Rossi | first= R. J. | year= 2018 | title= Mathematical Statistics | publisher= [[Wiley (publisher)|Wiley]] | page= 267 }}.</ref>
<math display="block">\left\{\theta : R(\theta) \ge \frac{p}{100}\right\}.</math>
If {{mvar|θ}} is a single real parameter, a {{mvar|p}}% likelihood region will usually comprise an [[Interval (mathematics)|interval]] of real values. If the region does comprise an interval, then it is called a ''likelihood interval''.<ref name='Kalbfleisch'/><ref name='Sprott'/><ref name=Hudson>{{Citation | last1 = Hudson | first1 = D. J. | title = Interval estimation from the likelihood function | journal = [[Journal of the Royal Statistical Society, Series B]] | volume = 33 | issue = 2 | pages = 256–262 | year = 1971 | doi = 10.1111/j.2517-6161.1971.tb00877.x }}.</ref>

Likelihood intervals, and more generally likelihood regions, are used for [[interval estimation]] within likelihoodist statistics: they are similar to [[confidence interval]]s in frequentist statistics and [[credible interval]]s in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of [[coverage probability]] (frequentism) or [[posterior probability]] (Bayesianism).

Given a model, likelihood intervals can be compared to confidence intervals.
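For example, for a sample of {{mvar|n}} independent observations from a normal distribution with unknown mean {{mvar|μ}} and known variance {{mvar|σ}}<sup>2</sup>, the maximum likelihood estimate is the sample mean <math display="inline">\bar{x}</math> and the relative likelihood reduces to
<math display="block">R(\mu) = \exp\left(-\frac{n(\mu - \bar{x})^2}{2\sigma^2}\right),</math>
so the {{mvar|p}}% likelihood interval is
<math display="block">\bar{x} \pm \frac{\sigma}{\sqrt{n}} \sqrt{2 \ln\frac{100}{p}}.</math>
Taking {{mvar|p}} = 14.65 gives <math display="inline">\bar{x} \pm 1.96\,\sigma/\sqrt{n}</math>, which coincides with the usual 95% confidence interval (see below).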
If {{mvar|θ}} is a single real parameter, then under certain conditions a 14.65% likelihood interval (about 1:7 likelihood) for {{mvar|θ}} will be the same as a 95% confidence interval (19/20 coverage probability).<ref name='Kalbfleisch'/><ref name="Rossi2018"/> In a slightly different formulation suited to the use of log-likelihoods (see [[Wilks' theorem]]), the test statistic is twice the difference in log-likelihoods, and its probability distribution is approximately a [[chi-squared distribution]] with degrees of freedom (df) equal to the difference in df between the two models. Therefore, when the difference in df is 1, the {{mvar|e}}<sup>−2</sup> likelihood interval is the same as the 0.954 confidence interval.<ref name="Rossi2018"/><ref name=Hudson/>
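The correspondence can be made explicit. For a single parameter, <math display="inline">-2\ln R(\theta)</math> is approximately <math display="inline">\chi^2_1</math>-distributed, so a {{mvar|p}}% likelihood interval has approximate coverage probability
<math display="block">\Pr\left(\chi^2_1 \le -2\ln\frac{p}{100}\right).</math>
For {{mvar|p}} = 14.65, <math display="inline">-2\ln 0.1465 \approx 3.84</math>, the 0.95 quantile of <math display="inline">\chi^2_1</math>, giving the 95% confidence level; for the cutoff <math display="inline">R(\theta) \ge e^{-2}</math>, <math display="inline">-2\ln e^{-2} = 4</math> and <math display="inline">\Pr(\chi^2_1 \le 4) \approx 0.954</math>. In the normal-mean example above the correspondence is exact, since <math display="inline">-2\ln R(\mu) = n(\mu - \bar{x})^2/\sigma^2</math> has an exact <math display="inline">\chi^2_1</math> distribution.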