Maximum likelihood estimation
== Properties ==

A maximum likelihood estimator is an [[extremum estimator]] obtained by maximizing, as a function of ''θ'', the [[objective function]] <math>\widehat{\ell\,}(\theta\,;x)</math>. If the data are [[independent and identically distributed]], then we have <math display="block"> \widehat{\ell\,}(\theta\,;x)= \sum_{i=1}^n \ln f(x_i\mid\theta), </math> this being the sample analogue of the expected log-likelihood <math>\ell(\theta) = \operatorname{\mathbb E}[\, \ln f(x_i\mid\theta) \,]</math>, where this expectation is taken with respect to the true density.

Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter value.<ref>{{harvtxt |Pfanzagl |1994 |p=206 }}</ref> However, like other estimation methods, maximum likelihood estimation possesses a number of attractive [[Asymptotic theory (statistics)|limiting properties]]: as the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:
* [[consistency of an estimator|Consistency]]: the sequence of MLEs converges in probability to the value being estimated.
* [[Invariant estimator|Equivariance]]: if <math> \hat{\theta} </math> is the maximum likelihood estimator for <math> \theta </math>, and if <math> g(\theta) </math> is a bijective transform of <math> \theta </math>, then the maximum likelihood estimator for <math> \alpha = g(\theta) </math> is <math> \hat{\alpha} = g(\hat{\theta} ) </math>. The equivariance property can be generalized to non-bijective transforms, although in that case it applies to the maximum of an induced likelihood function, which is not the true likelihood in general.
* [[efficient estimator|Efficiency]], i.e. it achieves the [[Cramér–Rao lower bound]] when the sample size tends to infinity.
This means that no consistent estimator has lower asymptotic [[mean squared error]] than the MLE (or other estimators attaining this bound), which also means that the MLE has [[Local asymptotic normality|asymptotic normality]].
* Second-order efficiency after correction for bias.

=== Consistency ===

Under the conditions outlined below, the maximum likelihood estimator is [[consistent estimator|consistent]]. Consistency means that if the data were generated by <math>f(\cdot\,;\theta_0)</math> and we have a sufficiently large number of observations ''n'', then it is possible to find the value of ''θ''<sub>0</sub> with arbitrary precision. In mathematical terms this means that as ''n'' goes to infinity the estimator <math>\widehat{\theta\,}</math> [[convergence in probability|converges in probability]] to its true value: <math display="block"> \widehat{\theta\,}_\mathrm{mle}\ \xrightarrow{\text{p}}\ \theta_0. </math> Under slightly stronger conditions, the estimator converges [[almost sure convergence|almost surely]] (or ''strongly''): <math display="block"> \widehat{\theta\,}_\mathrm{mle}\ \xrightarrow{\text{a.s.}}\ \theta_0. </math>

In practical applications, data are never generated by <math>f(\cdot\,;\theta_0)</math>. Rather, <math>f(\cdot\,;\theta_0)</math> is a model, often in idealized form, of the process that generated the data. It is a common aphorism in statistics that ''[[all models are wrong]]''. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have.

To establish consistency, the following conditions are sufficient.<ref>By Theorem 2.5 in {{cite book | last1 = Newey | first1 = Whitney K.
| last2 = McFadden | first2 = Daniel | author-link2 = Daniel McFadden | chapter = Chapter 36: Large sample estimation and hypothesis testing | editor1-first= Robert | editor1-last=Engle | editor2-first=Dan | editor2-last=McFadden | title = Handbook of Econometrics, Vol.4 | year = 1994 | publisher = Elsevier Science | pages = 2111–2245 | isbn=978-0-444-88766-5 }}</ref> {{ordered list |1= [[Identifiability|Identification]] of the model: <math display="block"> \theta \neq \theta_0 \quad \Leftrightarrow \quad f(\cdot\mid\theta)\neq f(\cdot\mid\theta_0). </math> In other words, different parameter values ''θ'' correspond to different distributions within the model. If this condition did not hold, there would be some value ''θ''<sub>1</sub> such that ''θ''<sub>0</sub> and ''θ''<sub>1</sub> generate an identical distribution of the observable data. Then we would not be able to distinguish between these two parameters even with an infinite amount of data—these parameters would have been [[observational equivalence|observationally equivalent]]. <br /> The identification condition is absolutely necessary for the ML estimator to be consistent. When this condition holds, the limiting likelihood function ''ℓ''(''θ''{{!}}·) has a unique global maximum at ''θ''<sub>0</sub>. |2= Compactness: the parameter space Θ of the model is [[compact set|compact]]. [[File:Ee noncompactness.svg|240px|right]] The identification condition establishes that the log-likelihood has a unique global maximum. Compactness implies that the likelihood cannot approach the maximum value arbitrarily closely at some other point (as demonstrated for example in the picture on the right). Compactness is only a sufficient condition and not a necessary condition. 
Compactness can be replaced by some other conditions, such as: {{unordered list | both [[Concave function|concavity]] of the log-likelihood function and compactness of some (nonempty) upper [[level set]]s of the log-likelihood function, or | existence of a compact [[Neighbourhood (mathematics)|neighborhood]] {{mvar|N}} of {{mvar|θ}}<sub>0</sub> such that outside of {{mvar|N}} the log-likelihood function is less than the maximum by at least some {{nowrap|{{mvar|ε}} > 0}}. }} |3= Continuity: the function {{math|ln ''f''(''x'' {{!}} ''θ'')}} is continuous in {{mvar|θ}} for almost all values of {{mvar|x}}: <math display="block"> \operatorname{\mathbb P} \Bigl[\; \ln f(x\mid\theta) \;\in\; C^0(\Theta) \;\Bigr] = 1. </math> The continuity here can be replaced with a slightly weaker condition of [[upper semi-continuous|upper semi-continuity]]. |4= Dominance: there exists {{math|''D''(''x'')}} integrable with respect to the distribution {{math|''f''(''x'' {{!}} ''θ''<sub>0</sub>)}} such that <math display="block"> \Bigl|\ln f(x\mid\theta)\Bigr| < D(x) \quad \text{ for all } \theta\in\Theta. </math> By the [[uniform law of large numbers]], the dominance condition together with continuity establish the uniform convergence in probability of the log-likelihood: <math display="block"> \sup_{\theta\in\Theta} \left|\widehat{\ell\,}(\theta\mid x) - \ell(\theta)\,\right|\ \xrightarrow{\text{p}}\ 0. </math> }} The dominance condition can be employed in the case of [[i.i.d.]] observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequence <math>\widehat{\ell\,}(\theta\mid x)</math> is [[stochastic equicontinuity|stochastically equicontinuous]]. 
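Consistency can also be illustrated numerically. In the normal-mean model (an illustrative choice, not taken from the article) the MLE of the mean is the sample average, and it approaches ''θ''<sub>0</sub> as ''n'' grows; a minimal sketch:

```python
import random
import statistics

random.seed(0)
theta0 = 2.5  # true mean of a N(theta0, 1) model (illustrative value)

# For the normal model with known variance, the MLE of the mean is the
# sample average; watch it concentrate around theta0 as n grows.
for n in (10, 1_000, 100_000):
    sample = [random.gauss(theta0, 1.0) for _ in range(n)]
    theta_hat = statistics.fmean(sample)
    print(n, round(theta_hat, 3))
```

This is only a finite-sample picture of the convergence in probability stated above, not a proof.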
If one wants to demonstrate that the ML estimator <math>\widehat{\theta\,}</math> converges to ''θ''<sub>0</sub> [[almost sure convergence|almost surely]], then a stronger condition of uniform convergence almost surely has to be imposed: <math display="block"> \sup_{\theta\in\Theta} \left\|\;\widehat{\ell\,}(\theta\mid x) - \ell(\theta)\;\right\| \ \xrightarrow{\text{a.s.}}\ 0. </math> Additionally, if (as assumed above) the data were generated by <math>f(\cdot\,;\theta_0)</math>, then under certain conditions, it can also be shown that the maximum likelihood estimator [[Convergence in distribution|converges in distribution]] to a normal distribution. Specifically,<ref name=":1">By Theorem 3.3 in {{cite book | last1 = Newey | first1 = Whitney K. | last2 = McFadden | first2 = Daniel | author-link2 = Daniel McFadden | chapter = Chapter 36: Large sample estimation and hypothesis testing | editor1-first= Robert | editor1-last=Engle | editor2-first=Dan | editor2-last=McFadden | title = Handbook of Econometrics, Vol.4 | year = 1994 | publisher = Elsevier Science | pages = 2111–2245 | isbn=978-0-444-88766-5 }}</ref> <math display="block"> \sqrt{n} \left(\widehat{\theta\,}_\mathrm{mle} - \theta_0\right)\ \xrightarrow{d}\ \mathcal{N}\left(0,\, I^{-1}\right) </math> where {{math|''I''}} is the [[Fisher information|Fisher information matrix]].

=== Functional invariance ===

The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators as the corresponding components of the MLE of the complete parameter.
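The equivariance property listed under Properties can be checked directly in a model with closed-form estimators. A minimal sketch (the exponential model and parameter values are illustrative choices, not from the article):

```python
import math
import random
import statistics

random.seed(1)
lam0 = 0.5  # true rate of an Exponential(lam0) model (illustrative value)
sample = [random.expovariate(lam0) for _ in range(10_000)]

# Closed-form MLE for the exponential model: the rate is 1 / sample mean.
lam_hat = 1.0 / statistics.fmean(sample)

# Equivariance: the MLE of mu = g(lambda) = 1/lambda is g(lam_hat),
# which agrees with the direct MLE of the mean (the sample average).
mu_hat = 1.0 / lam_hat
assert math.isclose(mu_hat, statistics.fmean(sample))
```

Here ''g''(''λ'') = 1/''λ'' is bijective on (0, ∞), so the transformed estimate and the directly maximized one coincide.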
Consistent with this, if <math>\widehat{\theta\,}</math> is the MLE for <math>\theta</math>, and if <math>g(\theta)</math> is any transformation of <math>\theta</math>, then the MLE for <math>\alpha=g(\theta)</math> is by definition<ref>{{cite book |first=Shelemyahu |last=Zacks |title=The Theory of Statistical Inference |location=New York |publisher=John Wiley & Sons |year=1971 |isbn=0-471-98103-6 |page=223 }}</ref> <math display="block">\widehat{\alpha} = g(\,\widehat{\theta\,}\,). \,</math> It maximizes the so-called [[Likelihood function#Profile likelihood|profile likelihood]]: <math display="block">\bar{L}(\alpha) = \sup_{\theta: \alpha = g(\theta)} L(\theta). \, </math>

The MLE is also equivariant with respect to certain transformations of the data. If <math>y=g(x)</math> where <math>g</math> is one-to-one and does not depend on the parameters to be estimated, then the density functions satisfy <math display="block">f_Y(y) = f_X(g^{-1}(y)) \, |(g^{-1}(y))^{\prime}| </math> and hence the likelihood functions for <math>X</math> and <math>Y</math> differ only by a factor that does not depend on the model parameters. For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. Indeed, if <math>X\sim\mathcal{N}(0, 1)</math>, then <math>Y=g(X)=e^{X} </math> follows a [[log-normal distribution]], and the density of ''Y'' follows from the formula above with <math> f_X</math> the standard [[Normal distribution|normal]] density, <math>g^{-1}(y) = \log(y) </math>, and <math>|(g^{-1}(y))^{\prime}| = \frac{1}{y} </math> for <math> y > 0</math>.

=== Efficiency ===

As assumed above, if the data were generated by <math>~f(\cdot\,;\theta_0)~,</math> then under certain conditions, it can also be shown that the maximum likelihood estimator [[Convergence in distribution|converges in distribution]] to a normal distribution.
It is {{sqrt|''n'' }}-consistent and asymptotically efficient, meaning that it reaches the [[Cramér–Rao bound]]. Specifically,<ref name=":1"/> <math display="block"> \sqrt{n\,} \, \left( \widehat{\theta\,}_\text{mle} - \theta_0 \right)\ \ \xrightarrow{d}\ \ \mathcal{N} \left( 0,\ \mathcal{I}^{-1} \right) ~, </math> where <math>~\mathcal{I}~</math> is the [[Fisher information matrix]]: <math display="block"> \mathcal{I}_{jk} = \operatorname{\mathbb E} \, \biggl[ \; -{ \frac{\partial^2\ln f_{\theta_0}(X_t)}{\partial\theta_j\,\partial\theta_k } } \; \biggr] ~. </math> In particular, it means that the [[bias of an estimator|bias]] of the maximum likelihood estimator is equal to zero up to the order {{sfrac|1|{{sqrt|{{mvar|n}} }}}}.

=== Second-order efficiency after correction for bias ===

However, when we consider the higher-order terms in the [[Edgeworth expansion|expansion]] of the distribution of this estimator, it turns out that {{math|''θ''<sub>mle</sub>}} has bias of order {{frac|1|{{mvar|n}}}}. This bias is equal to (componentwise)<ref>See formula 20 in {{cite journal | last1 = Cox | first1 = David R. | author-link1=David R. Cox | last2 = Snell | first2 = E.
Joyce | author-link2 = Joyce Snell | title = A general definition of residuals | year = 1968 | journal = [[Journal of the Royal Statistical Society, Series B]] | pages = 248–275 | jstor = 2984505 | volume=30 | issue = 2 }} </ref> <math display="block"> b_h \; \equiv \; \operatorname{\mathbb E} \biggl[ \; \left( \widehat\theta_\mathrm{mle} - \theta_0 \right)_h \; \biggr] \; = \; \frac{1}{\,n\,} \, \sum_{i, j, k = 1}^m \; \mathcal{I}^{h i} \; \mathcal{I}^{j k} \left( \frac{1}{\,2\,} \, K_{i j k} \; + \; J_{j,i k} \right) </math> where <math>\mathcal{I}^{j k}</math> (with superscripts) denotes the (''j,k'')-th component of the ''inverse'' Fisher information matrix <math>\mathcal{I}^{-1}</math>, and <math display="block"> \frac{1}{\,2\,} \, K_{i j k} \; + \; J_{j,i k} \; = \; \operatorname{\mathbb E}\,\biggl[\; \frac12 \frac{\partial^3 \ln f_{\theta_0}(X_t)}{\partial\theta_i\;\partial\theta_j\;\partial\theta_k} + \frac{\;\partial\ln f_{\theta_0}(X_t)\;}{\partial\theta_j}\,\frac{\;\partial^2\ln f_{\theta_0}(X_t)\;}{\partial\theta_i \, \partial\theta_k} \; \biggr] ~ . </math> Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, and ''correct'' for that bias by subtracting it: <math display="block"> \widehat{\theta\,}^*_\text{mle} = \widehat{\theta\,}_\text{mle} - \widehat{b\,} ~ . </math> This estimator is unbiased up to the terms of order {{sfrac|1| {{mvar|n}} }}, and is called the '''bias-corrected maximum likelihood estimator'''. This bias-corrected estimator is {{em|second-order efficient}} (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order {{sfrac|1| {{mvar|n}}<sup>2</sup> }} . It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. 
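A concrete special case (an illustrative example, not from the article): the MLE of the variance of a normal model has bias exactly −''σ''²/''n'', and removing the first-order bias amounts to the familiar ''n''/(''n''−1) rescaling, which for this model happens to be exactly unbiased. A minimal simulation sketch:

```python
import random
import statistics

random.seed(2)
sigma2 = 4.0   # true variance of a N(0, sigma2) model (illustrative value)
n, reps = 10, 20_000

mle_vals, corrected_vals = [], []
for _ in range(reps):
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    m = statistics.fmean(xs)
    s2_mle = sum((x - m) ** 2 for x in xs) / n   # MLE of the variance
    mle_vals.append(s2_mle)                      # biased: E = sigma2*(n-1)/n
    corrected_vals.append(s2_mle * n / (n - 1))  # bias-corrected estimator

print(round(statistics.fmean(mle_vals), 2), round(statistics.fmean(corrected_vals), 2))
```

Averaged over many repetitions, the raw MLE concentrates around ''σ''²(''n''−1)/''n'' while the corrected estimator concentrates around ''σ''².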
However, the maximum likelihood estimator is ''not'' third-order efficient.<ref> {{cite journal |last = Kano |first = Yutaka |title = Third-order efficiency implies fourth-order efficiency |year = 1996 |journal = Journal of the Japan Statistical Society |volume = 26 |pages = 101–117 |doi = 10.14490/jjss1995.26.101 |doi-access= free }} </ref>

=== Relation to Bayesian inference ===

A maximum likelihood estimator coincides with the [[Maximum a posteriori|most probable]] [[Bayesian estimator]] given a [[Uniform distribution (continuous)|uniform]] [[prior probability|prior distribution]] on the [[parameter space|parameters]]. Indeed, the [[maximum a posteriori estimate]] is the parameter {{mvar|θ}} that maximizes the probability of {{mvar|θ}} given the data, given by Bayes' theorem: <math display="block"> \operatorname{\mathbb P}(\theta\mid x_1,x_2,\ldots,x_n) = \frac{f(x_1,x_2,\ldots,x_n\mid\theta)\operatorname{\mathbb P}(\theta)}{\operatorname{\mathbb P}(x_1,x_2,\ldots,x_n)} </math> where <math>\operatorname{\mathbb P}(\theta)</math> is the prior distribution for the parameter {{mvar|θ}} and where <math>\operatorname{\mathbb P}(x_1,x_2,\ldots,x_n)</math> is the probability of the data averaged over all parameters. Since the denominator is independent of {{mvar|θ}}, the Bayesian estimator is obtained by maximizing <math>f(x_1,x_2,\ldots,x_n\mid\theta)\operatorname{\mathbb P}(\theta)</math> with respect to {{mvar|θ}}. If we further assume that the prior <math>\operatorname{\mathbb P}(\theta)</math> is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood function <math>f(x_1,x_2,\ldots,x_n\mid\theta)</math>. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distribution <math>\operatorname{\mathbb P}(\theta)</math>.
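This coincidence is easy to verify numerically over a parameter grid. A minimal sketch (the Bernoulli model and the counts are illustrative choices, not from the article):

```python
import math

heads, n = 7, 10  # illustrative coin-flip data

def log_likelihood(p):
    return heads * math.log(p) + (n - heads) * math.log(1.0 - p)

grid = [i / 1000 for i in range(1, 1000)]

# With a uniform prior on (0, 1), the log-posterior equals the
# log-likelihood plus a constant, so the maximizers coincide.
log_prior = 0.0  # log of a flat density
mle = max(grid, key=log_likelihood)
map_estimate = max(grid, key=lambda p: log_likelihood(p) + log_prior)
assert mle == map_estimate
```

With any non-flat prior the two estimates would generally differ, since the prior term would then vary with the parameter.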
==== Application of maximum-likelihood estimation in Bayes decision theory ====

In many practical applications in [[machine learning]], maximum-likelihood estimation is used as the model for parameter estimation. Bayes decision theory is about designing a classifier that minimizes the total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier minimizes the error over the whole distribution.<ref>{{cite web |last=Christensen |first=Henrik I. |title=Pattern Recognition |publisher=Georgia Tech |series=Bayesian Decision Theory - CS 7616 |url=https://www.cc.gatech.edu/~hic/CS7616/pdf/lecture2.pdf |type=lecture}}</ref>

Thus, the Bayes decision rule is stated as

:"decide <math>\;w_1\;</math> if <math>~\operatorname{\mathbb P}(w_1|x) \; > \; \operatorname{\mathbb P}(w_2|x)~;~</math> otherwise decide <math>\;w_2\;</math>"

where <math>\;w_1\,, w_2\;</math> are predictions of different classes. From the perspective of minimizing error, it can also be stated as <math display="block">w = \underset{ w }{\operatorname{arg\;min}} \; \int_{-\infty}^\infty \operatorname{\mathbb P}(\text{ error}\mid x)\operatorname{\mathbb P}(x)\,\operatorname{d}x~</math> where <math display="block">\operatorname{\mathbb P}(\text{ error}\mid x) = \operatorname{\mathbb P}(w_1\mid x)~</math> if we decide <math>\;w_2\;</math> and <math>\;\operatorname{\mathbb P}(\text{ error}\mid x) = \operatorname{\mathbb P}(w_2\mid x)\;</math> if we decide <math>\;w_1\;.</math> By applying [[Bayes' theorem]] <math display="block">\operatorname{\mathbb P}(w_i \mid x) = \frac{\operatorname{\mathbb P}(x \mid w_i) \operatorname{\mathbb P}(w_i)}{\operatorname{\mathbb P}(x)},</math> and if we further assume the zero-one loss function, which assigns the same loss to all errors, the Bayes decision rule can be reformulated as <math display="block">h_\text{Bayes} = \underset{ w }{\operatorname{arg\;max}} \, \bigl[\, \operatorname{\mathbb P}(x\mid
w)\,\operatorname{\mathbb P}(w) \,\bigr]\;,</math> where <math>h_\text{Bayes}</math> is the prediction and <math>\;\operatorname{\mathbb P}(w)\;</math> is the [[prior probability]].

=== Relation to minimizing Kullback–Leibler divergence and cross entropy ===

Finding <math>\hat \theta</math> that maximizes the likelihood is asymptotically equivalent to finding the <math>\hat \theta</math> that defines a probability distribution (<math>Q_{\hat \theta}</math>) with minimal distance, in terms of [[Kullback–Leibler divergence]], to the real probability distribution from which our data were generated (i.e., generated by <math>P_{\theta_0}</math>).<ref>cmplx96 (https://stats.stackexchange.com/users/177679/cmplx96), Kullback–Leibler divergence, URL (version: 2017-11-18): https://stats.stackexchange.com/q/314472 (at the youtube video, look at minutes 13 to 25)</ref> In an ideal world, P and Q are the same (and the only thing unknown is <math>\theta</math> that defines P), but even if they are not and the model we use is misspecified, the MLE will still give us the "closest" distribution (within the restriction of a model Q that depends on <math>\hat \theta</math>) to the real distribution <math>P_{\theta_0}</math>.<ref>[https://web.stanford.edu/class/stats200/Lecture16.pdf Introduction to Statistical Inference | Stanford (Lecture 16 — MLE under model misspecification)]</ref>

{| role="presentation" class="wikitable mw-collapsible mw-collapsed"
| '''Proof.'''
|-
| For simplicity of notation, let's assume that P=Q.
Let there be ''n'' [[i.i.d]] data samples <math>\mathbf{y} = (y_1, y_2, \ldots, y_n)</math> from some distribution <math>y \sim P_{\theta_0}</math>, which we try to estimate by finding the <math>\hat \theta</math> that maximizes the likelihood using <math>P_{\theta}</math>; then: <math display="block">\begin{align} \hat \theta &= \underset{\theta}{\operatorname{arg\,max}}\, L_{P_{\theta}}(\mathbf{y}) = \underset{\theta}{\operatorname{arg\,max}}\, P_{\theta} (\mathbf{y}) = \underset{\theta}{\operatorname{arg\,max}}\, P (\mathbf{y} \mid \theta)\\ &= \underset{\theta}{\operatorname{arg\,max}}\, \prod_{i=1}^n P (y_i \mid \theta) = \underset{\theta}{\operatorname{arg\,max}}\, \sum_{i=1}^n \log P (y_i \mid \theta) \\ &= \underset{\theta}{\operatorname{arg\,max}}\, \left( \sum_{i=1}^n \log P (y_i \mid \theta) - \sum_{i=1}^n \log P (y_i \mid \theta_0) \right) = \underset{\theta}{\operatorname{arg\,max}}\, \sum_{i=1}^n \left( \log P (y_i \mid \theta) - \log P (y_i \mid \theta_0) \right) \\ &= \underset{\theta}{\operatorname{arg\,max}}\, \sum_{i=1}^n \log\frac{P (y_i \mid \theta)}{P (y_i \mid \theta_0)} = \underset{\theta}{\operatorname{arg\,min}}\, \sum_{i=1}^n \log \frac{P (y_i \mid \theta_0)}{P (y_i \mid \theta)} = \underset{\theta}{\operatorname{arg\,min}}\, \frac{1}{n} \sum_{i=1}^n \log \frac{P (y_i \mid \theta_0)}{P (y_i \mid \theta)} \\ &= \underset{\theta}{\operatorname{arg\,min}}\, \frac{1}{n} \sum_{i=1}^n h_{\theta}(y_i) \quad \underset{n\to\infty}{\longrightarrow} \quad \underset{\theta}{\operatorname{arg\,min}}\, E [ h_{\theta}(y) ] \\ &=\underset{\theta}{\operatorname{arg\,min}}\, \int P_{\theta_0}(y) h_\theta(y) dy = \underset{\theta}{\operatorname{arg\,min}}\, \int P_{\theta_0}(y) \log \frac{P (y \mid \theta_0)}{P (y \mid \theta)} dy\\ &= \underset{\theta}{\operatorname{arg\,min}}\, D_\text{KL}(P_{\theta_0} \parallel P_{\theta}) \end{align}</math> where <math>h_{\theta}(x) = \log \frac{P (x \mid \theta_0)}{P (x \mid \theta)}</math>. 
Using ''h'' helps see how we are using the [[law of large numbers]] to move from the average of ''h''(''x'') to its [[Expected value|expectation]] via the [[law of the unconscious statistician]]. The first several transitions rely on properties of the [[logarithm]] and on the fact that the <math>\hat \theta</math> maximizing some function also maximizes any monotonic transformation of that function (e.g., the function after adding a constant or multiplying by a positive constant). Since [[Kullback–Leibler divergence#Cross entropy|cross entropy]] is just [[Entropy (information theory)|Shannon's entropy]] plus KL divergence, and since the entropy of <math>P_{\theta_0}</math> is constant, the MLE also asymptotically minimizes cross entropy.<ref>Sycorax says Reinstate Monica (https://stats.stackexchange.com/users/22311/sycorax-says-reinstate-monica), the relationship between maximizing the likelihood and minimizing the cross-entropy, URL (version: 2019-11-06): https://stats.stackexchange.com/q/364237</ref>
|}
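This equivalence can also be checked numerically on a finite sample: because the log ''P''(''y'' | ''θ''<sub>0</sub>) term is constant in ''θ'', maximizing the empirical log-likelihood and minimizing the empirical KL estimate select the same parameter. A minimal sketch (the Bernoulli model and values are illustrative choices, not from the article):

```python
import math
import random

random.seed(3)
p0 = 0.3  # true Bernoulli parameter (illustrative value)
ys = [1 if random.random() < p0 else 0 for _ in range(5_000)]

def log_p(y, p):
    return math.log(p if y == 1 else 1.0 - p)

grid = [i / 100 for i in range(1, 100)]

# Maximizing the total log-likelihood over the grid ...
mle = max(grid, key=lambda p: sum(log_p(y, p) for y in ys))

# ... picks the same parameter as minimizing the empirical KL estimate:
# the log P(y | p0) term does not depend on p, so the optima coincide.
kl_min = min(grid, key=lambda p: sum(log_p(y, p0) - log_p(y, p) for y in ys))

assert mle == kl_min
```

The same argmin is obtained from the empirical cross entropy, since it differs from the KL estimate only by the constant entropy of the data-generating distribution.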