==Statistical inference==
Below, suppose random variable ''X'' is exponentially distributed with rate parameter λ, and <math>x_1, \dotsc, x_n</math> are ''n'' independent samples from ''X'', with sample mean <math>\bar{x}</math>.

===Parameter estimation===
The [[maximum likelihood]] estimator for λ is constructed as follows. The [[likelihood function]] for λ, given an [[independent and identically distributed]] sample ''x'' = (''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>) drawn from the variable, is:
<math display="block"> L(\lambda) = \prod_{i=1}^n\lambda\exp(-\lambda x_i) = \lambda^n\exp\left(-\lambda \sum_{i=1}^n x_i\right) = \lambda^n\exp\left(-\lambda n\overline{x}\right), </math>
where
<math display="block">\overline{x} = \frac{1}{n}\sum_{i=1}^n x_i</math>
is the sample mean.

The derivative of the likelihood function's logarithm is:
<math display="block"> \frac{d}{d\lambda} \ln L(\lambda) = \frac{d}{d\lambda} \left( n \ln\lambda - \lambda n\overline{x} \right) = \frac{n}{\lambda} - n\overline{x}\ \begin{cases} > 0, & 0 < \lambda < \frac{1}{\overline{x}}, \\[8pt] = 0, & \lambda = \frac{1}{\overline{x}}, \\[8pt] < 0, & \lambda > \frac{1}{\overline{x}}. \end{cases} </math>
Consequently, the [[maximum likelihood]] estimate for the rate parameter is:
<math display="block">\widehat{\lambda}_\text{mle} = \frac{1}{\overline{x}} = \frac{n}{\sum_i x_i}.</math>

This is {{em|not}} an [[unbiased estimator]] of <math>\lambda,</math> although <math>\overline{x}</math> {{em|is}} an unbiased<ref name="Dean W. Wichern-2007">{{cite book|author1=Richard Arnold Johnson|author2=Dean W. Wichern|title=Applied Multivariate Statistical Analysis|url=https://books.google.com/books?id=gFWcQgAACAAJ|access-date=10 August 2012|year=2007 |publisher=Pearson Prentice Hall|isbn=978-0-13-187715-3}}</ref> MLE<ref>''[http://www.itl.nist.gov/div898/handbook/eda/section3/eda3667.htm NIST/SEMATECH e-Handbook of Statistical Methods]''</ref> estimator of <math>1/\lambda</math> and the distribution mean. The bias of <math> \widehat{\lambda}_\text{mle} </math> is equal to
<math display="block">B \equiv \operatorname{E}\left[\widehat{\lambda}_\text{mle} - \lambda\right] = \frac{\lambda}{n - 1},</math>
which yields the [[Maximum likelihood estimation#Second-order efficiency after correction for bias|bias-corrected maximum likelihood estimator]]
<math display="block">\widehat{\lambda}^*_\text{mle} = \widehat{\lambda}_\text{mle} - B.</math>

An approximate minimizer of [[mean squared error]] (see also: [[bias–variance tradeoff]]) can be found, assuming a sample size greater than two, with a correction factor to the MLE:
<math display="block">\widehat{\lambda} = \left(\frac{n - 2}{n}\right) \left(\frac{1}{\bar{x}}\right) = \frac{n - 2}{\sum_i x_i}.</math>
This is derived from the mean and variance of the [[inverse-gamma distribution]], <math display="inline">\mbox{Inv-Gamma}(n, \lambda)</math>.<ref>{{cite journal |first1=Abdulaziz |last1=Elfessi |first2=David M. |last2=Reineke |title=A Bayesian Look at Classical Estimation: The Exponential Distribution |journal=Journal of Statistics Education |volume=9 |issue=1 |year=2001 |doi=10.1080/10691898.2001.11910648|doi-access=free }}</ref>

===Fisher information===
The [[Fisher information]], denoted <math>\mathcal{I}(\lambda)</math>, for an estimator of the rate parameter <math>\lambda</math> is given as:
<math display="block">\mathcal{I}(\lambda) = \operatorname{E} \left[\left. \left(\frac{\partial}{\partial\lambda} \log f(x;\lambda)\right)^2\right|\lambda\right] = \int \left(\frac{\partial}{\partial\lambda} \log f(x;\lambda)\right)^2 f(x; \lambda)\,dx.</math>
Plugging in the distribution and solving gives:
<math display="block"> \mathcal{I}(\lambda) = \int_{0}^{\infty} \left(\frac{\partial}{\partial\lambda} \log \lambda e^{-\lambda x}\right)^2 \lambda e^{-\lambda x}\,dx = \int_{0}^{\infty} \left(\frac{1}{\lambda} - x\right)^2 \lambda e^{-\lambda x}\,dx = \lambda^{-2}.</math>
This determines the amount of information each independent sample of an exponential distribution carries about the unknown rate parameter <math>\lambda</math>.

===Confidence intervals===
An exact 100(1&nbsp;−&nbsp;α)% confidence interval for the rate parameter of an exponential distribution is given by:<ref>{{cite book| title=Introduction to probability and statistics for engineers and scientists|first=Sheldon M.|last=Ross| page=267| url=https://books.google.com/books?id=mXP_UEiUo9wC&pg=PA267| edition=4th|year=2009| publisher=Associated Press| isbn=978-0-12-370483-2}}</ref>
<math display="block">\frac{2n}{\widehat{\lambda}_{\textrm{mle}} \chi^2_{\frac{\alpha}{2},2n} }< \frac{1}{\lambda} < \frac{2n}{\widehat{\lambda}_{\textrm{mle}} \chi^2_{1-\frac{\alpha}{2},2n}}\,,</math>
which is also equal to
<math display="block">\frac{2n\overline{x}}{\chi^2_{\frac{\alpha}{2},2n}} < \frac{1}{\lambda} < \frac{2n\overline{x}}{\chi^2_{1-\frac{\alpha}{2},2n}}\,,</math>
where {{math|χ{{su|p=2|b=''p'',''v''}}}} is the {{math|100(''p'')}} [[percentile]] of the [[chi squared distribution]] with ''v'' [[degrees of freedom (statistics)|degrees of freedom]], ''n'' is the number of observations, and <math>\overline{x}</math> is the sample average. A simple approximation to the exact interval endpoints can be derived using a normal approximation to the {{math|''χ''{{su|p=2|b=''p'',''v''}}}} distribution.
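The estimators in this section are short enough to sketch directly. The following is a minimal Python illustration (the rate and sample size are illustrative, not from the article) of the MLE <math>n/\sum_i x_i</math>, the MSE-corrected estimate <math>(n-2)/\sum_i x_i</math>, and the normal-approximation 95% interval <math>\widehat{\lambda}(1 \pm 1.96/\sqrt{n})</math> discussed in this section:

```python
import math
import random

def exp_rate_estimates(samples):
    """Point estimates of the exponential rate from i.i.d. samples.

    Returns (mle, mse_corrected): the maximum likelihood estimate
    n / sum(x) = 1 / x-bar, and the approximate MSE-minimizing
    (n - 2) / sum(x), which requires n > 2.
    """
    n = len(samples)
    total = sum(samples)
    mle = n / total
    mse_corrected = (n - 2) / total
    return mle, mse_corrected

def approx_ci_95(mle, n):
    """Normal-approximation 95% interval: lambda-hat * (1 +/- 1.96/sqrt(n))."""
    half_width = 1.96 / math.sqrt(n)
    return mle * (1 - half_width), mle * (1 + half_width)

# Demonstration with a known rate.
rng = random.Random(0)
true_rate = 2.0
samples = [rng.expovariate(true_rate) for _ in range(5000)]
mle, mse = exp_rate_estimates(samples)
lo, hi = approx_ci_95(mle, len(samples))
print(f"MLE = {mle:.3f}, MSE-corrected = {mse:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The exact chi-squared interval can be computed the same way once chi-squared quantiles are available (e.g. from a statistics library); only the quantile lookup changes.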
This approximation gives the following values for a 95% confidence interval:
<math display="block">\begin{align} \lambda_\text{lower} &= \widehat{\lambda}\left(1 - \frac{1.96}{\sqrt{n}}\right) \\ \lambda_\text{upper} &= \widehat{\lambda}\left(1 + \frac{1.96}{\sqrt{n}}\right) \end{align}</math>
This approximation may be acceptable for samples containing at least 15 to 20 elements.<ref name="Guerriero-2012">{{Cite journal | first1 = V. | last1= Guerriero | year = 2012 | title = Power Law Distribution: Method of Multi-scale Inferential Statistics| journal = Journal of Modern Mathematics Frontier | url =https://www.academia.edu/27459041 | volume = 1 | pages = 21–28}}</ref>

===Bayesian inference===
The [[conjugate prior]] for the exponential distribution is the [[gamma distribution]] (of which the exponential distribution is a special case). The following parameterization of the gamma probability density function is useful:
<math display="block">\operatorname{Gamma}(\lambda; \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \lambda^{\alpha-1} \exp(-\lambda\beta).</math>
The [[posterior distribution]] ''p'' can then be expressed in terms of the likelihood function defined above and a gamma prior:
<math display="block">\begin{align} p(\lambda) &\propto L(\lambda) \operatorname{Gamma}(\lambda; \alpha, \beta) \\ &= \lambda^n \exp\left(-\lambda n\overline{x}\right) \frac{\beta^{\alpha}}{\Gamma(\alpha)} \lambda^{\alpha-1} \exp(-\lambda \beta) \\ &\propto \lambda^{(\alpha+n)-1} \exp(-\lambda \left(\beta + n\overline{x}\right)). \end{align}</math>
Now the posterior density ''p'' has been specified up to a missing normalizing constant.
Since it has the form of a gamma pdf, this can easily be filled in, and one obtains:
<math display="block">p(\lambda) = \operatorname{Gamma}(\lambda; \alpha + n, \beta + n\overline{x}).</math>
Here the [[Hyperparameter (Bayesian statistics)|hyperparameter]] ''α'' can be interpreted as the number of prior observations, and ''β'' as the sum of the prior observations. The posterior mean here is:
<math display="block">\frac{\alpha + n}{\beta + n\overline{x}}.</math>
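Because the prior is conjugate, the posterior update reduces to two additions. A minimal Python sketch (the prior hyperparameters and simulated data are illustrative; a weak Gamma(1, 1) prior is assumed):

```python
import random

def gamma_posterior(alpha, beta, samples):
    """Conjugate update for an exponential likelihood with a
    Gamma(alpha, beta) prior on the rate lambda:
    posterior is Gamma(alpha + n, beta + sum(x)),
    where sum(x) = n * x-bar."""
    n = len(samples)
    return alpha + n, beta + sum(samples)

# Simulated data with a known rate (illustrative values).
rng = random.Random(1)
true_rate = 0.5
samples = [rng.expovariate(true_rate) for _ in range(2000)]

# Weak prior: alpha = 1, beta = 1, i.e. an Exp(1) prior on lambda.
a_post, b_post = gamma_posterior(1.0, 1.0, samples)
posterior_mean = a_post / b_post  # (alpha + n) / (beta + n * x-bar)
print(f"posterior Gamma({a_post:.0f}, {b_post:.1f}); mean = {posterior_mean:.4f}")
```

With many observations the prior terms are negligible, so the posterior mean approaches the MLE <math>1/\overline{x}</math>, as the formula above suggests.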