{{Short description|Probability distribution}} {{Infobox probability distribution | name = Log-normal distribution | type = continuous | pdf_image = [[Image:Log-normal-pdfs.png|300px|Plot of the Lognormal PDF]]<br/><small>Identical parameter <math> \mu </math> but differing parameters <math>\sigma</math></small> | cdf_image = [[Image:Log-normal-cdfs.png|300px|Plot of the Lognormal CDF]]<br/><small><math> \mu = 0 </math></small> | notation = <math> \operatorname{Lognormal}\left( \mu,\,\sigma^2 \right) </math> | parameters = {{plainlist | * <math> \mu \in ( -\infty, +\infty ) </math> (logarithm of [[location parameter|location]]), * <math> \sigma > 0 </math> (logarithm of [[scale parameter|scale]]) }} | support = <math> x \in ( 0, +\infty ) </math> | pdf = <math> \frac{ 1 }{ x \sigma \sqrt{2\pi } } \exp\left( - \frac{ \left( \ln x - \mu \right)^2}{ 2 \sigma^2 } \right)</math> | cdf = <math>\begin{align} &\frac{ 1 }{2}\left[1 + \operatorname{erf}\left( \frac{ \ln x - \mu }{\sigma\sqrt{2}} \right)\right] \\[1ex] &= \Phi{\left(\frac{\ln x -\mu}{\sigma} \right)} \end{align}</math> | quantile = <math>\begin{align} &\exp\left( \mu + \sqrt{2\sigma^2}\operatorname{erf}^{-1}(2 p - 1) \right) \\[1ex] & = \exp(\mu + \sigma \Phi^{-1}(p)) \end{align}</math> | mean = <math> \exp\left( \mu + \frac{\sigma^2}{2} \right) </math> | median = <math> \exp( \mu ) </math> | mode = <math> \exp\left( \mu - \sigma^2 \right) </math> | variance = <math> \left[ \exp(\sigma^2) - 1 \right] \exp\left( 2 \mu + \sigma^2\right ) </math> | skewness = <math> \left[ \exp\left( \sigma^2 \right) + 2 \right] \sqrt{\exp(\sigma^2) - 1 }</math> | kurtosis = <math> \exp\left( 4 \sigma^2 \right) + 2 \exp\left( 3 \sigma^2 \right) + 3 \exp\left( 2\sigma^2 \right) - 6 </math> | entropy = <math> \log_2 \left( \sqrt{2\pi e} \, \sigma e^{ \mu } \right) </math> | mgf = defined only for numbers with a {{nowrap|non-positive}} real part, see text | char = representation <math> \sum_{n=0}^{\infty} \frac{ {\left(i 
t\right)}^n }{ n! }e^{ n \mu + n^2 \sigma^2/2} </math> is asymptotically divergent, but adequate for most numerical purposes | fisher = <math> \frac{1}{\sigma^2} \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} </math> | moments = <math> \mu = \ln \operatorname{E}[X] - \frac{1}{2} \ln\left( \frac{ \operatorname{Var}[X] }{ \operatorname{E}[X]^2 } + 1 \right),</math> <br/> <math> \sigma = \sqrt{ \ln \left( \frac{ \operatorname{Var}[X] }{ \operatorname{E}[X]^2 } + 1 \right) } </math> | ES = <math>\begin{align} &\frac{ e^{ \mu + \frac{ \sigma^2 }{2}} }{ 2p } \left[ 1 + \operatorname{erf} \left( \frac{ \sigma }{ \sqrt{2} } + \operatorname{erf}^{-1}(2p-1) \right) \right] \\[0.5ex] &= \frac{e^{ \mu + \frac{ \sigma^2 }{2}}}{1-p} \left[1 - \Phi(\Phi^{-1}(p) - \sigma)\right] \end{align}</math><ref name="norton">{{cite journal | last1 = Norton | first1 = Matthew | last2 = Khokhlov | first2 = Valentyn | last3 = Uryasev | first3 = Stan | year = 2019 | title = Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation | journal = [[Annals of Operations Research]] | volume = 299 | issue = 1–2 | pages = 1281–1315 | publisher = Springer | doi = 10.1007/s10479-019-03373-1 | arxiv = 1811.11301 | s2cid = 254231768 | url = http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf | archive-url = https://web.archive.org/web/20210418151110/http://uryasev.ams.stonybrook.edu/wp-content/uploads/2019/10/Norton2019_CVaR_bPOE.pdf | archive-date = 2021-04-18 | url-status = live | access-date = 2023-02-27 | via = stonybrook.edu }} </ref> }} <!-- end "infobox" --> In [[probability theory]], a '''log-normal''' (or '''lognormal''') '''distribution''' is a continuous [[probability distribution]] of a [[random variable]] whose [[logarithm]] is [[normal distribution|normally distributed]]. 
Thus, if the random variable {{mvar|X}} is log-normally distributed, then {{math|1=''Y'' = ln ''X''}} has a normal distribution.<ref name=":1">{{Cite web | last = Weisstein | first = Eric W. | title = Log Normal Distribution | website = mathworld.wolfram.com | language = en | url = https://mathworld.wolfram.com/LogNormalDistribution.html | access-date = 2020-09-13 }}</ref><ref name=":2">{{cite web | title = 1.3.6.6.9. Lognormal Distribution | website = www.itl.nist.gov | publisher = U.S. [[National Institute of Standards and Technology]] (NIST) | url = https://www.itl.nist.gov/div898/handbook/eda/section3/eda3669.htm |access-date = 2020-09-13 }}</ref> Equivalently, if {{mvar|Y}} has a normal distribution, then the [[exponential function]] of {{mvar|Y}}, {{math|1=''X'' = exp(''Y'')}}, has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and [[engineering]] sciences, as well as [[medicine]], [[economics]] and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics). The distribution is occasionally referred to as the '''Galton distribution''' or '''Galton's distribution''', after [[Francis Galton]].<ref name="JKB"/> The log-normal distribution has also been associated with other names, such as [[Donald MacAlister#log-normal|McAlister]], [[Gibrat's law|Gibrat]] and [[Cobb–Douglas]].<ref name="JKB"/> A log-normal process is the statistical realization of the multiplicative [[mathematical product|product]] of many [[statistical independence|independent]] [[random variable]]s, each of which is positive. This is justified by considering the [[central limit theorem]] in the log domain (sometimes called [[Gibrat's law]]). 
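The defining relationship above, X = exp(Y) with Y normal, is easy to check numerically. A minimal sketch using NumPy (the parameters μ = 0.5 and σ = 0.8 are illustrative, not from the article): draw normal samples, exponentiate them, and compare sample statistics with the closed-form mean exp(μ + σ²/2) and median exp(μ) given in the infobox.

```python
import numpy as np

# Sketch with illustrative parameters: draw Y ~ Normal(mu, sigma^2);
# then X = exp(Y) is log-normally distributed, Lognormal(mu, sigma^2).
rng = np.random.default_rng(seed=0)
mu, sigma = 0.5, 0.8

y = rng.normal(mu, sigma, size=100_000)
x = np.exp(y)

# By construction, ln X recovers the underlying normal sample exactly.
assert np.allclose(np.log(x), y)

# Sample statistics approximate the closed forms from the infobox:
#   mean   = exp(mu + sigma^2 / 2)
#   median = exp(mu)
print(x.mean(), np.exp(mu + sigma**2 / 2))
print(np.median(x), np.exp(mu))
```

With 100,000 samples the sample mean and median land within a few percent of the theoretical values, as the law of large numbers suggests.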
The log-normal distribution is the [[maximum entropy probability distribution]] for a random variate {{mvar|X}} for which the mean and variance of {{math|ln ''X''}} are specified.<ref>{{cite journal | last1 = Park | first1 = Sung Y. | last2 = Bera | first2 = Anil K. | year = 2009 | title = Maximum entropy autoregressive conditional heteroskedasticity model | journal = Journal of Econometrics | volume = 150 | issue = 2 | pages = 219–230, esp. Table 1, p. 221 | citeseerx = 10.1.1.511.9750 | doi = 10.1016/j.jeconom.2008.12.014 | url = http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf | url-status = dead | archive-url = https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf | archive-date = 2016-03-07 | access-date = 2011-06-02}}</ref>
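In applications one often has a target mean and variance for {{mvar|X}} and needs the parameters (μ, σ) of the underlying normal. A short sketch of the moment-inversion formulas from the infobox, using only the standard library (the target mean 2.0 and variance 1.0 are illustrative values, not from the article):

```python
import math

def lognormal_params(m, v):
    """Recover (mu, sigma) of ln X from a desired mean m and variance v of X,
    via the inversion formulas:
        sigma^2 = ln(v / m^2 + 1)
        mu      = ln(m) - sigma^2 / 2
    """
    sigma2 = math.log(v / m**2 + 1)
    mu = math.log(m) - sigma2 / 2
    return mu, math.sqrt(sigma2)

mu, sigma = lognormal_params(2.0, 1.0)

# Round trip through the closed-form mean and variance from the infobox;
# these recover the targets 2.0 and 1.0 exactly (up to float rounding).
mean = math.exp(mu + sigma**2 / 2)
var = (math.exp(sigma**2) - 1) * math.exp(2 * mu + sigma**2)
```

The round trip works because the mean and variance of a log-normal determine (μ, σ²) uniquely, so the two formula pairs are exact inverses of each other.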