== Properties ==

=== Mean and variance ===
The mean of the gamma distribution is given by the product of its shape and scale parameters:
<math display=block>\mu = \alpha\theta = \alpha/\lambda</math>
The variance is:
<math display=block>\sigma^2 = \alpha \theta^2 = \alpha/\lambda^2</math>
The square root of the inverse shape parameter gives the [[coefficient of variation]]:
<math display=block>\sigma/\mu = \alpha^{-1/2} = 1/\sqrt{\alpha}</math>

=== Skewness ===
The [[skewness]] of the gamma distribution depends only on its shape parameter, {{mvar|α}}, and is equal to <math>2/\sqrt{\alpha}.</math>

=== Higher moments ===
The {{mvar|n}}-th [[raw moment]] is given by:
<math display=block> \mathrm{E}[X^n] = \theta^n \frac{\Gamma(\alpha+n)}{\Gamma(\alpha)} = \theta^n \prod_{i=1}^n(\alpha+i-1) \; \text{ for } n=1, 2, \ldots. </math>
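These formulas are easy to cross-check numerically. The following is a minimal sketch using SciPy's <code>scipy.stats.gamma</code> (which takes the shape {{mvar|α}} as its first argument and the scale {{mvar|θ}} via <code>scale=</code>); the parameter values {{math|1=''α'' = 3.5}} and {{math|1=''θ'' = 2}} are arbitrary illustrative choices, not values from any cited source:

<syntaxhighlight lang="python">
# Numerical check of the mean, variance, coefficient of variation, skewness,
# and raw-moment formulas above (arbitrary illustrative parameters).
from math import isclose, prod, sqrt
from scipy.stats import gamma

alpha, theta = 3.5, 2.0
mean, var, skew, _ = gamma.stats(alpha, scale=theta, moments='mvsk')

assert isclose(mean, alpha * theta)                # mu = alpha * theta
assert isclose(var, alpha * theta**2)              # sigma^2 = alpha * theta^2
assert isclose(sqrt(var) / mean, 1 / sqrt(alpha))  # coefficient of variation
assert isclose(skew, 2 / sqrt(alpha))              # skewness = 2 / sqrt(alpha)

for n in range(1, 5):                              # raw moments E[X^n]
    closed_form = theta**n * prod(alpha + i - 1 for i in range(1, n + 1))
    assert isclose(gamma.moment(n, alpha, scale=theta), closed_form, rel_tol=1e-6)
</syntaxhighlight>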
===Median approximations and bounds===
[[File:Gamma distribution median bounds.png|thumb|320px|Bounds and asymptotic approximations to the median of the gamma distribution. The cyan-colored region indicates the large gap between published lower and upper bounds before 2021.]]
Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median does not have a closed-form equation. The median for this distribution is the value <math>\nu</math> such that
<math display=block>\frac{1}{\Gamma(\alpha) \theta^\alpha} \int_0^{\nu} x^{\alpha - 1} e^{-x/\theta} dx = \frac{1}{2}.</math>

A rigorous treatment of the problem of determining an asymptotic expansion and bounds for the median of the gamma distribution was first given by Chen and Rubin, who proved that (for <math>\theta = 1</math>)
<math display=block> \alpha - \frac{1}{3} < \nu(\alpha) < \alpha, </math>
where <math>\mu(\alpha) = \alpha</math> is the mean and <math>\nu(\alpha)</math> is the median of the <math>\text{Gamma}(\alpha,1)</math> distribution.<ref>Jeesen Chen, [[Herman Rubin]], Bounds for the difference between median and mean of gamma and Poisson distributions, Statistics & Probability Letters, Volume 4, Issue 6, October 1986, Pages 281–283, {{issn|0167-7152}}, [https://dx.doi.org/10.1016/0167-7152(86)90044-1 doi:10.1016/0167-7152(86)90044-1].</ref> For other values of the scale parameter, the mean scales to <math>\mu = \alpha\theta</math>, and the median bounds and approximations are similarly scaled by {{mvar|θ}}.

K. P. Choi found the first five terms in a [[Laurent series]] asymptotic approximation of the median by comparing the median to [[Ramanujan theta function|Ramanujan's <math> \theta </math> function]].<ref>Choi, K. P. [https://www.ams.org/journals/proc/1994-121-01/S0002-9939-1994-1195477-8/S0002-9939-1994-1195477-8.pdf "On the Medians of the Gamma Distributions and an Equation of Ramanujan"] {{Webarchive|url=https://web.archive.org/web/20210123121523/https://www.ams.org/journals/proc/1994-121-01/S0002-9939-1994-1195477-8/S0002-9939-1994-1195477-8.pdf |date=2021-01-23 }}, Proceedings of the American Mathematical Society, Vol. 121, No. 1 (May, 1994), pp. 245–251.</ref> Berg and Pedersen found more terms:<ref name="Pedersen, Henrik L.-2006">{{cite journal |author=Berg, Christian |author2=Pedersen, Henrik L. |name-list-style=amp |title=The Chen–Rubin conjecture in a continuous setting |journal=Methods and Applications of Analysis |date=March 2006 |volume=13 |issue=1 |pages=63–88 |doi=10.4310/MAA.2006.v13.n1.a4 |s2cid=6704865 |url=https://www.intlpress.com/site/pub/files/_fulltext/journals/maa/2006/0013/0001/MAA-2006-0013-0001-a004.pdf |access-date=1 April 2020 |doi-access=free |archive-date=16 January 2021 |archive-url=https://web.archive.org/web/20210116114105/https://www.intlpress.com/site/pub/files/_fulltext/journals/maa/2006/0013/0001/MAA-2006-0013-0001-a004.pdf |url-status=live }}</ref>
<math display=block> \nu(\alpha) = \alpha - \frac{1}{3} + \frac{8}{405\alpha} + \frac{184}{25515 \alpha^2} + \frac{2248}{3444525 \alpha^3} - \frac{19006408}{15345358875 \alpha^4} + O\left(\frac{1}{\alpha^5}\right) </math>
[[File:Gamma distribution median Lyon bounds.png|320px|thumb|Two asymptotes to the gamma distribution median, proved in 2023 to be bounds (upper solid red and lower dashed red), of the form <math>\nu(\alpha) \approx 2^{-1/\alpha}(A + \alpha)</math>, and an interpolation between them that makes an approximation (dotted red) that is exact at {{math|1=''α'' = 1}} and has maximum relative error of about 0.6%. The cyan shaded region is the remaining gap between upper and lower bounds (or conjectured bounds), including these new bounds and the bounds in the previous figure.]]
[[File:Gamma distribution median loglog bounds.png|thumb|320px|[[Log–log plot]] of upper (solid) and lower (dashed) bounds to the median of a gamma distribution and the gaps between them. The green, yellow, and cyan regions represent the gap before the Lyon 2021 paper. The green and yellow narrow that gap with the lower bounds that Lyon proved. Lyon's bounds proved in 2023 further narrow the yellow. Mostly within the yellow, closed-form rational-function-interpolated conjectured bounds are plotted along with the numerically calculated median (dotted) value. Tighter interpolated bounds exist but are not plotted, as they would not be resolved at this scale.]]
Partial sums of these series are good approximations for high enough {{mvar|α}}; they are not plotted in the figure, which is focused on the low-{{mvar|α}} region that is less well approximated.
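As a concrete illustration (not taken from the cited papers), the partial sum of this expansion can be compared with a median computed by numerically inverting the CDF; the sketch below uses SciPy, with {{math|1=''θ'' = 1}} and arbitrary {{mvar|α}} values:

<syntaxhighlight lang="python">
# Sketch: compare the Berg–Pedersen partial-sum approximation above with the
# numerically computed median (scale theta = 1); alpha values are arbitrary.
from scipy.stats import gamma

def median_series(alpha):
    """Partial sum of the asymptotic expansion, through the 1/alpha^4 term."""
    return (alpha - 1/3 + 8/(405*alpha) + 184/(25515*alpha**2)
            + 2248/(3444525*alpha**3) - 19006408/(15345358875*alpha**4))

for alpha in [0.5, 1, 2, 5, 20]:
    exact = gamma.median(alpha)          # numerical inverse CDF at 1/2
    approx = median_series(alpha)
    print(f"alpha={alpha:5}: median={exact:.10f}  series={approx:.10f}  "
          f"rel. err={abs(approx - exact)/exact:.2e}")
</syntaxhighlight>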
Berg and Pedersen also proved many properties of the median, showing that it is a convex function of {{mvar|α}},<ref name="Berg">Berg, Christian and Pedersen, Henrik L. [https://arxiv.org/abs/math/0609442 "Convexity of the median in the gamma distribution"] {{Webarchive|url=https://web.archive.org/web/20230526181721/https://arxiv.org/abs/math/0609442 |date=2023-05-26 }}.</ref> and that the asymptotic behavior near <math>\alpha = 0</math> is <math>\nu(\alpha) \approx e^{-\gamma}2^{-1/\alpha}</math> (where {{mvar|γ}} is the [[Euler–Mascheroni constant]]), and that for all <math>\alpha > 0</math> the median is bounded by <math>\alpha 2^{-1/\alpha} < \nu(\alpha) < \alpha e^{-1/(3\alpha)}</math>.<ref name="Pedersen, Henrik L.-2006"/>

A closer linear upper bound, for <math>\alpha \ge 1</math> only, was provided in 2021 by Gaunt and Merkle,<ref>{{cite journal |last1=Gaunt |first1=Robert E. |last2=Merkle |first2=Milan |title=On bounds for the mode and median of the generalized hyperbolic and related distributions |journal=Journal of Mathematical Analysis and Applications |date=2021 |volume=493 |issue=1 |pages=124508 |doi=10.1016/j.jmaa.2020.124508 |arxiv=2002.01884 |s2cid=221103640 }}</ref> relying on the Berg and Pedersen result that the slope of <math>\nu(\alpha)</math> is everywhere less than 1:
<math display=block> \nu(\alpha) \le \alpha - 1 + \log 2 \quad</math> for <math>\alpha \ge 1</math> (with equality at <math>\alpha = 1</math>),
which can be extended to a bound for all <math>\alpha > 0</math> by taking the maximum with the chord shown in the figure, since the median was proved to be convex.<ref name="Berg"/>

An approximation to the median that is asymptotically accurate at high {{mvar|α}} and reasonable down to <math>\alpha = 0.5</math> or a bit lower follows from the [[Wilson–Hilferty transformation]]:
<math display=block> \nu(\alpha) \approx \alpha \left( 1 - \frac{1}{9\alpha} \right)^3 </math>
which goes negative for <math>\alpha < 1/9</math>.
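A short sketch (again using SciPy with {{math|1=''θ'' = 1}} and arbitrary {{mvar|α}} values, not taken from the cited sources) shows how the Wilson–Hilferty approximation and the Gaunt–Merkle upper bound compare with the numerically computed median:

<syntaxhighlight lang="python">
# Sketch (SciPy, theta = 1, arbitrary alpha values): Wilson–Hilferty
# approximation and Gaunt–Merkle upper bound versus the numerical median.
import numpy as np
from scipy.stats import gamma

def wilson_hilferty(alpha):
    return alpha * (1 - 1 / (9 * alpha))**3   # goes negative for alpha < 1/9

def gaunt_merkle_upper(alpha):
    return alpha - 1 + np.log(2)              # upper bound, valid for alpha >= 1

for alpha in [0.5, 1.0, 2.0, 10.0]:
    nu = gamma.median(alpha)
    print(f"alpha={alpha:4}: median={nu:.6f}  "
          f"Wilson-Hilferty={wilson_hilferty(alpha):.6f}  "
          f"upper bound (alpha>=1)={gaunt_merkle_upper(alpha):.6f}")
</syntaxhighlight>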
In 2021, Lyon proposed several approximations of the form <math>\nu(\alpha) \approx 2^{-1/\alpha}(A + B\alpha)</math>. He conjectured values of {{mvar|A}} and {{mvar|B}} for which this approximation is an asymptotically tight upper or lower bound for all <math>\alpha > 0</math>.<ref name="Lyon-2021a">{{cite journal |last1=Lyon |first1=Richard F. |title=On closed-form tight bounds and approximations for the median of a gamma distribution |journal=[[PLOS One]] |date=13 May 2021 |volume=16 |issue=5 |pages=e0251626 |doi=10.1371/journal.pone.0251626 |pmid=33984053 |pmc=8118309 |arxiv=2011.04060 |bibcode=2021PLoSO..1651626L |doi-access=free }}</ref> In particular, he proposed these closed-form bounds, which he proved in 2023:<ref name="Lyon-2021b">{{cite journal |last1=Lyon |first1=Richard F. |title=Tight bounds for the median of a gamma distribution |journal=[[PLOS One]] |date=2023 |volume=18 |issue=9 |pages=e0288601 |doi=10.1371/journal.pone.0288601 |pmid=37682854 |pmc=10490949 |doi-access=free }}</ref>
<math display=block> \nu_{L\infty}(\alpha) = 2^{-1/\alpha}\left(\log 2 - \frac{1}{3} + \alpha\right) \quad</math> is a lower bound, asymptotically tight as <math>\alpha \to \infty</math>, and
<math display=block> \nu_U(\alpha) = 2^{-1/\alpha}\left(e^{-\gamma} + \alpha\right) \quad</math> is an upper bound, asymptotically tight as <math>\alpha \to 0</math>.

Lyon also showed (informally in 2021, rigorously in 2023) two other lower bounds that are not [[closed-form expression]]s, including this one involving the [[gamma function]], obtained by substituting 1 for <math>e^{-x}</math> in the integral expression for the median:
<math display=block>\nu(\alpha) > \left( \frac{2}{\Gamma(\alpha+1)} \right)^{-1/\alpha} \quad</math> (approaching equality as <math>\alpha \to 0</math>)
and the tangent line at <math>\alpha = 1</math>, where the derivative was found to be <math>\nu^\prime(1) \approx 0.9680448</math>:
<math display=block>\nu(\alpha) \ge \nu(1) + (\alpha-1) \nu^\prime(1) \quad</math> (with equality at <math>\alpha = 1</math>),
that is,
<math display=block>\nu(\alpha) \ge \log 2 + (\alpha-1) \left(\gamma - 2 \operatorname{Ei}(-\log 2) + \log \log 2\right)</math>
where Ei is the [[exponential integral]].<ref name="Lyon-2021a"/><ref name="Lyon-2021b"/>

Additionally, he showed that interpolations between bounds could provide excellent approximations or tighter bounds to the median, including an approximation that is exact at <math>\alpha = 1</math> (where <math>\nu(1) = \log 2</math>) and has a maximum relative error less than 0.6%. Interpolated approximations and bounds are all of the form
<math display=block>\nu(\alpha) \approx \tilde{g}(\alpha)\nu_{L\infty}(\alpha) + (1 - \tilde{g}(\alpha)) \nu_U(\alpha)</math>
where <math>\tilde{g}</math> is an interpolating function running monotonically from 0 at low {{mvar|α}} to 1 at high {{mvar|α}}, approximating an ideal, or exact, interpolator <math>g(\alpha)</math>:
<math display=block>g(\alpha) = \frac{\nu_U(\alpha) - \nu(\alpha)}{\nu_U(\alpha) - \nu_{L\infty}(\alpha)}</math>
For the simplest interpolating function considered, a first-order rational function
<math display=block>\tilde{g}_1(\alpha) = \frac{\alpha}{b_0 + \alpha}</math>
the tightest lower bound has
<math display=block>b_0 = \frac{\frac{8}{405} + e^{-\gamma} \log 2 - \frac{\log^2 2}{2}}{e^{-\gamma} - \log 2 + \frac{1}{3}} - \log 2 \approx 0.143472</math>
and the tightest upper bound has
<math display=block>b_0 = \frac{e^{-\gamma} - \log 2 + \frac{1}{3}}{1 - \frac{e^{-\gamma} \pi^2}{12}} \approx 0.374654</math>
The interpolated bounds are plotted (mostly inside the yellow region) in the [[log–log plot]] shown. Even tighter bounds are available using different interpolating functions, but not usually with closed-form parameters like these.<ref name="Lyon-2021a"/>
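The closed-form bounds and the first-order rational interpolation are straightforward to evaluate directly; the following sketch (SciPy, {{math|1=''θ'' = 1}}, arbitrary {{mvar|α}} values, using the lower-bound constant {{math|''b''<sub>0</sub> ≈ 0.143472}} quoted above) checks that the numerical median falls between the two bounds:

<syntaxhighlight lang="python">
# Sketch (SciPy, theta = 1): Lyon's closed-form bounds nu_Linf and nu_U and the
# first-order rational interpolation between them, compared with the numerical
# median; alpha values are arbitrary illustrative choices.
import numpy as np
from scipy.stats import gamma

GAMMA_EM = np.euler_gamma                   # Euler–Mascheroni constant

def nu_lower(alpha):                        # lower bound, tight as alpha -> infinity
    return 2**(-1/alpha) * (np.log(2) - 1/3 + alpha)

def nu_upper(alpha):                        # upper bound, tight as alpha -> 0
    return 2**(-1/alpha) * (np.exp(-GAMMA_EM) + alpha)

def nu_interp(alpha, b0=0.143472):          # interpolation with the lower-bound b0
    g = alpha / (b0 + alpha)
    return g * nu_lower(alpha) + (1 - g) * nu_upper(alpha)

for alpha in [0.1, 0.5, 1.0, 2.0, 10.0]:
    nu = gamma.ppf(0.5, alpha)              # numerically computed median
    print(f"alpha={alpha:5}: {nu_lower(alpha):.6f} <= {nu:.6f} <= {nu_upper(alpha):.6f}, "
          f"interp={nu_interp(alpha):.6f}")
</syntaxhighlight>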
===Summation===
If {{math|''X''<sub>''i''</sub>}} has a {{math|Gamma(''α''<sub>''i''</sub>, ''θ'')}} distribution for {{math|1=''i'' = 1, 2, ..., ''N''}} (i.e., all distributions have the same scale parameter {{mvar|θ}}), then
<math display=block> \sum_{i=1}^N X_i \sim\mathrm{Gamma} \left( \sum_{i=1}^N \alpha_i, \theta \right)</math>
provided all {{math|''X''<sub>''i''</sub>}} are [[statistical independence|independent]].

For the cases where the {{math|''X''<sub>''i''</sub>}} are [[statistical independence|independent]] but have different scale parameters, see Mathai<ref>{{Cite journal|last=Mathai|first=A. M.|title=Storage capacity of a dam with gamma type inputs|journal=Annals of the Institute of Statistical Mathematics|language=en|volume=34|issue=3|pages=591–597|doi=10.1007/BF02481056|issn=0020-3157|year=1982|s2cid=122537756}}</ref> or Moschopoulos.<ref>{{cite journal |first=P. G. |last=Moschopoulos |year=1985 |title=The distribution of the sum of independent gamma random variables |journal=Annals of the Institute of Statistical Mathematics |volume=37 |issue=3 |pages=541–544 |doi=10.1007/BF02481123 |s2cid=120066454 }}</ref>

The gamma distribution exhibits [[Infinite divisibility (probability)|infinite divisibility]].

===Scaling===
If
<math display=block>X \sim \mathrm{Gamma}(\alpha, \theta),</math>
then, for any {{math|''c'' > 0}},
<math display=block>cX \sim \mathrm{Gamma}(\alpha, c\,\theta),</math>
as can be shown using moment generating functions; equivalently, in the shape–rate parameterization, if
<math display=block>X \sim \mathrm{Gamma}\left( \alpha,\lambda \right),</math>
then
<math display=block>cX \sim \mathrm{Gamma}\left( \alpha, \frac \lambda c \right).</math>
Indeed, we know that if {{mvar|X}} is an [[exponential distribution|exponential r.v.]] with rate {{mvar|λ}}, then {{math|''cX''}} is an exponential r.v. with rate {{math|''λ''/''c''}}; the same holds for gamma variates (and this can be checked using the [[moment-generating function]]; see, e.g., [http://www.stat.washington.edu/thompson/S341_10/Notes/week4.pdf these notes], 10.4-(ii)): multiplication by a positive constant {{mvar|c}} divides the rate (or, equivalently, multiplies the scale).

===Exponential family===
The gamma distribution is a two-parameter [[exponential family]] with [[natural parameters]] {{math|''α'' − 1}} and {{math|−1/''θ''}} (equivalently, {{math|''α'' − 1}} and {{math|−''λ''}}), and [[natural statistics]] {{mvar|X}} and {{math|ln ''X''}}. If the shape parameter {{mvar|α}} is held fixed, the resulting one-parameter family of distributions is a [[natural exponential family]].

===Logarithmic expectation and variance===
One can show that
<math display=block>\operatorname{E}[\ln X] = \psi(\alpha) - \ln \lambda</math>
or equivalently,
<math display=block>\operatorname{E}[\ln X] = \psi(\alpha) + \ln \theta</math>
where {{mvar|ψ}} is the [[digamma function]]. Likewise,
<math display="block">\operatorname{var}[\ln X] = \psi^{(1)}(\alpha)</math>
where <math>\psi^{(1)}</math> is the [[trigamma function]].

This can be derived using the [[exponential family]] formula for the [[exponential family#Moment generating function of the sufficient statistic|moment generating function of the sufficient statistic]], because one of the sufficient statistics of the gamma distribution is {{math|ln ''x''}}.
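A Monte Carlo sketch of these two identities (NumPy/SciPy; the values of {{mvar|α}}, {{mvar|θ}}, and the sample size are arbitrary):

<syntaxhighlight lang="python">
# Monte Carlo check of E[ln X] = psi(alpha) + ln(theta) and
# var[ln X] = psi'(alpha) (trigamma), with arbitrary parameters.
import numpy as np
from scipy.special import digamma, polygamma

alpha, theta = 2.5, 1.7
rng = np.random.default_rng(0)
x = rng.gamma(shape=alpha, scale=theta, size=1_000_000)

print("E[ln X]:   sample =", np.log(x).mean(),
      " formula =", digamma(alpha) + np.log(theta))
print("var[ln X]: sample =", np.log(x).var(),
      " formula =", polygamma(1, alpha))
</syntaxhighlight>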
===Information entropy===
The [[information entropy]] is
<math display=block> \begin{align} \operatorname{H}(X) & = \operatorname{E}[-\ln p(X)] \\[4pt] & = \operatorname{E}[-\alpha \ln \lambda + \ln \Gamma(\alpha) - (\alpha-1)\ln X + \lambda X] \\[4pt] & = \alpha - \ln \lambda + \ln \Gamma(\alpha) + (1-\alpha)\psi(\alpha). \end{align} </math>
In the {{mvar|α}}, {{mvar|θ}} parameterization, the [[information entropy]] is given by
<math display=block>\operatorname{H}(X) =\alpha + \ln \theta + \ln \Gamma(\alpha) + (1-\alpha)\psi(\alpha).</math>

===Kullback–Leibler divergence===
[[Image:Gamma-KL-3D.png|thumb|right|320px|Illustration of the Kullback–Leibler (KL) divergence for two gamma PDFs. Here {{math|1=''λ'' = ''λ''<sub>0</sub> + 1}}, with {{math|''λ''<sub>0</sub>}} set to {{math|1, 2, 3, 4, 5,}} and {{math|6}}. The typical asymmetry for the KL divergence is clearly visible.]]
The [[Kullback–Leibler divergence]] (KL-divergence) of {{math|Gamma(''α''<sub>''p''</sub>, ''λ''<sub>''p''</sub>)}} ("true" distribution) from {{math|Gamma(''α''<sub>''q''</sub>, ''λ''<sub>''q''</sub>)}} ("approximating" distribution) is given by<ref>{{cite web|first=W. D.|last=Penny|url=https://www.fil.ion.ucl.ac.uk/~wpenny/publications/densities.ps|title=KL-Divergences of Normal, Gamma, Dirichlet, and Wishart densities}}</ref>
<math display=block> \begin{align} D_{\mathrm{KL}}(\alpha_p,\lambda_p; \alpha_q, \lambda_q) = {} & (\alpha_p-\alpha_q) \psi(\alpha_p) - \log\Gamma(\alpha_p) + \log\Gamma(\alpha_q) \\ & {} + \alpha_q(\log \lambda_p - \log \lambda_q) + \alpha_p\frac{\lambda_q-\lambda_p}{\lambda_p}. \end{align} </math>
Written using the {{mvar|α}}, {{mvar|θ}} parameterization, the KL-divergence of {{math|Gamma(''α''<sub>''p''</sub>, ''θ''<sub>''p''</sub>)}} from {{math|Gamma(''α''<sub>''q''</sub>, ''θ''<sub>''q''</sub>)}} is given by
<math display=block> \begin{align} D_{\mathrm{KL}}(\alpha_p,\theta_p; \alpha_q, \theta_q) = {} & (\alpha_p-\alpha_q)\psi(\alpha_p) - \log\Gamma(\alpha_p) + \log\Gamma(\alpha_q) \\ & {} + \alpha_q(\log \theta_q - \log \theta_p) + \alpha_p \frac{\theta_p - \theta_q}{\theta_q}. \end{align} </math>

===Laplace transform===
The [[Laplace transform]] of the gamma PDF is
<math display="block">F(s) = \operatorname E\left( e^{-sX} \right) = (1 + \theta s)^{-\alpha} = \left( \frac{\lambda}{\lambda + s} \right)^\alpha ,</math>
where <math display=inline>X</math> is a random variable with that distribution. Equivalently, the [[moment-generating function]] of the gamma distribution is <math display=inline>\operatorname E\left( e^{sX} \right) = (1 - \theta s)^{-\alpha} = \left( \frac{\lambda}{\lambda - s} \right)^{\alpha}</math> for <math display=inline>s < \lambda</math>.
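The closed-form KL-divergence in the shape–rate parameterization can be cross-checked by numerical integration; the sketch below uses SciPy with arbitrary parameter values:

<syntaxhighlight lang="python">
# Sketch (SciPy): closed-form KL divergence between two gamma distributions
# (shape–rate form) versus numerical integration of p(x) * log(p(x)/q(x)).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln, psi
from scipy.stats import gamma

def kl_gamma(ap, lp, aq, lq):
    """Closed-form D( Gamma(ap, rate lp) || Gamma(aq, rate lq) )."""
    return ((ap - aq) * psi(ap) - gammaln(ap) + gammaln(aq)
            + aq * (np.log(lp) - np.log(lq)) + ap * (lq - lp) / lp)

ap, lp, aq, lq = 3.0, 2.0, 2.0, 1.0          # arbitrary illustrative parameters
p = gamma(ap, scale=1/lp)                    # "true" distribution
q = gamma(aq, scale=1/lq)                    # "approximating" distribution

numeric, _ = quad(lambda x: p.pdf(x) * (p.logpdf(x) - q.logpdf(x)), 0, np.inf)
print("closed form:", kl_gamma(ap, lp, aq, lq), " numerical:", numeric)
</syntaxhighlight>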