===Other moments===

====Moment generating function====
It also follows<ref name=JKB /><ref name="Handbook of Beta Distribution" /> that the [[moment generating function]] is

:<math>\begin{align}
M_X(\alpha; \beta; t) &= \operatorname{E}\left[e^{tX}\right] \\[4pt]
&= \int_0^1 e^{tx} f(x;\alpha,\beta)\,dx \\[4pt]
&= {}_1F_1(\alpha; \alpha+\beta; t) \\[4pt]
&= \sum_{n=0}^\infty \frac{\alpha^{(n)}}{(\alpha+\beta)^{(n)}} \frac{t^n}{n!} \\[4pt]
&= 1 + \sum_{k=1}^\infty \left( \prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r} \right) \frac{t^k}{k!}.
\end{align}</math>

In particular ''M''<sub>''X''</sub>(''α''; ''β''; 0) = 1.

====Higher moments====
Using the [[moment generating function]], the ''k''-th [[raw moment]] is given by<ref name=JKB/> the factor

:<math>\prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r}</math>

multiplying the (exponential series) term <math>\left(\frac{t^k}{k!}\right)</math> in the series of the [[moment generating function]]:

:<math>\operatorname{E}[X^k]= \frac{\alpha^{(k)}}{(\alpha + \beta)^{(k)}} = \prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r},</math>

where (''x'')<sup>(''k'')</sup> is a [[Pochhammer symbol]] representing the rising factorial. The raw moments can also be written in recursive form as

:<math>\operatorname{E}[X^k] = \frac{\alpha + k - 1}{\alpha + \beta + k - 1}\operatorname{E}[X^{k - 1}].</math>

Since the moment generating function <math>M_X(\alpha; \beta; \cdot)</math> has a positive radius of convergence,{{cn|reason=proof that the radius of convergence is positive, not in Billingsley Section 30?|date=December 2024}} the beta distribution is [[Moment problem|determined by its moments]].<ref>{{cite book|last1=Billingsley|first1=Patrick|title=Probability and measure|date=1995|publisher=Wiley-Interscience|isbn=978-0-471-00710-4|edition=3rd|chapter=Section 30: The Method of Moments}}</ref>
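The product formula and its recursive form lend themselves to a direct numerical check. The following minimal sketch (not drawn from the cited sources; it assumes NumPy and SciPy are available, and the parameter values are arbitrary examples) compares both forms against SciPy's built-in beta moments:

<syntaxhighlight lang="python">
# Sketch: checking the raw-moment product formula and its recursion
# for X ~ Beta(alpha, beta). Parameter values are arbitrary examples.
import numpy as np
from scipy import stats

alpha, beta = 2.5, 3.5

def raw_moment(k, a, b):
    """E[X^k] as the product prod_{r=0}^{k-1} (a + r) / (a + b + r)."""
    r = np.arange(k)
    return np.prod((a + r) / (a + b + r))

m = 1.0  # E[X^0] = 1
for k in range(1, 5):
    # Recursive form: E[X^k] = (a + k - 1)/(a + b + k - 1) * E[X^(k-1)]
    m *= (alpha + k - 1) / (alpha + beta + k - 1)
    assert np.isclose(m, raw_moment(k, alpha, beta))
    assert np.isclose(m, stats.beta(alpha, beta).moment(k))
</syntaxhighlight>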
====Moments of transformed random variables====

=====Moments of linearly transformed, product and inverted random variables=====
One can also show the following expectations for a transformed random variable,<ref name=JKB/> where the random variable ''X'' is beta-distributed with parameters ''α'' and ''β'': ''X'' ~ Beta(''α'', ''β''). The expected value of the variable 1 − ''X'' follows from the mirror symmetry of the expected value based on ''X'':

:<math>\begin{align}
\operatorname{E}[1-X] &= \frac{\beta}{\alpha + \beta}, \\
\operatorname{E}[X(1-X)] &= \operatorname{E}[(1-X)X] = \frac{\alpha\beta}{(\alpha+\beta)(\alpha+\beta+1)}.
\end{align}</math>

Due to the mirror symmetry of the probability density function of the beta distribution, the variances based on the variables ''X'' and 1 − ''X'' are identical, and the covariance of ''X'' and 1 − ''X'' is the negative of the variance:

:<math>\operatorname{var}[(1-X)]=\operatorname{var}[X] = -\operatorname{cov}[X,(1-X)]= \frac{\alpha \beta}{(\alpha + \beta)^2(\alpha + \beta + 1)}.</math>

These are the expected values for the inverted variables (they are related to the harmonic means, see {{section link||Harmonic mean}}):

:<math>\begin{align}
\operatorname{E}\left[\frac{1}{X}\right] &= \frac{\alpha+\beta-1}{\alpha-1} && \text{ if } \alpha > 1, \\
\operatorname{E}\left[\frac{1}{1-X}\right] &= \frac{\alpha+\beta-1}{\beta-1} && \text{ if } \beta > 1.
\end{align}</math>

Dividing the variable ''X'' by its mirror image, giving ''X''/(1 − ''X''), results in the expected value of the "inverted beta distribution" or [[beta prime distribution]] (also known as the beta distribution of the second kind or [[Pearson distribution|Pearson's Type VI]]):<ref name=JKB/>

:<math>\begin{align}
\operatorname{E}\left[\frac{X}{1-X}\right] &= \frac{\alpha}{\beta-1} && \text{ if } \beta > 1, \\
\operatorname{E}\left[\frac{1-X}{X}\right] &= \frac{\beta}{\alpha-1} && \text{ if } \alpha > 1.
\end{align}</math>

Variances of these transformed variables can be obtained by integration, as the expected values of the second moments centered on the corresponding variables:

:<math>\operatorname{var}\left[\frac{1}{X}\right] = \operatorname{E}\left[\left(\frac{1}{X} - \operatorname{E}\left[\frac{1}{X}\right]\right)^2\right] = \operatorname{var}\left[\frac{1-X}{X}\right] = \operatorname{E}\left[\left(\frac{1-X}{X} - \operatorname{E}\left[\frac{1-X}{X}\right]\right)^2\right] = \frac{\beta(\alpha+\beta-1)}{(\alpha-2)(\alpha-1)^2} \text{ if } \alpha > 2.</math>

The variance of the variable ''X'' divided by its mirror image, ''X''/(1 − ''X''), is the variance of the "inverted beta distribution" or [[beta prime distribution]] (also known as the beta distribution of the second kind or [[Pearson distribution|Pearson's Type VI]]):<ref name=JKB/>

:<math>\operatorname{var}\left[\frac{1}{1-X}\right] = \operatorname{E}\left[\left(\frac{1}{1-X} - \operatorname{E}\left[\frac{1}{1-X}\right]\right)^2\right] = \operatorname{var}\left[\frac{X}{1-X}\right] = \operatorname{E}\left[\left(\frac{X}{1-X} - \operatorname{E}\left[\frac{X}{1-X}\right]\right)^2\right] = \frac{\alpha(\alpha+\beta-1)}{(\beta-2)(\beta-1)^2} \text{ if } \beta > 2.</math>

The covariances are:

:<math>\operatorname{cov}\left[\frac{1}{X},\frac{1}{1-X}\right] = \operatorname{cov}\left[\frac{1-X}{X},\frac{X}{1-X}\right] = \operatorname{cov}\left[\frac{1}{X},\frac{X}{1-X}\right] = \operatorname{cov}\left[\frac{1-X}{X},\frac{1}{1-X}\right] = \frac{\alpha+\beta-1}{(\alpha-1)(\beta-1)} \text{ if } \alpha, \beta > 1.</math>

These expectations and variances appear in the four-parameter Fisher information matrix (see {{section link||Fisher information}}).
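As an illustrative check of the inverted and ratio expectations above, the closed forms can be compared with direct numerical integration against the beta density. This is a minimal sketch assuming SciPy; the parameter values are arbitrary and chosen so that the stated existence conditions (''α'' > 1, ''β'' > 1) hold:

<syntaxhighlight lang="python">
# Sketch: numerical integration check of E[1/X] and E[X/(1-X)]
# for X ~ Beta(alpha, beta). Parameter values are arbitrary examples.
from scipy import integrate, stats

alpha, beta = 3.0, 4.0
pdf = stats.beta(alpha, beta).pdf

# E[1/X] = (alpha + beta - 1) / (alpha - 1), valid for alpha > 1
e_inv_x, _ = integrate.quad(lambda x: pdf(x) / x, 0, 1)
assert abs(e_inv_x - (alpha + beta - 1) / (alpha - 1)) < 1e-6

# E[X/(1-X)] = alpha / (beta - 1), valid for beta > 1
e_ratio, _ = integrate.quad(lambda x: x / (1 - x) * pdf(x), 0, 1)
assert abs(e_ratio - alpha / (beta - 1)) < 1e-6
</syntaxhighlight>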
=====Moments of logarithmically transformed random variables=====
[[File:Logit.svg|thumbnail|right|350px|Plot of logit(''X'') = ln(''X''/(1 − ''X'')) (vertical axis) vs. ''X'' in the domain of 0 to 1 (horizontal axis). Logit transformations are interesting, as they usually transform various shapes (including J-shapes) into (usually skewed) bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable.]]
Expected values for [[Logarithm transformation|logarithmic transformations]] (useful for [[maximum likelihood]] estimates, see {{section link||Parameter estimation, Maximum likelihood}}) are discussed in this section. The following logarithmic linear transformations are related to the geometric means ''G<sub>X</sub>'' and ''G''<sub>(1−''X'')</sub> (see {{section link||Geometric Mean}}):

:<math>\begin{align}
\operatorname{E}[\ln(X)] &= \psi(\alpha) - \psi(\alpha + \beta) = -\operatorname{E}\left[\ln\left(\frac{1}{X}\right)\right], \\
\operatorname{E}[\ln(1-X)] &= \psi(\beta) - \psi(\alpha + \beta) = -\operatorname{E}\left[\ln\left(\frac{1}{1-X}\right)\right],
\end{align}</math>

where the '''[[digamma function]]''' ''ψ''(''α'') is defined as the [[logarithmic derivative]] of the [[gamma function]]:<ref name=Abramowitz/>

:<math>\psi(\alpha) = \frac{d \ln\Gamma(\alpha)}{d\alpha}.</math>

[[Logit]] transformations are interesting,<ref name=MacKay>{{cite book|last=MacKay|first=David|title=Information Theory, Inference and Learning Algorithms|year=2003|publisher=Cambridge University Press; First Edition|isbn=978-0521642989|bibcode=2003itil.book.....M}}</ref> as they usually transform various shapes (including J-shapes) into (usually skewed) bell-shaped densities over the logit variable, and they may remove the end singularities over the original variable:

:<math>\begin{align}
\operatorname{E}\left[\ln\left(\frac{X}{1-X}\right)\right] &= \psi(\alpha) - \psi(\beta) = \operatorname{E}[\ln(X)] + \operatorname{E}\left[\ln\left(\frac{1}{1-X}\right)\right], \\
\operatorname{E}\left[\ln\left(\frac{1-X}{X}\right)\right] &= \psi(\beta) - \psi(\alpha) = -\operatorname{E}\left[\ln\left(\frac{X}{1-X}\right)\right].
\end{align}</math>

Johnson<ref name=JohnsonLogInv>{{cite journal|last=Johnson|first=N.L.|title=Systems of frequency curves generated by methods of translation|journal=Biometrika|year=1949|volume=36|issue=1–2|pages=149–176|doi=10.1093/biomet/36.1-2.149|pmid=18132090|hdl=10338.dmlcz/135506|url=http://dml.cz/bitstream/handle/10338.dmlcz/135506/Kybernetika_39-2003-1_3.pdf}}</ref> considered the distribution of the [[logit]]-transformed variable ln(''X''/(1 − ''X'')), including its moment generating function and approximations for large values of the shape parameters. This transformation extends the finite support [0, 1] based on the original variable ''X'' to infinite support in both directions of the real line (−∞, +∞). The logit of a beta variate has the [[logistic-beta distribution]].
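The logit expectation ''ψ''(''α'') − ''ψ''(''β'') can likewise be checked numerically. A minimal sketch, again assuming SciPy and using arbitrary example parameters:

<syntaxhighlight lang="python">
# Sketch: checking E[ln(X/(1-X))] = psi(alpha) - psi(beta)
# for X ~ Beta(alpha, beta). Parameter values are arbitrary examples.
import numpy as np
from scipy import integrate, stats
from scipy.special import digamma

alpha, beta = 2.0, 5.0
pdf = stats.beta(alpha, beta).pdf

# Integrate ln(x/(1-x)) against the beta density over (0, 1).
e_logit, _ = integrate.quad(lambda x: np.log(x / (1 - x)) * pdf(x), 0, 1)
assert abs(e_logit - (digamma(alpha) - digamma(beta))) < 1e-6
</syntaxhighlight>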
Higher order logarithmic moments can be derived by using the representation of a beta distribution as a proportion of two gamma distributions and differentiating through the integral. They can be expressed in terms of higher order polygamma functions as follows:

:<math>\begin{align}
\operatorname{E}\left[\ln^2(X)\right] &= (\psi(\alpha) - \psi(\alpha + \beta))^2 + \psi_1(\alpha) - \psi_1(\alpha+\beta), \\
\operatorname{E}\left[\ln^2(1-X)\right] &= (\psi(\beta) - \psi(\alpha + \beta))^2 + \psi_1(\beta) - \psi_1(\alpha+\beta), \\
\operatorname{E}\left[\ln(X)\ln(1-X)\right] &= (\psi(\alpha) - \psi(\alpha + \beta))(\psi(\beta) - \psi(\alpha + \beta)) - \psi_1(\alpha+\beta).
\end{align}</math>

Therefore, the [[variance]] of the logarithmic variables and the [[covariance]] of ln(''X'') and ln(1 − ''X'') are:

:<math>\begin{align}
\operatorname{cov}[\ln(X), \ln(1-X)] &= \operatorname{E}[\ln(X)\ln(1-X)] - \operatorname{E}[\ln(X)]\operatorname{E}[\ln(1-X)] = -\psi_1(\alpha+\beta), \\
& \\
\operatorname{var}[\ln X] &= \operatorname{E}[\ln^2(X)] - (\operatorname{E}[\ln(X)])^2 \\
&= \psi_1(\alpha) - \psi_1(\alpha + \beta) \\
&= \psi_1(\alpha) + \operatorname{cov}[\ln(X), \ln(1-X)], \\
& \\
\operatorname{var}[\ln(1-X)] &= \operatorname{E}[\ln^2(1-X)] - (\operatorname{E}[\ln(1-X)])^2 \\
&= \psi_1(\beta) - \psi_1(\alpha + \beta) \\
&= \psi_1(\beta) + \operatorname{cov}[\ln(X), \ln(1-X)],
\end{align}</math>

where the '''[[trigamma function]]''', denoted ''ψ''<sub>1</sub>(''α''), is the second of the [[polygamma function]]s, and is defined as the derivative of the [[digamma]] function:

:<math>\psi_1(\alpha) = \frac{d^2\ln\Gamma(\alpha)}{d\alpha^2} = \frac{d\psi(\alpha)}{d\alpha}.</math>

The variances and covariance of the logarithmically transformed variables ''X'' and (1 − ''X'') are different, in general, because the logarithmic transformation destroys the mirror symmetry of the original variables ''X'' and (1 − ''X''): the logarithm approaches negative infinity as the variable approaches zero. These logarithmic variances and covariance are the elements of the [[Fisher information]] matrix for the beta distribution. They are also a measure of the curvature of the log likelihood function (see the section on maximum likelihood estimation). The variances of the log inverse variables are identical to the variances of the log variables:

:<math>\begin{align}
\operatorname{var}\left[\ln\left(\frac{1}{X}\right)\right] &= \operatorname{var}[\ln(X)] = \psi_1(\alpha) - \psi_1(\alpha + \beta), \\
\operatorname{var}\left[\ln\left(\frac{1}{1-X}\right)\right] &= \operatorname{var}[\ln(1-X)] = \psi_1(\beta) - \psi_1(\alpha + \beta), \\
\operatorname{cov}\left[\ln\left(\frac{1}{X}\right), \ln\left(\frac{1}{1-X}\right)\right] &= \operatorname{cov}[\ln(X),\ln(1-X)] = -\psi_1(\alpha + \beta).
\end{align}</math>

It also follows that the variances of the [[logit]]-transformed variables are

:<math>\operatorname{var}\left[\ln\left(\frac{X}{1-X}\right)\right] = \operatorname{var}\left[\ln\left(\frac{1-X}{X}\right)\right] = -\operatorname{cov}\left[\ln\left(\frac{X}{1-X}\right), \ln\left(\frac{1-X}{X}\right)\right] = \psi_1(\alpha) + \psi_1(\beta).</math>
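The trigamma expressions for the log and logit variances can be illustrated by simulation. The following is a minimal Monte Carlo sketch (assuming NumPy and SciPy; the seed, sample size, and tolerance are arbitrary choices, so the agreement is only approximate):

<syntaxhighlight lang="python">
# Sketch: Monte Carlo check of the log and logit variances against
# the trigamma expressions, for X ~ Beta(alpha, beta).
import numpy as np
from scipy.special import polygamma

rng = np.random.default_rng(0)
alpha, beta = 2.0, 3.0
x = rng.beta(alpha, beta, size=2_000_000)

trigamma = lambda z: polygamma(1, z)  # psi_1

# var[ln X] = psi_1(alpha) - psi_1(alpha + beta)
assert np.isclose(np.var(np.log(x)),
                  trigamma(alpha) - trigamma(alpha + beta), atol=1e-2)

# var[ln(X/(1-X))] = psi_1(alpha) + psi_1(beta)
assert np.isclose(np.var(np.log(x / (1 - x))),
                  trigamma(alpha) + trigamma(beta), atol=1e-2)
</syntaxhighlight>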