===Parameter estimation===

====Method of moments====

=====Two unknown parameters=====
Two unknown parameters (<math>\hat{\alpha}, \hat{\beta}</math> of a beta distribution supported in the [0,1] interval) can be estimated, using the method of moments, with the first two moments (sample mean and sample variance) as follows. Let:
: <math>\text{sample mean(X)}=\bar{x} = \frac{1}{N}\sum_{i=1}^N X_i</math>
be the [[sample mean]] estimate and
: <math>\text{sample variance(X)} =\bar{v} = \frac{1}{N-1}\sum_{i=1}^N (X_i - \bar{x})^2</math>
be the [[sample variance]] estimate. The [[method of moments (statistics)|method-of-moments]] estimates of the parameters are
:<math>\hat{\alpha} = \bar{x} \left(\frac{\bar{x} (1 - \bar{x})}{\bar{v}} - 1 \right),</math> if <math>\bar{v} <\bar{x}(1 - \bar{x}),</math>
: <math>\hat{\beta} = (1-\bar{x}) \left(\frac{\bar{x} (1 - \bar{x})}{\bar{v}} - 1 \right),</math> if <math>\bar{v}<\bar{x}(1 - \bar{x}).</math>
When the distribution is required over a known interval other than [0, 1] with random variable ''X'', say [''a'', ''c''] with random variable ''Y'', then replace <math>\bar{x}</math> with <math>\frac{\bar{y}-a}{c-a},</math> and <math>\bar{v}</math> with <math>\frac{\bar{v_Y}}{(c-a)^2}</math> in the above couple of equations for the shape parameters (see the "Four unknown parameters" section below),<ref>{{Cite web|url=https://www.itl.nist.gov/div898/handbook/eda/section3/eda366h.htm|title=1.3.6.6.17. Beta Distribution|website=www.itl.nist.gov}}</ref> where:
: <math>\text{sample mean(Y)}=\bar{y} = \frac{1}{N}\sum_{i=1}^N Y_i</math>
: <math>\text{sample variance(Y)} = \bar{v_Y} = \frac{1}{N-1}\sum_{i=1}^N (Y_i - \bar{y})^2</math>

=====Four unknown parameters=====
[[File:(alpha and beta) Parameter estimates vs. excess Kurtosis and (squared) Skewness Beta distribution - J. Rodal.png|thumb|Solutions for parameter estimates vs.
(sample) excess Kurtosis and (sample) squared Skewness Beta distribution]] All four parameters (<math>\hat{\alpha}, \hat{\beta}, \hat{a}, \hat{c}</math> of a beta distribution supported in the [''a'', ''c''] interval, see section [[Beta distribution#Four parameters|"Alternative parametrizations, Four parameters"]]) can be estimated, using the method of moments developed by [[Karl Pearson]], by equating sample and population values of the first four central moments (mean, variance, skewness and excess kurtosis).<ref name=JKB/><ref name=Elderton1906/><ref name="Elderton and Johnson">{{cite book|last=Elderton|first=William Palin and Norman Lloyd Johnson|title=Systems of Frequency Curves|year=2009|publisher=Cambridge University Press|isbn=978-0521093361}}</ref> The excess kurtosis was expressed in terms of the square of the skewness, and the sample size ν = α + β, (see previous section [[Beta distribution#Kurtosis|"Kurtosis"]]) as follows: :<math>\text{excess kurtosis} =\frac{6}{3 + \nu}\left(\frac{(2 + \nu)}{4} (\text{skewness})^2 - 1\right)\text{ if (skewness)}^2-2< \text{excess kurtosis}< \tfrac{3}{2} (\text{skewness})^2</math> One can use this equation to solve for the sample size ν= α + β in terms of the square of the skewness and the excess kurtosis as follows:<ref name=Elderton1906/> :<math>\hat{\nu} = \hat{\alpha} + \hat{\beta} = 3\frac{(\text{sample excess kurtosis}) - (\text{sample skewness})^2+2}{\frac{3}{2} (\text{sample skewness})^2 - \text{(sample excess kurtosis)}}</math> :<math>\text{ if (sample skewness)}^2-2< \text{sample excess kurtosis}< \tfrac{3}{2} (\text{sample skewness})^2</math> This is the ratio (multiplied by a factor of 3) between the previously derived limit boundaries for the beta distribution in a space (as originally done by Karl Pearson<ref name=Pearson />) defined with coordinates of the square of the skewness in one axis and the excess kurtosis in the other axis (see {{section link||Kurtosis bounded by the square of the skewness}}): The case of zero skewness, can be immediately solved because for zero skewness, α = β and hence ν = 2α = 2β, therefore α = β = ν/2 : <math>\hat{\alpha} = \hat{\beta} = \frac{\hat{\nu}}{2}= \frac{\frac{3}{2}(\text{sample excess kurtosis}) +3}{- \text{(sample excess kurtosis)}}</math> : <math> \text{ if sample skewness}= 0 \text{ and } -2<\text{sample excess kurtosis}<0</math> (Excess kurtosis is negative for the beta distribution with zero skewness, ranging from -2 to 0, so that <math>\hat{\nu}</math> -and therefore the sample shape parameters- is positive, ranging from zero when the shape parameters approach zero and the excess kurtosis approaches -2, to infinity when the shape parameters approach infinity and the excess kurtosis approaches zero). For non-zero sample skewness one needs to solve a system of two coupled equations. 
Since the skewness and the excess kurtosis are independent of the parameters <math>\hat{a}, \hat{c}</math>, the parameters <math>\hat{\alpha}, \hat{\beta}</math> can be uniquely determined from the sample skewness and the sample excess kurtosis, by solving the coupled equations with two known variables (sample skewness and sample excess kurtosis) and two unknowns (the shape parameters): :<math>(\text{sample skewness})^2 = \frac{4(\hat{\beta}-\hat{\alpha})^2 (1 + \hat{\alpha} + \hat{\beta})}{\hat{\alpha} \hat{\beta} (2 + \hat{\alpha} + \hat{\beta})^2}</math> :<math>\text{sample excess kurtosis} =\frac{6}{3 + \hat{\alpha} + \hat{\beta}}\left(\frac{(2 + \hat{\alpha} + \hat{\beta})}{4} (\text{sample skewness})^2 - 1\right)</math> :<math>\text{ if (sample skewness)}^2-2< \text{sample excess kurtosis}< \tfrac{3}{2}(\text{sample skewness})^2</math> resulting in the following solution:<ref name=Elderton1906/> : <math>\hat{\alpha}, \hat{\beta} = \frac{\hat{\nu}}{2} \left (1 \pm \frac{1}{ \sqrt{1+ \frac{16 (\hat{\nu} + 1)}{(\hat{\nu} + 2)^2(\text{sample skewness})^2}}} \right )</math> : <math>\text{ if sample skewness}\neq 0 \text{ and } (\text{sample skewness})^2-2< \text{sample excess kurtosis}< \tfrac{3}{2} (\text{sample skewness})^2</math> Where one should take the solutions as follows: <math>\hat{\alpha}>\hat{\beta}</math> for (negative) sample skewness < 0, and <math>\hat{\alpha}<\hat{\beta}</math> for (positive) sample skewness > 0. The accompanying plot shows these two solutions as surfaces in a space with horizontal axes of (sample excess kurtosis) and (sample squared skewness) and the shape parameters as the vertical axis. The surfaces are constrained by the condition that the sample excess kurtosis must be bounded by the sample squared skewness as stipulated in the above equation. The two surfaces meet at the right edge defined by zero skewness. Along this right edge, both parameters are equal and the distribution is symmetric U-shaped for α = β < 1, uniform for α = β = 1, upside-down-U-shaped for 1 < α = β < 2 and bell-shaped for α = β > 2. The surfaces also meet at the front (lower) edge defined by "the impossible boundary" line (excess kurtosis + 2 - skewness<sup>2</sup> = 0). Along this front (lower) boundary both shape parameters approach zero, and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities <math>p=\tfrac{\beta}{\alpha + \beta}</math> at the left end ''x'' = 0 and <math>q = 1-p = \tfrac{\alpha}{\alpha + \beta} </math> at the right end ''x'' = 1. The two surfaces become further apart towards the rear edge. At this rear edge the surface parameters are quite different from each other. As remarked, for example, by Bowman and Shenton,<ref name="BowmanShenton">{{cite journal|last=Bowman|first=K. O.|author1-link=Kimiko O. Bowman|author2=Shenton, L. R.|title=The beta distribution, moment method, Karl Pearson and R.A. Fisher|journal=Far East J. Theo. Stat.|year=2007|volume=23|issue=2|pages=133–164| url=http://www.csm.ornl.gov/~bowman/fjts232.pdf }}</ref> sampling in the neighborhood of the line (sample excess kurtosis - (3/2)(sample skewness)<sup>2</sup> = 0) (the just-J-shaped portion of the rear edge where blue meets beige), "is dangerously near to chaos", because at that line the denominator of the expression above for the estimate ν = α + β becomes zero and hence ν approaches infinity as that line is approached. 
Bowman and Shenton <ref name="BowmanShenton" /> write that "the higher moment parameters (kurtosis and skewness) are extremely fragile (near that line). However, the mean and standard deviation are fairly reliable." Therefore, the problem is for the case of four parameter estimation for very skewed distributions such that the excess kurtosis approaches (3/2) times the square of the skewness. This boundary line is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. See {{section link||Kurtosis bounded by the square of the skewness}} for a numerical example and further comments about this rear edge boundary line (sample excess kurtosis - (3/2)(sample skewness)<sup>2</sup> = 0). As remarked by Karl Pearson himself <ref name=Pearson1936/> this issue may not be of much practical importance as this trouble arises only for very skewed J-shaped (or mirror-image J-shaped) distributions with very different values of shape parameters that are unlikely to occur much in practice). The usual skewed-bell-shape distributions that occur in practice do not have this parameter estimation problem. The remaining two parameters <math>\hat{a}, \hat{c}</math> can be determined using the sample mean and the sample variance using a variety of equations.<ref name="JKB"/><ref name=Elderton1906/> One alternative is to calculate the support interval range <math>(\hat{c}-\hat{a})</math> based on the sample variance and the sample kurtosis. For this purpose one can solve, in terms of the range <math>(\hat{c}- \hat{a})</math>, the equation expressing the excess kurtosis in terms of the sample variance, and the sample size ν (see {{section link||Kurtosis }} and {{section link||Alternative parametrizations, four parameters}}): :<math>\text{sample excess kurtosis} =\frac{6}{(3 + \hat{\nu})(2 + \hat{\nu})}\bigg(\frac{(\hat{c}- \hat{a})^2}{\text{(sample variance)}} - 6 - 5 \hat{\nu} \bigg)</math> to obtain: :<math> (\hat{c}- \hat{a}) = \sqrt{\text{(sample variance)}}\sqrt{6+5\hat{\nu}+\frac{(2+\hat{\nu})(3+\hat{\nu})}{6}\text{(sample excess kurtosis)}}</math> Another alternative is to calculate the support interval range <math>(\hat{c}-\hat{a})</math> based on the sample variance and the sample skewness.<ref name=Elderton1906/> For this purpose one can solve, in terms of the range <math>(\hat{c}-\hat{a})</math>, the equation expressing the squared skewness in terms of the sample variance, and the sample size ν (see section titled "Skewness" and "Alternative parametrizations, four parameters"): :<math>(\text{sample skewness})^2 = \frac{4}{(2+\hat{\nu})^2}\bigg(\frac{(\hat{c}- \hat{a})^2}{ \text{(sample variance)}}-4(1+\hat{\nu})\bigg)</math> to obtain:<ref name=Elderton1906/> :<math> (\hat{c}- \hat{a}) = \frac{\sqrt{\text{(sample variance)}}}{2}\sqrt{(2+\hat{\nu})^2(\text{sample skewness})^2+16(1+\hat{\nu})}</math> The remaining parameter can be determined from the sample mean and the previously obtained parameters: <math>(\hat{c}-\hat{a}), \hat{\alpha}, \hat{\nu} = \hat{\alpha}+\hat{\beta}</math>: :<math> \hat{a} = (\text{sample mean}) - \left(\frac{\hat{\alpha}}{\hat{\nu}}\right)(\hat{c}-\hat{a}) </math> and finally, <math>\hat{c}= (\hat{c}- \hat{a}) + \hat{a} </math>. 
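For illustration, the moment estimators above can be computed directly from data. The following Python sketch (NumPy assumed; the function names are illustrative, not library routines) implements the two-parameter estimates from the sample mean and variance, and the four-parameter estimates from the sample skewness and excess kurtosis, using the bias-corrected estimators ''G''<sub>1</sub> and ''G''<sub>2</sub> defined in the next paragraph.
<syntaxhighlight lang="python">
import numpy as np

def beta_mom_two_param(x):
    """Method-of-moments estimates of (alpha, beta) for data on [0, 1]."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var(ddof=1)
    if v >= m * (1.0 - m):
        raise ValueError("requires sample variance < mean*(1 - mean)")
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

def beta_mom_four_param(y):
    """Method-of-moments estimates of (alpha, beta, a, c) from the first
    four sample moments, following the equations above."""
    y = np.asarray(y, dtype=float)
    n, m, v = len(y), y.mean(), y.var(ddof=1)
    # Bias-corrected sample skewness G1 and sample excess kurtosis G2
    g1 = n / ((n - 1.0) * (n - 2.0)) * np.sum((y - m) ** 3) / v ** 1.5
    g2 = (n * (n + 1.0) / ((n - 1.0) * (n - 2.0) * (n - 3.0))
          * np.sum((y - m) ** 4) / v ** 2
          - 3.0 * (n - 1.0) ** 2 / ((n - 2.0) * (n - 3.0)))
    if not (g1 ** 2 - 2.0 < g2 < 1.5 * g1 ** 2):
        raise ValueError("(skewness, kurtosis) outside the admissible region")
    nu = 3.0 * (g2 - g1 ** 2 + 2.0) / (1.5 * g1 ** 2 - g2)   # nu = alpha + beta
    if g1 == 0.0:
        alpha = beta = nu / 2.0
    else:
        larger = (nu / 2.0) * (1.0 + 1.0 / np.sqrt(
            1.0 + 16.0 * (nu + 1.0) / ((nu + 2.0) ** 2 * g1 ** 2)))
        smaller = nu - larger                  # the two roots sum to nu
        alpha, beta = (larger, smaller) if g1 < 0 else (smaller, larger)
    # Support range from the variance and kurtosis, then the endpoints
    rng = np.sqrt(v) * np.sqrt(6.0 + 5.0 * nu + (2.0 + nu) * (3.0 + nu) / 6.0 * g2)
    a_hat = m - (alpha / nu) * rng
    return alpha, beta, a_hat, a_hat + rng
</syntaxhighlight>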
In the above formulas one may take, for example, as estimates of the sample moments: :<math>\begin{align} \text{sample mean} &=\overline{y} = \frac{1}{N}\sum_{i=1}^N Y_i \\ \text{sample variance} &= \overline{v}_Y = \frac{1}{N-1}\sum_{i=1}^N (Y_i - \overline{y})^2 \\ \text{sample skewness} &= G_1 = \frac{N}{(N-1)(N-2)} \frac{\sum_{i=1}^N (Y_i-\overline{y})^3}{\overline{v}_Y^{\frac{3}{2}} } \\ \text{sample excess kurtosis} &= G_2 = \frac{N(N+1)}{(N-1)(N-2)(N-3)} \frac{\sum_{i=1}^N (Y_i - \overline{y})^4}{\overline{v}_Y^2} - \frac{3(N-1)^2}{(N-2)(N-3)} \end{align}</math> The estimators ''G''<sub>1</sub> for [[skewness|sample skewness]] and ''G''<sub>2</sub> for [[kurtosis|sample kurtosis]] are used by [[DAP (software)|DAP]]/[[SAS System|SAS]], [[PSPP]]/[[SPSS]], and [[Microsoft Excel|Excel]]. However, they are not used by [[BMDP]] and (according to <ref name="Joanes and Gill"/>) they were not used by [[MINITAB]] in 1998. Actually, Joanes and Gill in their 1998 study<ref name="Joanes and Gill">{{cite journal|last=Joanes|first=D. N.|author2=C. A. Gill|title=Comparing measures of sample skewness and kurtosis|journal=The Statistician|year=1998|volume=47|issue=Part 1|pages=183–189|doi=10.1111/1467-9884.00122}}</ref> concluded that the skewness and kurtosis estimators used in [[BMDP]] and in [[MINITAB]] (at that time) had smaller variance and mean-squared error in normal samples, but the skewness and kurtosis estimators used in [[DAP (software)|DAP]]/[[SAS System|SAS]], [[PSPP]]/[[SPSS]], namely ''G''<sub>1</sub> and ''G''<sub>2</sub>, had smaller mean-squared error in samples from a very skewed distribution. It is for this reason that we have spelled out "sample skewness", etc., in the above formulas, to make it explicit that the user should choose the best estimator according to the problem at hand, as the best estimator for skewness and kurtosis depends on the amount of skewness (as shown by Joanes and Gill<ref name="Joanes and Gill"/>). ====Maximum likelihood==== =====Two unknown parameters===== [[File:Max (Joint Log Likelihood per N) for Beta distribution Maxima at alpha=beta=2 - J. Rodal.png|thumb|Max (joint log likelihood/''N'') for beta distribution maxima at ''α'' = ''β'' = 2]] [[File:Max (Joint Log Likelihood per N) for Beta distribution Maxima at alpha=beta= 0.25,0.5,1,2,4,6,8 - J. Rodal.png|thumb|Max (joint log likelihood/''N'') for Beta distribution maxima at ''α'' = ''β'' ∈ {0.25,0.5,1,2,4,6,8}]] As is also the case for [[maximum likelihood]] estimates for the [[gamma distribution]], the maximum likelihood estimates for the beta distribution do not have a general closed form solution for arbitrary values of the shape parameters. 
If ''X''<sub>1</sub>, ..., ''X<sub>N</sub>'' are independent random variables each having a beta distribution, the joint log likelihood function for ''N'' [[independent and identically distributed random variables|iid]] observations is: :<math>\begin{align} \ln\, \mathcal{L} (\alpha, \beta\mid X) &= \sum_{i=1}^N \ln \left (\mathcal{L}_i (\alpha, \beta\mid X_i) \right )\\ &= \sum_{i=1}^N \ln \left (f(X_i;\alpha,\beta) \right ) \\ &= \sum_{i=1}^N \ln \left (\frac{X_i^{\alpha-1}(1-X_i)^{\beta-1}}{\Beta(\alpha,\beta)} \right ) \\ &= (\alpha - 1)\sum_{i=1}^N \ln (X_i) + (\beta- 1)\sum_{i=1}^N \ln (1-X_i) - N \ln \Beta(\alpha,\beta) \end{align}</math> Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero yielding the [[maximum likelihood]] estimator of the shape parameters: :<math>\frac{\partial \ln \mathcal{L}(\alpha,\beta\mid X)}{\partial \alpha} = \sum_{i=1}^N \ln X_i -N\frac{\partial \ln \Beta(\alpha,\beta)}{\partial \alpha}=0</math> :<math>\frac{\partial \ln \mathcal{L}(\alpha,\beta\mid X)}{\partial \beta} = \sum_{i=1}^N \ln (1-X_i)- N\frac{\partial \ln \mathrm{B}(\alpha,\beta)}{\partial \beta}=0</math> where: :<math>\frac{\partial \ln \Beta(\alpha,\beta)}{\partial \alpha} = -\frac{\partial \ln \Gamma(\alpha+\beta)}{\partial \alpha}+ \frac{\partial \ln \Gamma(\alpha)}{\partial \alpha}+ \frac{\partial \ln \Gamma(\beta)}{\partial \alpha}=-\psi(\alpha + \beta) + \psi(\alpha) + 0</math> :<math>\frac{\partial \ln \Beta(\alpha,\beta)}{\partial \beta}= - \frac{\partial \ln \Gamma(\alpha+\beta)}{\partial \beta}+ \frac{\partial \ln \Gamma(\alpha)}{\partial \beta} + \frac{\partial \ln \Gamma(\beta)}{\partial \beta}=-\psi(\alpha + \beta) + 0 + \psi(\beta)</math> since the '''[[digamma function]]''' denoted ψ(α) is defined as the [[logarithmic derivative]] of the [[gamma function]]:<ref name=Abramowitz/> :<math>\psi(\alpha) =\frac {\partial\ln \Gamma(\alpha)}{\partial \alpha}</math> To ensure that the values with zero tangent slope are indeed a maximum (instead of a saddle-point or a minimum) one has to also satisfy the condition that the curvature is negative. 
This amounts to satisfying that the second partial derivative with respect to the shape parameters is negative :<math>\frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{\partial \alpha^2}= -N\frac{\partial^2\ln \Beta(\alpha,\beta)}{\partial \alpha^2}<0</math> :<math>\frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{\partial \beta^2} = -N\frac{\partial^2\ln \Beta(\alpha,\beta)}{\partial \beta^2}<0</math> using the previous equations, this is equivalent to: :<math>\frac{\partial^2\ln \Beta(\alpha,\beta)}{\partial \alpha^2} = \psi_1(\alpha)-\psi_1(\alpha + \beta) > 0</math> :<math>\frac{\partial^2\ln \Beta(\alpha,\beta)}{\partial \beta^2} = \psi_1(\beta) -\psi_1(\alpha + \beta) > 0</math> where the '''[[trigamma function]]''', denoted ''ψ''<sub>1</sub>(''α''), is the second of the [[polygamma function]]s, and is defined as the derivative of the [[digamma]] function: :<math>\psi_1(\alpha) = \frac{\partial^2\ln\Gamma(\alpha)}{\partial \alpha^2}=\, \frac{\partial\, \psi(\alpha)}{\partial \alpha}.</math> These conditions are equivalent to stating that the variances of the logarithmically transformed variables are positive, since: :<math>\operatorname{var}[\ln (X)] = \operatorname{E}[\ln^2 (X)] - (\operatorname{E}[\ln (X)])^2 = \psi_1(\alpha) - \psi_1(\alpha + \beta) </math> :<math>\operatorname{var}[\ln (1-X)] = \operatorname{E}[\ln^2 (1-X)] - (\operatorname{E}[\ln (1-X)])^2 = \psi_1(\beta) - \psi_1(\alpha + \beta) </math> Therefore, the condition of negative curvature at a maximum is equivalent to the statements: : <math> \operatorname{var}[\ln (X)] > 0</math> : <math> \operatorname{var}[\ln (1-X)] > 0</math> Alternatively, the condition of negative curvature at a maximum is also equivalent to stating that the following [[logarithmic derivative]]s of the [[geometric mean]]s ''G<sub>X</sub>'' and ''G<sub>(1−X)</sub>'' are positive, since: : <math>\psi_1(\alpha) - \psi_1(\alpha + \beta) = \frac{\partial \ln G_X}{\partial \alpha} > 0</math> : <math>\psi_1(\beta) - \psi_1(\alpha + \beta) = \frac{\partial \ln G_{(1-X)}}{\partial \beta} > 0</math> While these slopes are indeed positive, the other slopes are negative: :<math>\frac{\partial\, \ln G_X}{\partial \beta}, \frac{\partial \ln G_{(1-X)}}{\partial \alpha} < 0.</math> The slopes of the mean and the median with respect to ''α'' and ''β'' display similar sign behavior. From the condition that at a maximum, the partial derivative with respect to the shape parameter equals zero, we obtain the following system of coupled [[maximum likelihood estimate]] equations (for the average log-likelihoods) that needs to be inverted to obtain the (unknown) shape parameter estimates <math>\hat{\alpha},\hat{\beta}</math> in terms of the (known) average of logarithms of the samples ''X''<sub>1</sub>, ..., ''X<sub>N</sub>'':<ref name=JKB /> :<math>\begin{align} \hat{\operatorname{E}}[\ln (X)] &= \psi(\hat{\alpha}) - \psi(\hat{\alpha} + \hat{\beta})=\frac{1}{N}\sum_{i=1}^N \ln X_i = \ln \hat{G}_X \\ \hat{\operatorname{E}}[\ln(1-X)] &= \psi(\hat{\beta}) - \psi(\hat{\alpha} + \hat{\beta})=\frac{1}{N}\sum_{i=1}^N \ln (1-X_i)= \ln \hat{G}_{(1-X)} \end{align}</math> where we recognize <math>\log \hat{G}_X</math> as the logarithm of the sample [[geometric mean]] and <math>\log \hat{G}_{(1-X)}</math> as the logarithm of the sample [[geometric mean]] based on (1 − ''X''), the mirror-image of ''X''. For <math>\hat{\alpha}=\hat{\beta}</math>, it follows that <math>\hat{G}_X=\hat{G}_{(1-X)} </math>. 
:<math>\begin{align} \hat{G}_X &= \prod_{i=1}^N (X_i)^{1/N} \\ \hat{G}_{(1-X)} &= \prod_{i=1}^N (1-X_i)^{1/N} \end{align}</math> These coupled equations containing [[digamma function]]s of the shape parameter estimates <math>\hat{\alpha},\hat{\beta}</math> must be solved by numerical methods as done, for example, by Beckman et al.<ref>{{cite journal|last=Beckman|first=R. J.|author2=G. L. Tietjen|title=Maximum likelihood estimation for the beta distribution|journal=Journal of Statistical Computation and Simulation|year=1978|volume=7|issue=3–4|pages=253–258|doi=10.1080/00949657808810232}}</ref> Gnanadesikan et al. give numerical solutions for a few cases.<ref>{{cite journal |last=Gnanadesikan |first=R., Pinkham and Hughes|title=Maximum likelihood estimation of the parameters of the beta distribution from smallest order statistics |journal=Technometrics |year=1967|volume=9|issue=4|pages=607–620 |doi=10.2307/1266199|jstor=1266199}}</ref> [[Norman Lloyd Johnson|N.L.Johnson]] and [[Samuel Kotz|S.Kotz]]<ref name=JKB /> suggest that for "not too small" shape parameter estimates <math>\hat{\alpha},\hat{\beta}</math>, the logarithmic approximation to the digamma function <math>\psi(\hat{\alpha}) \approx \ln(\hat{\alpha}-\tfrac{1}{2})</math> may be used to obtain initial values for an iterative solution, since the equations resulting from this approximation can be solved exactly: :<math>\ln \frac{\hat{\alpha} - \frac{1}{2}}{\hat{\alpha} + \hat{\beta} - \frac{1}{2}} \approx \ln \hat{G}_X </math> :<math>\ln \frac{\hat{\beta} - \frac{1}{2}}{\hat{\alpha} + \hat{\beta} - \frac{1}{2}}\approx \ln \hat{G}_{(1-X)} </math> which leads to the following solution for the initial values (of the estimate shape parameters in terms of the sample geometric means) for an iterative solution: :<math>\hat{\alpha}\approx \tfrac{1}{2} + \frac{\hat{G}_{X}}{2(1-\hat{G}_X-\hat{G}_{(1-X)})} \text{ if } \hat{\alpha} >1</math> :<math>\hat{\beta}\approx \tfrac{1}{2} + \frac{\hat{G}_{(1-X)}}{2(1-\hat{G}_X-\hat{G}_{(1-X)})} \text{ if } \hat{\beta} > 1</math> Alternatively, the estimates provided by the method of moments can instead be used as initial values for an iterative solution of the maximum likelihood coupled equations in terms of the digamma functions. When the distribution is required over a known interval other than [0, 1] with random variable ''X'', say [''a'', ''c''] with random variable ''Y'', then replace ln(''X<sub>i</sub>'') in the first equation with :<math>\ln \frac{Y_i-a}{c-a}</math>, and replace ln(1−''X<sub>i</sub>'') in the second equation with :<math>\ln \frac{c-Y_i}{c-a}</math> (see "Alternative parametrizations, four parameters" section below). If one of the shape parameters is known, the problem is considerably simplified. The following [[logit]] transformation can be used to solve for the unknown shape parameter (for skewed cases such that <math>\hat{\alpha}\neq\hat{\beta}</math>, otherwise, if symmetric, both -equal- parameters are known when one is known): :<math>\hat{\operatorname{E}} \left[\ln \left(\frac{X}{1-X} \right) \right]=\psi(\hat{\alpha}) - \psi(\hat{\beta})=\frac{1}{N}\sum_{i=1}^N \ln\frac{X_i}{1-X_i} = \ln \hat{G}_X - \ln \left(\hat{G}_{(1-X)}\right) </math> This [[logit]] transformation is the logarithm of the transformation that divides the variable ''X'' by its mirror-image (''X''/(1 - ''X'') resulting in the "inverted beta distribution" or [[beta prime distribution]] (also known as beta distribution of the second kind or [[Pearson distribution|Pearson's Type VI]]) with support [0, +∞). 
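As a concrete illustration of the two-parameter case, the coupled digamma equations can be solved numerically with a general-purpose root finder, starting from the Johnson and Kotz approximation above (or from the method-of-moments estimates). The sketch below assumes NumPy/SciPy; <code>beta_mle</code> is an illustrative name, not a library routine.
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import digamma
from scipy.optimize import fsolve

def beta_mle(x):
    """Maximum likelihood estimates of (alpha, beta) for samples in (0, 1),
    obtained by solving the coupled digamma equations in terms of the two
    sample log geometric means."""
    x = np.asarray(x, dtype=float)
    log_gx = np.mean(np.log(x))          # ln G_X
    log_g1mx = np.mean(np.log1p(-x))     # ln G_(1-X)
    # Initial values from the logarithmic approximation psi(z) ~ ln(z - 1/2),
    # intended for shape parameters that are "not too small".
    gx, g1mx = np.exp(log_gx), np.exp(log_g1mx)
    denom = 2.0 * (1.0 - gx - g1mx)
    start = [0.5 + gx / denom, 0.5 + g1mx / denom]
    def equations(p):
        a, b = p
        return [digamma(a) - digamma(a + b) - log_gx,
                digamma(b) - digamma(a + b) - log_g1mx]
    return fsolve(equations, start)

# Example: recover the shape parameters from simulated data
samples = np.random.default_rng(1).beta(2.0, 5.0, size=5000)
print(beta_mle(samples))    # roughly [2.0, 5.0]
</syntaxhighlight>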
As previously discussed in the section "Moments of logarithmically transformed random variables," the [[logit]] transformation <math>\ln\frac{X}{1-X}</math>, studied by Johnson,<ref name=JohnsonLogInv/> extends the finite support [0, 1] based on the original variable ''X'' to infinite support in both directions of the real line (−∞, +∞). If, for example, <math>\hat{\beta}</math> is known, the unknown parameter <math>\hat{\alpha}</math> can be obtained in terms of the inverse<ref name=invpsi.m>{{cite web|last=Fackler |first=Paul|title=Inverse Digamma Function (Matlab)|url=http://hips.seas.harvard.edu/content/inverse-digamma-function-matlab|publisher=Harvard University School of Engineering and Applied Sciences|access-date=2012-08-18}}</ref> digamma function of the right hand side of this equation: :<math>\psi(\hat{\alpha})=\frac{1}{N}\sum_{i=1}^N \ln\frac{X_i}{1-X_i} + \psi(\hat{\beta}) </math> :<math>\hat{\alpha}=\psi^{-1}(\ln \hat{G}_X - \ln \hat{G}_{(1-X)} + \psi(\hat{\beta})) </math> In particular, if one of the shape parameters has a value of unity, for example for <math>\hat{\beta} = 1</math> (the power function distribution with bounded support [0,1]), using the identity ψ(''x'' + 1) = ψ(''x'') + 1/''x'' in the equation <math>\psi(\hat{\alpha}) - \psi(\hat{\alpha} + \hat{\beta})= \ln \hat{G}_X</math>, the maximum likelihood estimator for the unknown parameter <math>\hat{\alpha}</math> is,<ref name=JKB /> exactly: :<math>\hat{\alpha}= - \frac{1}{\frac{1}{N}\sum_{i=1}^N \ln X_i}= - \frac{1}{ \ln \hat{G}_X} </math> The beta has support [0, 1], therefore <math>\hat{G}_X < 1</math>, and hence <math>(-\ln \hat{G}_X) >0</math>, and therefore <math>\hat{\alpha} >0.</math> In conclusion, the maximum likelihood estimates of the shape parameters of a beta distribution are (in general) a complicated function of the sample [[geometric mean]], and of the sample [[geometric mean]] based on ''(1−X)'', the mirror-image of ''X''. One may ask, if the variance (in addition to the mean) is necessary to estimate two shape parameters with the method of moments, why is the (logarithmic or geometric) variance not necessary to estimate two shape parameters with the maximum likelihood method, for which only the geometric means suffice? The answer is because the mean does not provide as much information as the geometric mean. For a beta distribution with equal shape parameters ''α'' = ''β'', the mean is exactly 1/2, regardless of the value of the shape parameters, and therefore regardless of the value of the statistical dispersion (the variance). On the other hand, the geometric mean of a beta distribution with equal shape parameters ''α'' = ''β'', depends on the value of the shape parameters, and therefore it contains more information. Also, the geometric mean of a beta distribution does not satisfy the symmetry conditions satisfied by the mean, therefore, by employing both the geometric mean based on ''X'' and geometric mean based on (1 − ''X''), the maximum likelihood method is able to provide best estimates for both parameters ''α'' = ''β'', without need of employing the variance. 
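One simple way to carry out the required inversion of the digamma function numerically is Newton's method, as in the sketch below (SciPy assumed; the piecewise starting value is one common approximation, and any reasonable positive starting point works):
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import digamma, polygamma

def inverse_digamma(y, iterations=10):
    """Solve psi(x) = y for x > 0 by Newton's method."""
    # Common piecewise starting approximation; 0.5772... is the Euler-Mascheroni constant.
    x = np.exp(y) + 0.5 if y >= -2.22 else -1.0 / (y + 0.5772156649015329)
    for _ in range(iterations):
        x -= (digamma(x) - y) / polygamma(1, x)   # polygamma(1, .) is the trigamma function
    return x

def alpha_given_beta(x, beta_known):
    """MLE of alpha when beta is known, from the logit-transform equation
    psi(alpha) = mean(ln(X/(1-X))) + psi(beta)."""
    x = np.asarray(x, dtype=float)
    return inverse_digamma(np.mean(np.log(x / (1.0 - x))) + digamma(beta_known))
</syntaxhighlight>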
One can express the joint log likelihood per ''N'' [[independent and identically distributed random variables|iid]] observations in terms of the ''[[sufficient statistic]]s'' (the sample geometric means) as follows: :<math>\frac{\ln \mathcal{L} (\alpha, \beta\mid X)}{N} = (\alpha - 1)\ln \hat{G}_X + (\beta- 1)\ln \hat{G}_{(1-X)}- \ln \Beta(\alpha,\beta).</math> We can plot the joint log likelihood per ''N'' observations for fixed values of the sample geometric means to see the behavior of the likelihood function as a function of the shape parameters α and β. In such a plot, the shape parameter estimators <math>\hat{\alpha},\hat{\beta}</math> correspond to the maxima of the likelihood function. See the accompanying graph that shows that all the likelihood functions intersect at α = β = 1, which corresponds to the values of the shape parameters that give the maximum entropy (the maximum entropy occurs for shape parameters equal to unity: the uniform distribution). It is evident from the plot that the likelihood function gives sharp peaks for values of the shape parameter estimators close to zero, but that for values of the shape parameters estimators greater than one, the likelihood function becomes quite flat, with less defined peaks. Obviously, the maximum likelihood parameter estimation method for the beta distribution becomes less acceptable for larger values of the shape parameter estimators, as the uncertainty in the peak definition increases with the value of the shape parameter estimators. One can arrive at the same conclusion by noticing that the expression for the curvature of the likelihood function is in terms of the geometric variances :<math>\frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{\partial \alpha^2}= -\operatorname{var}[\ln X]</math> :<math>\frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{\partial \beta^2} = -\operatorname{var}[\ln (1-X)]</math> These variances (and therefore the curvatures) are much larger for small values of the shape parameter α and β. However, for shape parameter values α, β > 1, the variances (and therefore the curvatures) flatten out. Equivalently, this result follows from the [[Cramér–Rao bound]], since the [[Fisher information]] matrix components for the beta distribution are these logarithmic variances. The [[Cramér–Rao bound]] states that the [[variance]] of any ''unbiased'' estimator <math>\hat{\alpha}</math> of α is bounded by the [[multiplicative inverse|reciprocal]] of the [[Fisher information]]: :<math>\mathrm{var}(\hat{\alpha})\geq\frac{1}{\operatorname{var}[\ln X]}\geq\frac{1}{\psi_1(\hat{\alpha}) - \psi_1(\hat{\alpha} + \hat{\beta})}</math> :<math>\mathrm{var}(\hat{\beta}) \geq\frac{1}{\operatorname{var}[\ln (1-X)]}\geq\frac{1}{\psi_1(\hat{\beta}) - \psi_1(\hat{\alpha} + \hat{\beta})}</math> so the variance of the estimators increases with increasing α and β, as the logarithmic variances decrease. Also one can express the joint log likelihood per ''N'' [[independent and identically distributed random variables|iid]] observations in terms of the [[digamma function]] expressions for the logarithms of the sample geometric means as follows: :<math>\frac{\ln\, \mathcal{L} (\alpha, \beta\mid X)}{N} = (\alpha - 1)(\psi(\hat{\alpha}) - \psi(\hat{\alpha} + \hat{\beta}))+(\beta- 1)(\psi(\hat{\beta}) - \psi(\hat{\alpha} + \hat{\beta}))- \ln \Beta(\alpha,\beta)</math> this expression is identical to the negative of the cross-entropy (see section on "Quantities of information (entropy)"). 
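The behavior described above can be reproduced numerically by evaluating the joint log likelihood per observation on a grid of shape parameters for fixed sample log geometric means (a sketch, assuming NumPy/SciPy; the data and grid are illustrative):
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import betaln

def loglik_per_obs(alpha, beta, log_gx, log_g1mx):
    """Joint log likelihood per observation as a function of the shape
    parameters, for fixed sample log geometric means."""
    return (alpha - 1.0) * log_gx + (beta - 1.0) * log_g1mx - betaln(alpha, beta)

x = np.random.default_rng(0).beta(2.0, 2.0, size=1000)     # illustrative data
log_gx, log_g1mx = np.mean(np.log(x)), np.mean(np.log1p(-x))
grid = np.linspace(0.1, 8.0, 400)
surface = loglik_per_obs(grid[:, None], grid[None, :], log_gx, log_g1mx)
i, j = np.unravel_index(np.argmax(surface), surface.shape)
print("grid maximum near alpha =", grid[i], ", beta =", grid[j])
</syntaxhighlight>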
Therefore, finding the maximum of the joint log likelihood of the shape parameters, per ''N'' [[independent and identically distributed random variables|iid]] observations, is identical to finding the minimum of the cross-entropy for the beta distribution, as a function of the shape parameters. :<math>\frac{\ln\, \mathcal{L} (\alpha, \beta\mid X)}{N} = - H = -h - D_{\mathrm{KL}} = -\ln\Beta(\alpha,\beta)+(\alpha-1)\psi(\hat{\alpha})+(\beta-1)\psi(\hat{\beta})-(\alpha+\beta-2)\psi(\hat{\alpha}+\hat{\beta})</math> with the cross-entropy defined as follows: :<math>H = \int_{0}^1 - f(X;\hat{\alpha},\hat{\beta}) \ln (f(X;\alpha,\beta)) \, {\rm d}X </math> =====Four unknown parameters===== The procedure is similar to the one followed in the two unknown parameter case. If ''Y''<sub>1</sub>, ..., ''Y<sub>N</sub>'' are independent random variables each having a beta distribution with four parameters, the joint log likelihood function for ''N'' [[independent and identically distributed random variables|iid]] observations is: :<math>\begin{align} \ln\, \mathcal{L} (\alpha, \beta, a, c\mid Y) &= \sum_{i=1}^N \ln\,\mathcal{L}_i (\alpha, \beta, a, c\mid Y_i)\\ &= \sum_{i=1}^N \ln\,f(Y_i; \alpha, \beta, a, c) \\ &= \sum_{i=1}^N \ln\,\frac{(Y_i-a)^{\alpha-1} (c-Y_i)^{\beta-1} }{(c-a)^{\alpha+\beta-1}\Beta(\alpha, \beta)}\\ &= (\alpha - 1)\sum_{i=1}^N \ln (Y_i - a) + (\beta- 1)\sum_{i=1}^N \ln (c - Y_i)- N \ln \Beta(\alpha,\beta) - N (\alpha+\beta - 1) \ln (c - a) \end{align}</math> Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero yielding the [[maximum likelihood]] estimator of the shape parameters: :<math>\frac{\partial \ln \mathcal{L} (\alpha, \beta, a, c\mid Y) }{\partial \alpha}= \sum_{i=1}^N \ln (Y_i - a) - N(-\psi(\alpha + \beta) + \psi(\alpha))- N \ln (c - a)= 0</math> :<math>\frac{\partial \ln \mathcal{L} (\alpha, \beta, a, c\mid Y) }{\partial \beta} = \sum_{i=1}^N \ln (c - Y_i) - N(-\psi(\alpha + \beta) + \psi(\beta))- N \ln (c - a)= 0</math> :<math>\frac{\partial \ln \mathcal{L} (\alpha, \beta, a, c\mid Y) }{\partial a} = -(\alpha - 1) \sum_{i=1}^N \frac{1}{Y_i - a} \,+ N (\alpha+\beta - 1)\frac{1}{c - a}= 0</math> :<math>\frac{\partial \ln \mathcal{L} (\alpha, \beta, a, c\mid Y) }{\partial c} = (\beta- 1) \sum_{i=1}^N \frac{1}{c - Y_i} \,- N (\alpha+\beta - 1) \frac{1}{c - a} = 0</math> these equations can be re-arranged as the following system of four coupled equations (the first two equations are geometric means and the second two equations are the harmonic means) in terms of the maximum likelihood estimates for the four parameters <math>\hat{\alpha}, \hat{\beta}, \hat{a}, \hat{c}</math>: :<math>\frac{1}{N}\sum_{i=1}^N \ln \frac{Y_i - \hat{a}}{\hat{c}-\hat{a}} = \psi(\hat{\alpha})-\psi(\hat{\alpha} +\hat{\beta} )= \ln \hat{G}_X</math> :<math>\frac{1}{N}\sum_{i=1}^N \ln \frac{\hat{c} - Y_i}{\hat{c}-\hat{a}} = \psi(\hat{\beta})-\psi(\hat{\alpha} + \hat{\beta})= \ln \hat{G}_{1-X}</math> :<math>\frac{1}{\frac{1}{N}\sum_{i=1}^N \frac{\hat{c} - \hat{a}}{Y_i - \hat{a}}} = \frac{\hat{\alpha} - 1}{\hat{\alpha}+\hat{\beta} - 1}= \hat{H}_X</math> :<math>\frac{1}{\frac{1}{N}\sum_{i=1}^N \frac{\hat{c} - \hat{a}}{\hat{c} - Y_i}} = \frac{\hat{\beta}- 1}{\hat{\alpha}+\hat{\beta} - 1} = \hat{H}_{1-X}</math> with sample geometric means: :<math>\hat{G}_X = \prod_{i=1}^{N} \left (\frac{Y_i - \hat{a}}{\hat{c}-\hat{a}} \right )^{\frac{1}{N}}</math> :<math>\hat{G}_{(1-X)} = \prod_{i=1}^{N} \left 
(\frac{\hat{c} - Y_i}{\hat{c}-\hat{a}} \right )^{\frac{1}{N}}</math> The parameters <math>\hat{a}, \hat{c}</math> are embedded inside the geometric mean expressions in a nonlinear way (to the power 1/''N''). This precludes, in general, a closed form solution, even for an initial value approximation for iteration purposes. One alternative is to use as initial values for iteration the values obtained from the method of moments solution for the four parameter case. Furthermore, the expressions for the harmonic means are well-defined only for <math>\hat{\alpha}, \hat{\beta} > 1</math>, which precludes a maximum likelihood solution for shape parameters less than unity in the four-parameter case. Fisher's information matrix for the four parameter case is [[Positive-definite matrix|positive-definite]] only for α, β > 2 (for further discussion, see section on Fisher information matrix, four parameter case), for bell-shaped (symmetric or unsymmetric) beta distributions, with inflection points located to either side of the mode. The following Fisher information components (that represent the expectations of the curvature of the log likelihood function) have [[mathematical singularity|singularities]] at the following values: :<math>\alpha = 2: \quad \operatorname{E} \left [- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial a^2} \right ]= {\mathcal{I}}_{a, a}</math> :<math>\beta = 2: \quad \operatorname{E}\left [- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial c^2} \right ] = {\mathcal{I}}_{c, c}</math> :<math>\alpha = 2: \quad \operatorname{E}\left [- \frac{1}{N}\frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \alpha \partial a}\right ] = {\mathcal{I}}_{\alpha, a} </math> :<math>\beta = 1: \quad \operatorname{E}\left [- \frac{1}{N}\frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \beta \partial c} \right ] = {\mathcal{I}}_{\beta, c} </math> (for further discussion see section on Fisher information matrix). Thus, it is not possible to strictly carry on the maximum likelihood estimation for some well known distributions belonging to the four-parameter beta distribution family, like the [[continuous uniform distribution|uniform distribution]] (Beta(1, 1, ''a'', ''c'')), and the [[arcsine distribution]] (Beta(1/2, 1/2, ''a'', ''c'')). [[Norman Lloyd Johnson|N.L.Johnson]] and [[Samuel Kotz|S.Kotz]]<ref name=JKB /> ignore the equations for the harmonic means and instead suggest "If a and c are unknown, and maximum likelihood estimators of ''a'', ''c'', α and β are required, the above procedure (for the two unknown parameter case, with ''X'' transformed as ''X'' = (''Y'' − ''a'')/(''c'' − ''a'')) can be repeated using a succession of trial values of ''a'' and ''c'', until the pair (''a'', ''c'') for which maximum likelihood (given ''a'' and ''c'') is as great as possible, is attained" (where, for the purpose of clarity, their notation for the parameters has been translated into the present notation). ====Fisher information matrix==== Let a random variable X have a probability density ''f''(''x'';''α''). The partial derivative with respect to the (unknown, and to be estimated) parameter α of the log [[likelihood function]] is called the [[score (statistics)|score]]. 
The second moment of the score is called the [[Fisher information]]:
:<math>\mathcal{I}(\alpha)=\operatorname{E} \left [\left (\frac{\partial}{\partial\alpha} \ln \mathcal{L}(\alpha\mid X) \right )^2 \right].</math>
The [[expected value|expectation]] of the [[score (statistics)|score]] is zero, therefore the Fisher information is also the second moment centered on the mean of the score: the [[variance]] of the score. If the log [[likelihood function]] is twice differentiable with respect to the parameter α, and under certain regularity conditions,<ref name=Silvey>{{cite book|last=Silvey|first=S.D.|title=Statistical Inference|year=1975|publisher=Chapman and Hall|page=40|isbn=978-0412138201}}</ref> then the Fisher information may also be written as follows (which is often a more convenient form for calculation purposes):
:<math>\mathcal{I}(\alpha) = - \operatorname{E} \left [\frac{\partial^2}{\partial\alpha^2} \ln (\mathcal{L}(\alpha\mid X)) \right].</math>
Thus, the Fisher information is the negative of the expectation of the second [[derivative]] with respect to the parameter α of the log [[likelihood function]]. Therefore, Fisher information is a measure of the [[curvature]] of the log likelihood function of α. A flat log likelihood curve, with low [[curvature]] (and therefore high [[Radius of curvature (mathematics)|radius of curvature]]), has low Fisher information, while a log likelihood curve with large [[curvature]] (and therefore low [[Radius of curvature (mathematics)|radius of curvature]]) has high Fisher information. When the Fisher information matrix is evaluated at the parameter estimates ("the observed Fisher information matrix"), it is equivalent to the replacement of the true log likelihood surface by a Taylor series approximation, taken as far as the quadratic terms.<ref name=EdwardsLikelihood>{{cite book|last=Edwards|first=A. W. F.|title=Likelihood|year=1992 |publisher=The Johns Hopkins University Press|isbn=978-0801844430}}</ref> The word information, in the context of Fisher information, refers to information about the parameters: their estimation, their sufficiency, and the properties of the variances of their estimators. The [[Cramér–Rao bound]] states that the inverse of the Fisher information is a lower bound on the variance of any [[estimator]] of a parameter α:
:<math>\operatorname{var}[\hat\alpha] \geq \frac{1}{\mathcal{I}(\alpha)}.</math>
The precision with which one can estimate a parameter α is thus limited by the Fisher information of the log likelihood function.
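These definitions can be checked by simulation for the beta distribution. In the sketch below (NumPy/SciPy assumed; the parameter values are illustrative), the score with respect to α for a single observation is ln ''X'' − (ψ(α) − ψ(α + β)); its sample mean is close to zero and its sample variance is close to the Fisher information component ψ<sub>1</sub>(α) − ψ<sub>1</sub>(α + β) derived in the following subsections.
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(0)
alpha, beta = 2.0, 3.0                       # illustrative parameter values
x = rng.beta(alpha, beta, size=200_000)
# Score with respect to alpha for one Beta(alpha, beta) observation
score = np.log(x) - (digamma(alpha) - digamma(alpha + beta))
print("mean of the score:    ", score.mean())     # close to 0
print("variance of the score:", score.var())      # close to the line below
print("psi1(a) - psi1(a+b):  ", polygamma(1, alpha) - polygamma(1, alpha + beta))
</syntaxhighlight>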
The Fisher information is a measure of the minimum error involved in estimating a parameter of a distribution and it can be viewed as a measure of the resolving power of an experiment needed to discriminate between two alternative hypothesis of a parameter.<ref name=Jaynes>{{cite book|last=Jaynes|first=E.T.|title=Probability theory, the logic of science|year=2003|publisher=Cambridge University Press|isbn=978-0521592710}}</ref> When there are ''N'' parameters :<math> \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_N \end{bmatrix},</math> then the Fisher information takes the form of an ''N''×''N'' [[positive semidefinite matrix|positive semidefinite]] [[symmetric matrix]], the Fisher information matrix, with typical element: :<math> (\mathcal{I}(\theta))_{i, j} = \operatorname{E} \left [\left (\frac{\partial}{\partial\theta_i} \ln \mathcal{L} \right) \left(\frac \partial {\partial\theta_j} \ln \mathcal{L} \right) \right ].</math> Under certain regularity conditions,<ref name=Silvey/> the Fisher Information Matrix may also be written in the following form, which is often more convenient for computation: :<math> (\mathcal{I}(\theta))_{i, j} = - \operatorname{E} \left [\frac{\partial^2}{\partial\theta_i \, \partial\theta_j} \ln (\mathcal{L}) \right ]\,.</math> With ''X''<sub>1</sub>, ..., ''X<sub>N</sub>'' [[iid]] random variables, an ''N''-dimensional "box" can be constructed with sides ''X''<sub>1</sub>, ..., ''X<sub>N</sub>''. Costa and Cover<ref name=CostaCover>{{cite book|last=Costa|first=Max, and Cover, Thomas|title=On the similarity of the entropy power inequality and the Brunn Minkowski inequality|date=September 1983|publisher=Tech.Report 48, Dept. Statistics, Stanford University|url=https://isl.stanford.edu/people/cover/papers/transIT/0837cost.pdf}}</ref> show that the (Shannon) differential entropy ''h''(''X'') is related to the volume of the typical set (having the sample entropy close to the true entropy), while the Fisher information is related to the surface of this typical set. =====Two parameters===== For ''X''<sub>1</sub>, ..., ''X''<sub>''N''</sub> independent random variables each having a beta distribution parametrized with shape parameters ''α'' and ''β'', the joint log likelihood function for ''N'' [[independent and identically distributed random variables|iid]] observations is: : <math>\ln (\mathcal{L} (\alpha, \beta\mid X) )= (\alpha - 1)\sum_{i=1}^N \ln X_i + (\beta- 1)\sum_{i=1}^N \ln (1-X_i)- N \ln \Beta(\alpha,\beta) </math> therefore the joint log likelihood function per ''N'' [[independent and identically distributed random variables|iid]] observations is :<math>\frac{1}{N} \ln(\mathcal{L} (\alpha, \beta\mid X)) = (\alpha - 1)\frac{1}{N}\sum_{i=1}^N \ln X_i + (\beta- 1) \frac{1}{N}\sum_{i=1}^N \ln (1-X_i)-\, \ln \Beta(\alpha,\beta). </math> For the two parameter case, the Fisher information has 4 components: 2 diagonal and 2 off-diagonal. Since the Fisher information matrix is symmetric, one of these off diagonal components is independent. Therefore, the Fisher information matrix has 3 independent components (2 diagonal and 1 off diagonal). 
Aryal and Nadarajah<ref name=Aryal>{{cite journal|last=Aryal|first=Gokarna|author2=Saralees Nadarajah|title=Information matrix for beta distributions|journal=Serdica Mathematical Journal (Bulgarian Academy of Science)| year=2004| volume=30|pages=513–526|url=http://www.math.bas.bg/serdica/2004/2004-513-526.pdf}}</ref> calculated Fisher's information matrix for the four-parameter case, from which the two parameter case can be obtained as follows: :<math>- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\partial \alpha^2}= \operatorname{var}[\ln (X)]= \psi_1(\alpha) - \psi_1(\alpha + \beta) ={\mathcal{I}}_{\alpha, \alpha}= \operatorname{E}\left [- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\partial \alpha^2} \right ] = \ln \operatorname{var}_{GX} </math> :<math>- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\,\partial \beta^2} = \operatorname{var}[\ln (1-X)] = \psi_1(\beta) - \psi_1(\alpha + \beta) ={\mathcal{I}}_{\beta, \beta}= \operatorname{E}\left [- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\partial \beta^2} \right]= \ln \operatorname{var}_{G(1-X)} </math> :<math>- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N \, \partial \alpha \, \partial \beta} = \operatorname{cov}[\ln X,\ln(1-X)] = -\psi_1(\alpha+\beta) ={\mathcal{I}}_{\alpha, \beta}= \operatorname{E}\left [- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\,\partial \alpha\,\partial \beta} \right] = \ln \operatorname{cov}_{G{X,(1-X)}}</math> Since the Fisher information matrix is symmetric :<math> \mathcal{I}_{\alpha, \beta}= \mathcal{I}_{\beta, \alpha}= \ln \operatorname{cov}_{G{X,(1-X)}}</math> The Fisher information components are equal to the log geometric variances and log geometric covariance. Therefore, they can be expressed as '''[[trigamma function]]s''', denoted ψ<sub>1</sub>(α), the second of the [[polygamma function]]s, defined as the derivative of the [[digamma]] function: :<math>\psi_1(\alpha) = \frac{d^2\ln\Gamma(\alpha)}{\partial\alpha^2}=\, \frac{\partial \psi(\alpha)}{\partial\alpha}. </math> These derivatives are also derived in the {{section link||Two unknown parameters}} and plots of the log likelihood function are also shown in that section. {{section link||Geometric variance and covariance}} contains plots and further discussion of the Fisher information matrix components: the log geometric variances and log geometric covariance as a function of the shape parameters α and β. {{section link||Moments of logarithmically transformed random variables}} contains formulas for moments of logarithmically transformed random variables. Images for the Fisher information components <math>\mathcal{I}_{\alpha, \alpha}, \mathcal{I}_{\beta, \beta}</math> and <math>\mathcal{I}_{\alpha, \beta}</math> are shown in {{section link||Geometric variance}}. The determinant of Fisher's information matrix is of interest (for example for the calculation of [[Jeffreys prior]] probability). 
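Numerically, these components can be assembled into the 2×2 matrix with the trigamma function, and its determinant can be checked against the closed form given in the next paragraph (a sketch, assuming NumPy/SciPy; the parameter values are illustrative):
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import polygamma

def beta_fisher_info(alpha, beta):
    """Per-observation 2x2 Fisher information matrix of Beta(alpha, beta)."""
    t_a, t_b, t_ab = polygamma(1, [alpha, beta, alpha + beta])
    return np.array([[t_a - t_ab, -t_ab],
                     [-t_ab,      t_b - t_ab]])

alpha, beta = 2.0, 3.0                                  # illustrative values
info = beta_fisher_info(alpha, beta)
print("determinant:", np.linalg.det(info))
print("closed form:", polygamma(1, alpha) * polygamma(1, beta)
      - (polygamma(1, alpha) + polygamma(1, beta)) * polygamma(1, alpha + beta))
# Multiparameter Cramer-Rao lower bounds for unbiased estimators from N observations
N = 100
print("lower bounds on var(alpha_hat), var(beta_hat):", np.diag(np.linalg.inv(N * info)))
</syntaxhighlight>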
From the expressions for the individual components of the Fisher information matrix, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution is: :<math>\begin{align} \det(\mathcal{I}(\alpha, \beta))&= \mathcal{I}_{\alpha, \alpha} \mathcal{I}_{\beta, \beta}-\mathcal{I}_{\alpha, \beta} \mathcal{I}_{\alpha, \beta} \\[4pt] &=(\psi_1(\alpha) - \psi_1(\alpha + \beta))(\psi_1(\beta) - \psi_1(\alpha + \beta))-( -\psi_1(\alpha+\beta))( -\psi_1(\alpha+\beta))\\[4pt] &= \psi_1(\alpha)\psi_1(\beta)-( \psi_1(\alpha)+\psi_1(\beta))\psi_1(\alpha + \beta)\\[4pt] \lim_{\alpha\to 0} \det(\mathcal{I}(\alpha, \beta)) &=\lim_{\beta \to 0} \det(\mathcal{I}(\alpha, \beta)) = \infty\\[4pt] \lim_{\alpha\to \infty} \det(\mathcal{I}(\alpha, \beta)) &=\lim_{\beta \to \infty} \det(\mathcal{I}(\alpha, \beta)) = 0 \end{align}</math> From [[Sylvester's criterion]] (checking whether the diagonal elements are all positive), it follows that the Fisher information matrix for the two parameter case is [[Positive-definite matrix|positive-definite]] (under the standard condition that the shape parameters are positive ''α'' > 0 and ''β'' > 0). =====Four parameters===== [[File:Fisher Information I(a,a) for alpha=beta vs range (c-a) and exponent alpha=beta - J. Rodal.png|thumb|Fisher Information ''I''(''a'',''a'') for ''α'' = ''β'' vs range (''c'' − ''a'') and exponent ''α'' = ''β'']] [[File:Fisher Information I(alpha,a) for alpha=beta, vs. range (c - a) and exponent alpha=beta - J. Rodal.png|thumb|Fisher Information ''I''(''α'',''a'') for ''α'' = ''β'', vs. range (''c'' − ''a'') and exponent ''α'' = ''β'']] If ''Y''<sub>1</sub>, ..., ''Y<sub>N</sub>'' are independent random variables each having a beta distribution with four parameters: the exponents ''α'' and ''β'', and also ''a'' (the minimum of the distribution range), and ''c'' (the maximum of the distribution range) (section titled "Alternative parametrizations", "Four parameters"), with [[probability density function]]: :<math>f(y; \alpha, \beta, a, c) = \frac{f(x;\alpha,\beta)}{c-a} =\frac{ \left (\frac{y-a}{c-a} \right )^{\alpha-1} \left (\frac{c-y}{c-a} \right)^{\beta-1} }{(c-a)B(\alpha, \beta)}=\frac{ (y-a)^{\alpha-1} (c-y)^{\beta-1} }{(c-a)^{\alpha+\beta-1}B(\alpha, \beta)}.</math> the joint log likelihood function per ''N'' [[independent and identically distributed random variables|iid]] observations is: :<math>\frac{1}{N} \ln(\mathcal{L} (\alpha, \beta, a, c\mid Y))= \frac{\alpha -1}{N}\sum_{i=1}^N \ln (Y_i - a) + \frac{\beta -1}{N}\sum_{i=1}^N \ln (c - Y_i)- \ln \Beta(\alpha,\beta) - (\alpha+\beta -1) \ln (c-a) </math> For the four parameter case, the Fisher information has 4*4=16 components. It has 12 off-diagonal components = (4×4 total − 4 diagonal). Since the Fisher information matrix is symmetric, half of these components (12/2=6) are independent. Therefore, the Fisher information matrix has 6 independent off-diagonal + 4 diagonal = 10 independent components. 
Aryal and Nadarajah<ref name=Aryal/> calculated Fisher's information matrix for the four parameter case as follows:
:<math>- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \alpha^2}= \operatorname{var}[\ln (X)]= \psi_1(\alpha) - \psi_1(\alpha + \beta) = \mathcal{I}_{\alpha, \alpha}= \operatorname{E}\left [- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \alpha^2} \right ] = \ln (\operatorname{var_{GX}}) </math>
:<math>-\frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \beta^2} = \operatorname{var}[\ln (1-X)] = \psi_1(\beta) - \psi_1(\alpha + \beta) ={\mathcal{I}}_{\beta, \beta}= \operatorname{E} \left [- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \beta^2} \right ] = \ln(\operatorname{var_{G(1-X)}}) </math>
:<math>-\frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \alpha\,\partial \beta} = \operatorname{cov}[\ln X,\ln (1-X)] = -\psi_1(\alpha+\beta) =\mathcal{I}_{\alpha, \beta}= \operatorname{E} \left [- \frac{1}{N}\frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \alpha \, \partial \beta} \right ] = \ln(\operatorname{cov}_{G{X,(1-X)}})</math>
In the above expressions, the use of ''X'' instead of ''Y'' in the expressions var[ln(''X'')] = ln(var<sub>''GX''</sub>) is ''not an error''. The expressions in terms of the log geometric variances and log geometric covariance occur as functions of the two-parameter ''X'' ~ Beta(''α'', ''β'') parametrization because, when taking the partial derivatives with respect to the exponents (''α'', ''β'') in the four-parameter case, one obtains expressions identical to those for the two-parameter case: these terms of the four-parameter Fisher information matrix are independent of the minimum ''a'' and maximum ''c'' of the distribution's range. The only non-zero term upon double differentiation of the log likelihood function with respect to the exponents ''α'' and ''β'' is the second derivative of the log of the beta function: ln(B(''α'', ''β'')). This term is independent of the minimum ''a'' and maximum ''c'' of the distribution's range. Double differentiation of this term results in trigamma functions. The sections titled "Maximum likelihood", "Two unknown parameters" and "Four unknown parameters" also show this fact. The Fisher information for ''N'' [[i.i.d.]] samples is ''N'' times the individual Fisher information (eq. 11.279, page 394 of Cover and Thomas<ref name="Cover and Thomas"/>). (Aryal and Nadarajah<ref name=Aryal/> take a single observation, ''N'' = 1, to calculate the following components of the Fisher information, which leads to the same result as considering the derivatives of the log likelihood per ''N'' observations. Moreover, below the erroneous expression for <math>{\mathcal{I}}_{a, a}</math> in Aryal and Nadarajah has been corrected.)
:<math>\begin{align} \alpha > 2: \quad \operatorname{E}\left [- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial a^2} \right ] &= {\mathcal{I}}_{a, a}=\frac{\beta(\alpha+\beta-1)}{(\alpha-2)(c-a)^2} \\ \beta > 2: \quad \operatorname{E}\left[-\frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial c^2} \right ] &= \mathcal{I}_{c, c} = \frac{\alpha(\alpha+\beta-1)}{(\beta-2)(c-a)^2} \\ \operatorname{E}\left[- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial a \, \partial c} \right ] &= {\mathcal{I}}_{a, c} = \frac{(\alpha+\beta-1)}{(c-a)^2} \\ \alpha > 1: \quad \operatorname{E}\left[- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \alpha \, \partial a} \right ] &=\mathcal{I}_{\alpha, a} = \frac{\beta}{(\alpha-1)(c-a)} \\ \operatorname{E}\left[- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \alpha \, \partial c} \right ] &= {\mathcal{I}}_{\alpha, c} = \frac{1}{(c-a)} \\ \operatorname{E}\left[- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \beta \,\partial a} \right ] &= {\mathcal{I}}_{\beta, a} = -\frac{1}{(c-a)} \\ \beta > 1: \quad \operatorname{E}\left[- \frac{1}{N} \frac{\partial^2\ln \mathcal{L} (\alpha, \beta, a, c\mid Y)}{\partial \beta \, \partial c} \right ] &= \mathcal{I}_{\beta, c} = -\frac{\alpha}{(\beta-1)(c-a)} \end{align}</math> The lower two diagonal entries of the Fisher information matrix, with respect to the parameter ''a'' (the minimum of the distribution's range): <math>\mathcal{I}_{a, a}</math>, and with respect to the parameter ''c'' (the maximum of the distribution's range): <math>\mathcal{I}_{c, c}</math> are only defined for exponents ''α'' > 2 and ''β'' > 2 respectively. The Fisher information matrix component <math>\mathcal{I}_{a, a}</math> for the minimum ''a'' approaches infinity for exponent α approaching 2 from above, and the Fisher information matrix component <math>\mathcal{I}_{c, c}</math> for the maximum ''c'' approaches infinity for exponent ''β'' approaching 2 from above. The Fisher information matrix for the four parameter case does not depend on the individual values of the minimum ''a'' and the maximum ''c'', but only on the total range (''c'' − ''a''). Moreover, the components of the Fisher information matrix that depend on the range (''c'' − ''a''), depend only through its inverse (or the square of the inverse), such that the Fisher information decreases for increasing range (''c'' − ''a''). The accompanying images show the Fisher information components <math>\mathcal{I}_{a, a}</math> and <math>\mathcal{I}_{\alpha, a}</math>. Images for the Fisher information components <math>\mathcal{I}_{\alpha, \alpha}</math> and <math>\mathcal{I}_{\beta, \beta}</math> are shown in {{section link||Geometric variance}}. All these Fisher information components look like a basin, with the "walls" of the basin being located at low values of the parameters. 
The following four-parameter-beta-distribution Fisher information components can be expressed in terms of the two-parameter: ''X'' ~ Beta(α, β) expectations of the transformed ratio ((1 − ''X'')/''X'') and of its mirror image (''X''/(1 − ''X'')), scaled by the range (''c'' − ''a''), which may be helpful for interpretation: :<math>\mathcal{I}_{\alpha, a} =\frac{\operatorname{E} \left[\frac{1-X}{X} \right ]}{c-a}= \frac{\beta}{(\alpha-1)(c-a)} \text{ if }\alpha > 1</math> :<math>\mathcal{I}_{\beta, c} = -\frac{\operatorname{E} \left [\frac{X}{1-X} \right ]}{c-a}=- \frac{\alpha}{(\beta-1)(c-a)}\text{ if }\beta> 1</math> These are also the expected values of the "inverted beta distribution" or [[beta prime distribution]] (also known as beta distribution of the second kind or [[Pearson distribution|Pearson's Type VI]]) <ref name=JKB/> and its mirror image, scaled by the range (''c'' − ''a''). Also, the following Fisher information components can be expressed in terms of the harmonic (1/X) variances or of variances based on the ratio transformed variables ((1-X)/X) as follows: :<math>\begin{align} \alpha > 2: \quad \mathcal{I}_{a,a} &=\operatorname{var} \left [\frac{1}{X} \right] \left (\frac{\alpha-1}{c-a} \right )^2 =\operatorname{var} \left [\frac{1-X}{X} \right ] \left (\frac{\alpha-1}{c-a} \right)^2 = \frac{\beta(\alpha+\beta-1)}{(\alpha-2)(c-a)^2} \\ \beta > 2: \quad \mathcal{I}_{c, c} &= \operatorname{var} \left [\frac{1}{1-X} \right ] \left (\frac{\beta-1}{c-a} \right )^2 = \operatorname{var} \left [\frac{X}{1-X} \right ] \left (\frac{\beta-1}{c-a} \right )^2 =\frac{\alpha(\alpha+\beta-1)}{(\beta-2)(c-a)^2} \\ \mathcal{I}_{a, c} &=\operatorname{cov} \left [\frac{1}{X},\frac{1}{1-X} \right ]\frac{(\alpha-1)(\beta-1)}{(c-a)^2} = \operatorname{cov} \left [\frac{1-X}{X},\frac{X}{1-X} \right ] \frac{(\alpha-1)(\beta-1)}{(c-a)^2} =\frac{(\alpha+\beta-1)}{(c-a)^2} \end{align}</math> See section "Moments of linearly transformed, product and inverted random variables" for these expectations. The determinant of Fisher's information matrix is of interest (for example for the calculation of [[Jeffreys prior]] probability). 
From the expressions for the individual components, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution with four parameters is: :<math>\begin{align} \det(\mathcal{I}(\alpha,\beta,a,c)) = {} & -\mathcal{I}_{a,c}^2 \mathcal{I}_{\alpha,a} \mathcal{I}_{\alpha,\beta }+\mathcal{I}_{a,a} \mathcal{I}_{a,c} \mathcal{I}_{\alpha,c} \mathcal{I}_{\alpha ,\beta}+\mathcal{I}_{a,c}^2 \mathcal{I}_{\alpha ,\beta}^2 -\mathcal{I}_{a,a} \mathcal{I}_{c,c} \mathcal{I}_{\alpha,\beta}^2\\ & {} -\mathcal{I}_{a,c} \mathcal{I}_{\alpha,a} \mathcal{I}_{\alpha ,c} \mathcal{I}_{\beta,a}+\mathcal{I}_{a,c}^2 \mathcal{I}_{\alpha ,\alpha} \mathcal{I}_{\beta,a}+2 \mathcal{I}_{c,c} \mathcal{I}_{\alpha,a} \mathcal{I}_{\alpha,\beta} \mathcal{I}_{\beta,a}\\ & {}-2\mathcal{I}_{a,c} \mathcal{I}_{\alpha ,c} \mathcal{I}_{\alpha,\beta} \mathcal{I}_{\beta ,a}+\mathcal{I}_{\alpha ,c}^2 \mathcal{I}_{\beta ,a}^2-\mathcal{I}_{c,c} \mathcal{I}_{\alpha,\alpha} \mathcal{I}_{\beta ,a}^2+\mathcal{I}_{a,c} \mathcal{I}_{\alpha ,a}^2 \mathcal{I}_{\beta ,c}\\ & {}-\mathcal{I}_{a,a} \mathcal{I}_{a,c} \mathcal{I}_{\alpha ,\alpha } \mathcal{I}_{\beta ,c}-\mathcal{I}_{a,c} \mathcal{I}_{\alpha ,a} \mathcal{I}_{\alpha ,\beta } \mathcal{I}_{\beta ,c}+\mathcal{I}_{a,a} \mathcal{I}_{\alpha ,c} \mathcal{I}_{\alpha ,\beta } \mathcal{I}_{\beta ,c}\\ & {}-\mathcal{I}_{\alpha ,a} \mathcal{I}_{\alpha ,c} \mathcal{I}_{\beta ,a} \mathcal{I}_{\beta ,c}+\mathcal{I}_{a,c} \mathcal{I}_{\alpha ,\alpha } \mathcal{I}_{\beta ,a} \mathcal{I}_{\beta ,c}-\mathcal{I}_{c,c} \mathcal{I}_{\alpha ,a}^2 \mathcal{I}_{\beta ,\beta }\\ & {}+2 \mathcal{I}_{a,c} \mathcal{I}_{\alpha ,a} \mathcal{I}_{\alpha, c} \mathcal{I}_{\beta ,\beta }-\mathcal{I}_{a,a} \mathcal{I}_{\alpha ,c}^2 \mathcal{I}_{\beta ,\beta }-\mathcal{I}_{a,c}^2 \mathcal{I}_{\alpha ,\alpha } \mathcal{I}_{\beta ,\beta }+\mathcal{I}_{a,a} \mathcal{I}_{c,c} \mathcal{I}_{\alpha ,\alpha } \mathcal{I}_{\beta ,\beta }\text{ if }\alpha, \beta> 2 \end{align}</math> Using [[Sylvester's criterion]] (checking whether the diagonal elements are all positive), and since diagonal components <math>{\mathcal{I}}_{a, a}</math> and <math>{\mathcal{I}}_{c, c}</math> have [[Mathematical singularity|singularities]] at α=2 and β=2 it follows that the Fisher information matrix for the four parameter case is [[Positive-definite matrix|positive-definite]] for α>2 and β>2. Since for α > 2 and β > 2 the beta distribution is (symmetric or unsymmetric) bell shaped, it follows that the Fisher information matrix is positive-definite only for bell-shaped (symmetric or unsymmetric) beta distributions, with inflection points located to either side of the mode. Thus, important well known distributions belonging to the four-parameter beta distribution family, like the parabolic distribution (Beta(2,2,a,c)) and the [[continuous uniform distribution|uniform distribution]] (Beta(1,1,a,c)) have Fisher information components (<math>\mathcal{I}_{a, a},\mathcal{I}_{c, c},\mathcal{I}_{\alpha, a},\mathcal{I}_{\beta, c}</math>) that blow up (approach infinity) in the four-parameter case (although their Fisher information components are all defined for the two parameter case). The four-parameter [[Wigner semicircle distribution]] (Beta(3/2,3/2,''a'',''c'')) and [[arcsine distribution]] (Beta(1/2,1/2,''a'',''c'')) have negative Fisher information determinants for the four-parameter case.
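For completeness, the full 4×4 matrix can be assembled from the component formulas above and its definiteness checked numerically for a bell-shaped case (a sketch, assuming NumPy/SciPy; the parameter order and values are illustrative):
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import polygamma

def beta4_fisher_info(alpha, beta, a, c):
    """Per-observation Fisher information matrix of the four-parameter beta
    distribution, parameter order (alpha, beta, a, c), assembled from the
    component formulas above (the entries I_aa and I_cc require alpha, beta > 2)."""
    s, r = alpha + beta, c - a
    t_a, t_b, t_s = polygamma(1, [alpha, beta, s])
    I_aa = beta * (s - 1) / ((alpha - 2) * r**2)
    I_cc = alpha * (s - 1) / ((beta - 2) * r**2)
    I_ac = (s - 1) / r**2
    I_Aa = beta / ((alpha - 1) * r)      # mixed (alpha, a) component
    I_Ac = 1.0 / r
    I_Ba = -1.0 / r
    I_Bc = -alpha / ((beta - 1) * r)     # mixed (beta, c) component
    return np.array([[t_a - t_s, -t_s,      I_Aa, I_Ac],
                     [-t_s,      t_b - t_s, I_Ba, I_Bc],
                     [I_Aa,      I_Ba,      I_aa, I_ac],
                     [I_Ac,      I_Bc,      I_ac, I_cc]])

# Bell-shaped example alpha = beta = 3 on [0, 1]: all eigenvalues are positive,
# consistent with positive-definiteness for alpha, beta > 2.
M = beta4_fisher_info(3.0, 3.0, 0.0, 1.0)
print(np.linalg.eigvalsh(M))
print("determinant:", np.linalg.det(M))
</syntaxhighlight>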