=====Two parameters=====
For ''X''<sub>1</sub>, ..., ''X''<sub>''N''</sub> independent random variables each having a beta distribution parametrized with shape parameters ''α'' and ''β'', the joint log likelihood function for ''N'' [[independent and identically distributed random variables|iid]] observations is:

:<math>\ln (\mathcal{L} (\alpha, \beta\mid X) )= (\alpha - 1)\sum_{i=1}^N \ln X_i + (\beta- 1)\sum_{i=1}^N \ln (1-X_i)- N \ln \Beta(\alpha,\beta) </math>

therefore the joint log likelihood function per ''N'' iid observations is

:<math>\frac{1}{N} \ln(\mathcal{L} (\alpha, \beta\mid X)) = (\alpha - 1)\frac{1}{N}\sum_{i=1}^N \ln X_i + (\beta- 1) \frac{1}{N}\sum_{i=1}^N \ln (1-X_i)- \ln \Beta(\alpha,\beta). </math>

For the two-parameter case, the Fisher information matrix has 4 components: 2 diagonal and 2 off-diagonal. Since the matrix is symmetric, the two off-diagonal components are equal, leaving 3 independent components (2 diagonal and 1 off-diagonal).

Aryal and Nadarajah<ref name=Aryal>{{cite journal|last=Aryal|first=Gokarna|author2=Saralees Nadarajah|title=Information matrix for beta distributions|journal=Serdica Mathematical Journal (Bulgarian Academy of Science)|year=2004|volume=30|pages=513–526|url=http://www.math.bas.bg/serdica/2004/2004-513-526.pdf}}</ref> calculated Fisher's information matrix for the four-parameter case, from which the two-parameter case can be obtained as follows:

:<math>- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\,\partial \alpha^2}= \operatorname{var}[\ln X]= \psi_1(\alpha) - \psi_1(\alpha + \beta) ={\mathcal{I}}_{\alpha, \alpha}= \operatorname{E}\left[- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\,\partial \alpha^2} \right] = \ln \operatorname{var}_{GX} </math>

:<math>- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\,\partial \beta^2} = \operatorname{var}[\ln (1-X)] = \psi_1(\beta) - \psi_1(\alpha + \beta) ={\mathcal{I}}_{\beta, \beta}= \operatorname{E}\left[- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\,\partial \beta^2} \right]= \ln \operatorname{var}_{G(1-X)} </math>

:<math>- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N \, \partial \alpha \, \partial \beta} = \operatorname{cov}[\ln X,\ln(1-X)] = -\psi_1(\alpha+\beta) ={\mathcal{I}}_{\alpha, \beta}= \operatorname{E}\left[- \frac{\partial^2\ln \mathcal{L}(\alpha,\beta\mid X)}{N\,\partial \alpha\,\partial \beta} \right] = \ln \operatorname{cov}_{G{X,(1-X)}}</math>

Since the Fisher information matrix is symmetric,

:<math> \mathcal{I}_{\alpha, \beta}= \mathcal{I}_{\beta, \alpha}= \ln \operatorname{cov}_{G{X,(1-X)}}</math>

The Fisher information components are equal to the log geometric variances and log geometric covariance. Therefore, they can be expressed as '''[[trigamma function]]s''', denoted ψ<sub>1</sub>(α), the second of the [[polygamma function]]s, defined as the derivative of the [[digamma]] function:

:<math>\psi_1(\alpha) = \frac{d^2\ln\Gamma(\alpha)}{d\alpha^2}= \frac{d\,\psi(\alpha)}{d\alpha}. </math>

These derivatives are also derived in the {{section link||Two unknown parameters}} section, where plots of the log likelihood function are shown. {{section link||Geometric variance and covariance}} contains plots and further discussion of the Fisher information matrix components: the log geometric variances and log geometric covariance as a function of the shape parameters α and β.
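The trigamma-function expressions above are straightforward to evaluate numerically. The following is a minimal sketch (not part of the article) using SciPy's <code>polygamma</code>; the helper name <code>fisher_information</code> is hypothetical, and the last lines check the identity <math>\mathcal{I}_{\alpha, \alpha} = \operatorname{var}[\ln X]</math> by Monte Carlo.

<syntaxhighlight lang="python">
# Sketch: Fisher information components of Beta(alpha, beta) as trigamma functions,
# with a Monte Carlo check that I_{alpha,alpha} equals var[ln X].
import numpy as np
from scipy.special import polygamma
from scipy.stats import beta as beta_dist

def fisher_information(a, b):
    """2x2 Fisher information matrix per observation for Beta(a, b)."""
    trigamma = lambda x: polygamma(1, x)      # psi_1, the trigamma function
    i_aa = trigamma(a) - trigamma(a + b)      # var[ln X]
    i_bb = trigamma(b) - trigamma(a + b)      # var[ln(1 - X)]
    i_ab = -trigamma(a + b)                   # cov[ln X, ln(1 - X)]
    return np.array([[i_aa, i_ab], [i_ab, i_bb]])

a, b = 2.0, 3.0
I = fisher_information(a, b)

# Monte Carlo check of the (alpha, alpha) component.
x = beta_dist.rvs(a, b, size=1_000_000, random_state=0)
print(I[0, 0], np.var(np.log(x)))   # the two values should be close
</syntaxhighlight>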
{{section link||Moments of logarithmically transformed random variables}} contains formulas for moments of logarithmically transformed random variables. Images for the Fisher information components <math>\mathcal{I}_{\alpha, \alpha}, \mathcal{I}_{\beta, \beta}</math> and <math>\mathcal{I}_{\alpha, \beta}</math> are shown in {{section link||Geometric variance}}.

The determinant of Fisher's information matrix is of interest (for example for the calculation of the [[Jeffreys prior]] probability). From the expressions for the individual components of the Fisher information matrix, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution is:

:<math>\begin{align}
\det(\mathcal{I}(\alpha, \beta)) &= \mathcal{I}_{\alpha, \alpha} \mathcal{I}_{\beta, \beta}-\mathcal{I}_{\alpha, \beta} \mathcal{I}_{\alpha, \beta} \\[4pt]
&=(\psi_1(\alpha) - \psi_1(\alpha + \beta))(\psi_1(\beta) - \psi_1(\alpha + \beta))-( -\psi_1(\alpha+\beta))( -\psi_1(\alpha+\beta))\\[4pt]
&= \psi_1(\alpha)\psi_1(\beta)-( \psi_1(\alpha)+\psi_1(\beta))\psi_1(\alpha + \beta)\\[4pt]
\lim_{\alpha\to 0} \det(\mathcal{I}(\alpha, \beta)) &=\lim_{\beta \to 0} \det(\mathcal{I}(\alpha, \beta)) = \infty\\[4pt]
\lim_{\alpha\to \infty} \det(\mathcal{I}(\alpha, \beta)) &=\lim_{\beta \to \infty} \det(\mathcal{I}(\alpha, \beta)) = 0
\end{align}</math>

From [[Sylvester's criterion]] (checking whether the leading principal minors are all positive), it follows that the Fisher information matrix for the two-parameter case is [[Positive-definite matrix|positive-definite]] (under the standard condition that the shape parameters are positive: ''α'' > 0 and ''β'' > 0).
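As an illustrative sketch (not from the article), the determinant above and the associated Jeffreys prior can be evaluated with the same trigamma functions; the helper names <code>fisher_determinant</code> and <code>jeffreys_prior</code> are hypothetical, and the final lines verify positive-definiteness through the leading principal minors, as Sylvester's criterion requires.

<syntaxhighlight lang="python">
# Sketch: determinant of the Fisher information matrix of Beta(a, b),
# the (unnormalized) Jeffreys prior, and a Sylvester's-criterion check.
import numpy as np
from scipy.special import polygamma

def fisher_determinant(a, b):
    """det I(a, b) = psi1(a)*psi1(b) - (psi1(a) + psi1(b))*psi1(a + b)."""
    p = lambda x: polygamma(1, x)
    return p(a) * p(b) - (p(a) + p(b)) * p(a + b)

def jeffreys_prior(a, b):
    """Unnormalized Jeffreys prior: square root of det of Fisher information."""
    return np.sqrt(fisher_determinant(a, b))

a, b = 2.0, 3.0
p = lambda x: polygamma(1, x)
minor1 = p(a) - p(a + b)            # first leading principal minor: I_{alpha,alpha}
minor2 = fisher_determinant(a, b)   # second leading principal minor: det I
print(minor1 > 0 and minor2 > 0)    # True: I(a, b) is positive-definite
print(jeffreys_prior(a, b))
</syntaxhighlight>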