====Poisson–gamma model====
In the example above, let the likelihood be a [[Poisson distribution]], and let the prior now be specified by the [[conjugate prior]], which is a [[gamma distribution]] (<math>G(\alpha,\beta)</math>) (where <math>\eta = (\alpha,\beta)</math>):

:<math> \rho(\theta\mid\alpha,\beta) \, d\theta = \frac{(\theta/\beta)^{\alpha-1} \, e^{-\theta / \beta} }{\Gamma(\alpha)} \, (d\theta/\beta) \text{ for } \theta > 0, \alpha > 0, \beta > 0 \,\! .</math>

It is straightforward to show that the [[Posterior probability|posterior]] is also a gamma distribution. Write

:<math> \rho(\theta\mid y) \propto \rho(y\mid \theta) \rho(\theta\mid\alpha, \beta) ,</math>

where the marginal distribution has been omitted since it does not depend explicitly on <math>\theta</math>. Expanding the terms which do depend on <math>\theta</math> gives the posterior as:

:<math> \rho(\theta\mid y) \propto (\theta^y\, e^{-\theta}) (\theta^{\alpha-1}\, e^{-\theta / \beta}) = \theta^{y+ \alpha -1}\, e^{- \theta (1+1 / \beta)} . </math>

So the posterior density is also a [[gamma distribution]] <math>G(\alpha',\beta')</math>, where <math>\alpha' = y + \alpha</math> and <math>\beta' = (1+1 / \beta)^{-1}</math>. Notice also that the marginal is simply the integral of the posterior over all <math>\Theta</math>, which turns out to be a [[negative binomial distribution]].

To apply empirical Bayes, we approximate the marginal using the [[maximum likelihood]] estimate (MLE). But since the posterior is a gamma distribution, the MLE of the marginal turns out to be just the mean of the posterior, which is the point estimate <math>\operatorname{E}(\theta\mid y)</math> we need. Recalling that the mean <math>\mu</math> of a gamma distribution <math>G(\alpha', \beta')</math> is simply <math>\alpha' \beta'</math>, we have

:<math> \operatorname{E}(\theta\mid y) = \alpha' \beta' = \frac{\bar{y}+\alpha}{1+1 / \beta} = \frac{\beta}{1+\beta}\bar{y} + \frac{1}{1+\beta} (\alpha \beta). </math>

To obtain the values of <math>\alpha</math> and <math>\beta</math>, empirical Bayes prescribes estimating the prior mean <math>\alpha\beta</math> and prior variance <math>\alpha\beta^2</math> using the complete set of empirical data.

The resulting point estimate <math> \operatorname{E}(\theta\mid y) </math> is therefore a weighted average of the sample mean <math>\bar{y}</math> and the prior mean <math>\mu = \alpha\beta</math>. This turns out to be a general feature of empirical Bayes: the point estimates for the prior (i.e. the mean) will look like weighted averages of the sample estimate and the prior estimate (likewise for estimates of the variance).
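The following is a minimal sketch of this procedure in Python (not part of the original derivation). It assumes a simple moment-matching scheme for the prior hyperparameters: the prior mean <math>\alpha\beta</math> is estimated by the sample mean, and the prior variance <math>\alpha\beta^2</math> by the sample variance minus the sample mean (since the marginal negative binomial variance adds a Poisson component). The function name and data are purely illustrative.

<syntaxhighlight lang="python">
import numpy as np

def poisson_gamma_eb(y):
    """Empirical Bayes point estimates E(theta | y_i) for Poisson rates
    under a gamma prior, with hyperparameters fit by moment matching
    (an assumed estimation scheme; other choices such as marginal MLE
    are possible)."""
    y = np.asarray(y, dtype=float)
    m = y.mean()                      # estimate of the prior mean  alpha * beta
    # Marginal (negative binomial) variance is alpha*beta + alpha*beta**2,
    # so subtract the Poisson part to estimate the prior variance alpha*beta**2.
    v = max(y.var(ddof=1) - m, 1e-8)
    beta = v / m                      # (alpha*beta**2) / (alpha*beta)
    alpha = m / beta
    # Posterior mean for each observation: a weighted average of y_i and the
    # prior mean alpha*beta, with weight beta/(1+beta) on the observed count.
    w = beta / (1.0 + beta)
    return w * y + (1.0 - w) * (alpha * beta)

counts = np.array([2, 0, 5, 1, 3, 7, 2, 4])   # hypothetical Poisson counts
print(poisson_gamma_eb(counts))               # each count shrunk toward the overall mean
</syntaxhighlight>

The weight <math>\beta/(1+\beta)</math> illustrates the shrinkage behaviour described above: a diffuse prior (large <math>\beta</math>) leaves the observed counts nearly unchanged, while a concentrated prior pulls them toward the estimated prior mean.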