===Estimates of parameters and predictions===
It is often desired to use a posterior distribution to estimate a parameter or variable. Several methods of Bayesian estimation select [[central tendency|measurements of central tendency]] from the posterior distribution. For one-dimensional problems with a continuous posterior, a unique median exists in practice. The posterior median is attractive as a [[robust statistics|robust estimator]].<ref>{{cite book|title=Pitman's measure of closeness: A comparison of statistical estimators|first1=Pranab K.|last1=Sen|author-link1=Pranab K. Sen|first2=J. P.|last2=Keating|first3=R. L.|last3=Mason|publisher=SIAM|location=Philadelphia|year=1993}}</ref> If the posterior distribution has a finite mean, then the posterior mean can be used as an estimator:<ref>{{Cite book|last1=Choudhuri|first1=Nidhan|last2=Ghosal|first2=Subhashis|last3=Roy|first3=Anindya|date=2005-01-01|chapter=Bayesian Methods for Function Estimation|title=Handbook of Statistics|series=Bayesian Thinking|volume=25|pages=373–414|doi=10.1016/s0169-7161(05)25013-7|isbn=9780444515391|citeseerx=10.1.1.324.3052}}</ref> <math display="block">\tilde \theta = \operatorname{E}[\theta] = \int \theta \, p(\theta \mid \mathbf{X},\alpha) \, d\theta</math> Taking a value with the greatest probability defines [[maximum a posteriori estimation|maximum ''a posteriori'' (MAP)]] estimates:<ref>{{Cite web|url=https://www.probabilitycourse.com/chapter9/9_1_2_MAP_estimation.php|title=Maximum A Posteriori (MAP) Estimation|website=www.probabilitycourse.com|language=en|access-date=2017-06-02}}</ref> <math display="block">\{ \theta_{\text{MAP}}\} \subset \arg \max_\theta p(\theta \mid \mathbf{X},\alpha) .</math> There are examples where no maximum is attained, in which case the set of MAP estimates is [[empty set|empty]].
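The posterior mean and MAP estimate above can be illustrated numerically. The following sketch (an assumed example, not from the article) uses a conjugate Beta–Bernoulli model: a Beta(2, 2) prior on a coin's bias, updated with 7 heads in 10 flips, yields a Beta(9, 5) posterior, whose mean and mode are approximated on a grid.

```python
import numpy as np

# Hypothetical example: Beta(2, 2) prior on a coin's bias theta, plus 7 heads
# and 3 tails, gives a Beta(9, 5) posterior. A grid approximation of the
# posterior density illustrates the posterior mean and the MAP estimate.
theta = np.linspace(1e-6, 1 - 1e-6, 10_001)
a, b = 9, 5                                    # posterior Beta(a, b) parameters
density = theta**(a - 1) * (1 - theta)**(b - 1)
dx = theta[1] - theta[0]
density /= density.sum() * dx                  # normalize the density on the grid

posterior_mean = (theta * density).sum() * dx  # E[theta | X]
theta_map = theta[np.argmax(density)]          # maximizer of the posterior density

print(round(posterior_mean, 3))  # analytic mean: a/(a+b) = 9/14 ≈ 0.643
print(round(theta_map, 3))       # analytic mode: (a-1)/(a+b-2) = 2/3 ≈ 0.667
```

For this conjugate model the grid values can be checked against the closed-form Beta mean and mode; in non-conjugate problems the same grid (or Monte Carlo) approach applies without the analytic check.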
There are other methods of estimation that minimize the posterior ''[[risk]]'' (expected posterior loss) with respect to a [[loss function]], and these are of interest to [[statistical decision theory]] using the sampling distribution ("frequentist statistics").<ref>{{Cite web|url=http://www.cogsci.ucsd.edu/~ajyu/Teaching/Tutorials/bayes_dt.pdf|title=Introduction to Bayesian Decision Theory|last=Yu|first=Angela|website=cogsci.ucsd.edu|archive-url=https://web.archive.org/web/20130228060536/http://www.cogsci.ucsd.edu/~ajyu/Teaching/Tutorials/bayes_dt.pdf|archive-date=2013-02-28|url-status=dead}}</ref> The [[posterior predictive distribution]] of a new observation <math>\tilde{x}</math> (that is independent of previous observations) is determined by<ref>{{Cite web|url=http://people.stat.sc.edu/Hitchcock/stat535slidesday18.pdf|title=Posterior Predictive Distribution Stat Slide|last=Hitchcock|first=David|website=stat.sc.edu}}</ref> <math display="block">p(\tilde{x}|\mathbf{X},\alpha) = \int p(\tilde{x},\theta \mid \mathbf{X},\alpha) \, d\theta = \int p(\tilde{x} \mid \theta) p(\theta \mid \mathbf{X},\alpha) \, d\theta .</math>
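The posterior predictive integral can likewise be evaluated numerically. The sketch below (an assumed example, not from the article) continues the Beta(9, 5) posterior from a Beta–Bernoulli coin model and integrates <math>p(\tilde{x} \mid \theta)</math> against the posterior density on a grid to get the predictive probability of heads on the next flip.

```python
import numpy as np

# Hypothetical example: Beta(9, 5) posterior on a coin's bias theta.
# The posterior predictive p(x_new = 1 | X) = ∫ p(x_new = 1 | theta) p(theta | X) dtheta
# is approximated by a grid integral; here p(x_new = 1 | theta) = theta.
theta = np.linspace(1e-6, 1 - 1e-6, 10_001)
a, b = 9, 5
density = theta**(a - 1) * (1 - theta)**(b - 1)
dx = theta[1] - theta[0]
density /= density.sum() * dx            # normalize the posterior on the grid

p_heads = (theta * density).sum() * dx   # ∫ theta * p(theta | X) dtheta

print(round(p_heads, 3))  # analytically a/(a+b) = 9/14 ≈ 0.643
```

For a Bernoulli likelihood the predictive probability reduces to the posterior mean of <math>\theta</math>; for richer likelihoods the same integral mixes the sampling distribution over all posterior-plausible parameter values rather than plugging in a single estimate.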