{{short description|Conditional probability used in Bayesian statistics}}
{{Bayesian statistics}}

The '''posterior probability''' is a type of [[conditional probability]] that results from [[Bayesian updating|updating]] the [[prior probability]] with information summarized by the [[likelihood function|likelihood]] via an application of [[Bayes' rule]].<ref>{{cite book |first=Ben |last=Lambert |chapter=The posterior – the goal of Bayesian inference |pages=121–140 |title=A Student's Guide to Bayesian Statistics |publisher=Sage |year=2018 |isbn=978-1-4739-1636-4 }}</ref> From an [[Bayesian epistemology|epistemological perspective]], the posterior probability contains everything there is to know about an uncertain proposition (such as a scientific hypothesis or parameter values), given prior knowledge and a mathematical model describing the observations available at a particular time.<ref>{{cite thesis |last=Grossman |first=Jason |title=Inferences from observations to simple statistical hypotheses |type=PhD thesis |publisher=University of Sydney |year=2005 |hdl=2123/9107 }}</ref> After the arrival of new information, the current posterior probability may serve as the prior in another round of Bayesian updating.<ref>{{Cite web |last=Etz |first=Alex |date=2015-07-25 |title=Understanding Bayes: Updating priors via the likelihood |url=https://alexanderetz.com/2015/07/25/understanding-bayes-updating-priors-via-the-likelihood/ |access-date=2022-08-18 |website=The Etz-Files |language=en}}</ref>

In the context of [[Bayesian statistics]], the '''posterior [[probability distribution]]''' usually describes the epistemic uncertainty about [[statistical parameter]]s conditional on a collection of observed data. From a given posterior distribution, various [[point estimate|point]] and [[interval estimate]]s can be derived, such as the [[Maximum a posteriori estimation|maximum a posteriori]] (MAP) estimate or the [[highest posterior density interval]] (HPDI).<ref>{{cite book |first=Jeff |last=Gill |title=Bayesian Methods: A Social and Behavioral Sciences Approach |edition=Third |publisher=Chapman & Hall |year=2014 |isbn=978-1-4398-6248-3 |chapter=Summarizing Posterior Distributions with Intervals |pages=42–48 }}</ref> While conceptually simple, the posterior distribution is generally not tractable and therefore needs to be approximated, either analytically or numerically.<ref>{{cite book |first=S. James |last=Press |chapter=Approximations, Numerical Methods, and Computer Programs |pages=69–102 |title=Bayesian Statistics: Principles, Models, and Applications |location=New York |publisher=John Wiley & Sons |year=1989 |isbn=0-471-63729-7 }}</ref>
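For a parameter <math>\theta</math> and observed data <math>x</math>, Bayes' rule expresses the posterior density as

:<math>p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)},</math>

where <math>p(\theta)</math> is the prior, <math>p(x \mid \theta)</math> is the likelihood, and the marginal likelihood <math>p(x)</math> normalizes the product.

As a minimal sketch of numerical approximation (the data of 7 successes in 10 trials and the uniform prior are hypothetical choices for illustration, not drawn from the sources above), the posterior for a binomial proportion can be computed on a grid:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical example: posterior for a binomial proportion theta,
# given 7 successes in 10 trials and a flat Beta(1, 1) prior.
theta = np.linspace(0.0, 1.0, 1001)            # grid over the parameter space
prior = np.ones_like(theta)                    # uniform prior density
likelihood = theta**7 * (1 - theta)**3         # binomial likelihood, up to a constant
unnormalized = prior * likelihood              # numerator of Bayes' rule
posterior = unnormalized / unnormalized.sum()  # normalize over the grid

# MAP estimate: the grid point with the highest posterior mass.
theta_map = theta[np.argmax(posterior)]
print(f"MAP estimate: {theta_map:.2f}")        # prints 0.70, the mode of Beta(8, 4)
</syntaxhighlight>

Here the grid sum stands in for the intractable normalizing integral <math>p(x)</math>; in higher dimensions this approach is replaced by methods such as [[Markov chain Monte Carlo]].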