====Simple form====
For events ''A'' and ''B'', provided that ''P''(''B'') ≠ 0,

:<math>P(A| B) = \frac{P(B | A) P(A)}{P(B)} . </math>

In many applications, for instance in [[Bayesian inference]], the event ''B'' is fixed in the discussion and we wish to consider the effect of its having been observed on our belief in various possible events ''A''. In such situations the denominator of the last expression, the probability of the given evidence ''B'', is fixed; what we want to vary is ''A''. Bayes' theorem shows that the posterior probabilities are [[proportionality (mathematics)|proportional]] to the numerator, so the last equation becomes:

:<math>P(A| B) \propto P(A) \cdot P(B| A) .</math>

In words, the posterior is proportional to the prior times the likelihood. This version of Bayes' theorem is known as Bayes' rule.<ref>{{Cite book |last=Lee |first=Peter M. |title=Bayesian Statistics |chapter=Chapter 1 |chapter-url=http://www-users.york.ac.uk/~pml1/bayes/book.htm |publisher=[[John Wiley & Sons|Wiley]] |year=2012 |isbn=978-1-1183-3257-3}}</ref>

If events ''A''<sub>1</sub>, ''A''<sub>2</sub>, ..., are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event ''A'', the event ''A'' itself and its complement ¬''A'' are exclusive and exhaustive. Denoting the constant of proportionality by ''c'', we have:

:<math>P(A| B) = c \cdot P(A) \cdot P(B| A) \text{ and } P(\neg A| B) = c \cdot P(\neg A) \cdot P(B| \neg A). </math>

Adding these two formulas we deduce that:

:<math> 1 = c \cdot (P(B| A)\cdot P(A) + P(B| \neg A) \cdot P(\neg A)),</math>

or

:<math> c = \frac{1}{P(B| A)\cdot P(A) + P(B| \neg A) \cdot P(\neg A)} = \frac 1 {P(B)}. </math>
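
As a numerical sketch of the last formula, take the hypothetical values ''P''(''A'') = 0.01, ''P''(''B'' | ''A'') = 0.9 and ''P''(''B'' | ¬''A'') = 0.05 (illustrative figures only, chosen to show the arithmetic). Then

:<math> c = \frac{1}{0.9 \cdot 0.01 + 0.05 \cdot 0.99} = \frac{1}{0.0585} \approx 17.09, </math>

so that

:<math> P(A| B) \approx 17.09 \cdot 0.01 \cdot 0.9 \approx 0.154 \quad \text{and} \quad P(\neg A| B) \approx 17.09 \cdot 0.99 \cdot 0.05 \approx 0.846, </math>

and the two posterior probabilities indeed sum to one.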