Bayes' theorem
===Events===

====Simple form====
For events ''A'' and ''B'', provided that ''P''(''B'') ≠ 0,

:<math>P(A| B) = \frac{P(B | A) P(A)}{P(B)} . </math>

In many applications, for instance in [[Bayesian inference]], the event ''B'' is fixed in the discussion and we wish to consider the effect of its having been observed on our belief in various possible events ''A''. In such situations the denominator of the last expression, the probability of the given evidence ''B'', is fixed; what we want to vary is ''A''. Bayes' theorem shows that the posterior probabilities are [[proportionality (mathematics)|proportional]] to the numerator, so the last equation becomes:

:<math>P(A| B) \propto P(A) \cdot P(B| A) .</math>

In words, the posterior is proportional to the prior times the likelihood. This version of Bayes' theorem is known as Bayes' rule.<ref>{{Cite book |last=Lee |first=Peter M. |title=Bayesian Statistics |chapter-url=http://www-users.york.ac.uk/~pml1/bayes/book.htm |publisher=[[John Wiley & Sons|Wiley]] |year=2012 |isbn=978-1-1183-3257-3 |chapter=Chapter 1}}</ref>

If events ''A''<sub>1</sub>, ''A''<sub>2</sub>, ..., are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event ''A'', the event ''A'' itself and its complement ¬''A'' are exclusive and exhaustive. Denoting the constant of proportionality by ''c'', we have:

:<math>P(A| B) = c \cdot P(A) \cdot P(B| A) \text{ and } P(\neg A| B) = c \cdot P(\neg A) \cdot P(B| \neg A). </math>

Adding these two formulas we deduce that:

:<math> 1 = c \cdot (P(B| A)\cdot P(A) + P(B| \neg A) \cdot P(\neg A)),</math>

or

:<math> c = \frac{1}{P(B| A)\cdot P(A) + P(B| \neg A) \cdot P(\neg A)} = \frac 1 {P(B)}. </math>

====Alternative form====
{| class="wikitable floatright"
|+ [[Contingency table]]
! {{diagonal split header|<br />Proposition|Background}}
! {{mvar|B}}
! {{tmath|\lnot B}}<br />(not {{mvar|B}})
! Total
|-
! {{mvar|A}}
| <math>P(B|A)\cdot P(A)</math><br /><math>= P(A|B)\cdot P(B)</math>
| <math>P(\neg B|A)\cdot P(A)</math><br /><math>= P(A|\neg B)\cdot P(\neg B)</math>
| style="text-align:center;" | {{tmath|P(A)}}
|-
! {{tmath|\neg A}}<br />(not {{mvar|A}})
| <math>P(B|\neg A)\cdot P(\neg A)</math><br /><math>= P(\neg A|B)\cdot P(B)</math>
| <math>P(\neg B|\neg A)\cdot P(\neg A)</math><br /><math>= P(\neg A|\neg B)\cdot P(\neg B)</math>
| <math>P(\neg A) = 1-P(A)</math>
|-
| colspan="5" style="padding:0;" |
|-
! Total
| style="text-align:center;" | {{tmath|P(B)}}
| style="text-align:center;" | <math>P(\neg B) = 1-P(B)</math>
| style="text-align:center;" | 1
|}

Another form of Bayes' theorem for two competing statements or hypotheses is:

:<math>P(A| B) = \frac{P(B| A) P(A)}{ P(B| A) P(A) + P(B| \neg A) P(\neg A)}.</math>

For an epistemological interpretation: for proposition ''A'' and evidence or background ''B'',<ref>{{cite web|title=Bayes' Theorem: Introduction|url=http://www.trinity.edu/cbrown/bayesweb/|website=Trinity University|url-status=dead|archive-url=https://web.archive.org/web/20040821012342/http://www.trinity.edu/cbrown/bayesweb/|archive-date=21 August 2004|access-date=5 August 2014}}</ref>
* <math>P(A)</math> is the [[prior probability]], the initial degree of belief in ''A''.
* <math>P(\neg A)</math> is the corresponding initial degree of belief in ''not-A'', that ''A'' is false, where <math> P(\neg A) = 1-P(A). </math>
* <math>P(B| A)</math> is the [[conditional probability]] or likelihood, the degree of belief in ''B'' given that ''A'' is true.
* <math>P(B|\neg A)</math> is the [[conditional probability]] or likelihood, the degree of belief in ''B'' given that ''A'' is false.
* <math>P(A| B)</math> is the [[posterior probability]], the probability of ''A'' after taking into account ''B''.
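The two-proposition form above can be checked numerically. The following Python sketch (the prior and likelihood values are made-up illustrations, not taken from the article) computes the posterior ''P''(''A'' | ''B'') from ''P''(''A''), ''P''(''B'' | ''A''), and ''P''(''B'' | ¬''A''):

```python
# Numerical illustration of the two-proposition form of Bayes' theorem.
# All probability values below are illustrative assumptions.

def posterior(prior_a, likelihood_b_given_a, likelihood_b_given_not_a):
    """P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|not A)P(not A))."""
    numerator = likelihood_b_given_a * prior_a
    # Denominator is P(B), expanded over A and its complement.
    evidence = numerator + likelihood_b_given_not_a * (1 - prior_a)
    return numerator / evidence

# Hypothetical example: P(A) = 0.01, P(B|A) = 0.9, P(B|not A) = 0.05
print(round(posterior(0.01, 0.9, 0.05), 4))  # 0.1538
```

Note how a likely piece of evidence ''B'' still yields a modest posterior when the prior ''P''(''A'') is small: the numerator 0.009 is dominated by the false-positive mass 0.0495 in the denominator.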
====Extended form====
Often, for some [[partition of a set|partition]] {''A<sub>j</sub>''} of the [[sample space]], the [[Sample space|event space]] is given in terms of ''P''(''A<sub>j</sub>'') and ''P''(''B'' | ''A<sub>j</sub>''). It is then useful to compute ''P''(''B'') using the [[law of total probability]]:

:<math>P(B) = \sum_j P(B \cap A_j),</math>

or (using the multiplication rule for conditional probability),<ref>{{Cite web |title=Bayes Theorem - Formula, Statement, Proof {{!}} Bayes Rule |url=https://www.cuemath.com/data/bayes-theorem/ |access-date=2023-10-20 |website=Cuemath |language=en}}</ref>

:<math>P(B) = \sum_j P(B| A_j) P(A_j),</math>

so that

:<math>P(A_i| B) = \frac{P(B| A_i) P(A_i)}{\sum\limits_j P(B| A_j) P(A_j)}.</math>

In the special case where ''A'' is a [[binary variable]]:

:<math>P(A| B) = \frac{P(B| A) P(A)}{ P(B| A) P(A) + P(B| \neg A) P(\neg A)}.</math>
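The extended form can likewise be sketched in a few lines of Python. The priors and likelihoods below are made-up values for a three-event partition; the point is that the denominator (the law of total probability) normalizes the joint terms so the posteriors sum to one:

```python
# Sketch of the extended form of Bayes' theorem over a partition {A_j}.
# The prior and likelihood values are illustrative assumptions.

def posteriors(priors, likelihoods):
    """Return P(A_i|B) for each i, given P(A_i) and P(B|A_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]  # P(B and A_j)
    evidence = sum(joint)  # P(B) by the law of total probability
    return [j / evidence for j in joint]

priors = [0.5, 0.3, 0.2]       # P(A_1), P(A_2), P(A_3); must sum to 1
likelihoods = [0.1, 0.4, 0.5]  # P(B|A_j)
post = posteriors(priors, likelihoods)
print([round(x, 4) for x in post])  # [0.1852, 0.4444, 0.3704]
```

Observing ''B'' shifts belief away from ''A''<sub>1</sub> (high prior, low likelihood) toward ''A''<sub>2</sub> and ''A''<sub>3</sub>, exactly as the proportionality form predicts.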