Inverse probability
{{short description|Old term for the probability distribution of an unobserved variable}}

In [[probability theory]], '''inverse probability''' is an old term for the [[probability distribution]] of an unobserved variable. Today, the problem of determining an unobserved variable (by whatever method) is called [[inferential statistics]]. The method of inverse probability (assigning a probability distribution to an unobserved variable) is called [[Bayesian probability]], the distribution of data given the unobserved variable is the [[likelihood function]] (which does not by itself give a probability distribution for the parameter), and the distribution of an unobserved variable, given both data and a [[prior distribution]], is the [[posterior distribution]]. The development of the field and terminology from "inverse probability" to "Bayesian probability" is described by {{harvtxt|Fienberg|2006}}.

[[File:Youngronaldfisher2.JPG|thumb|right|200px|Ronald Fisher]]

The term "inverse probability" appears in an 1837 paper of [[Augustus De Morgan|De Morgan]], in reference to [[Laplace]]'s method of probability (developed in a 1774 paper, which independently discovered and popularized Bayesian methods, and an 1812 book), though the term "inverse probability" itself does not occur in these works.{{sfn|Fienberg|2006|p=5}} Fisher uses the term in {{harvtxt|Fisher|1922}}, referring to "the fundamental paradox of inverse probability" as the source of confusion between statistical terms that refer to the true value to be estimated and the actual value arrived at by the estimation method, which is subject to error. Jeffreys later uses the term in his defense of the methods of Bayes and Laplace, in {{harvtxt|Jeffreys|1939}}.
The term "Bayesian", which displaced "inverse probability", was introduced by [[Ronald Fisher]] in 1950.{{sfn|Fienberg|2006|p=14}} Inverse probability, variously interpreted, was the dominant approach to statistics until the development of [[Frequentist probability|frequentism]] in the early 20th century by [[Ronald Fisher]], [[Jerzy Neyman]] and [[Egon Pearson]].{{sfn|Fienberg|2006|loc=4.1 Frequentist Alternatives to Inverse Probability, pp. 7–9}} Following the development of frequentism, the terms [[Frequentist inference|frequentist]] and [[Bayesian statistics|Bayesian]] developed to contrast these approaches, and became common in the 1950s.

== Details ==
In modern terms, given a probability distribution ''p''(''x''|θ) for an observable quantity ''x'' conditional on an unobserved variable θ, the "inverse probability" is the [[posterior distribution]] ''p''(θ|''x''), which depends both on the likelihood function (the inversion of the probability distribution) and a prior distribution. The distribution ''p''(''x''|θ) itself is called the '''direct probability'''.

The ''inverse probability problem'' (in the 18th and 19th centuries) was the problem of estimating a parameter from experimental data in the experimental sciences, especially [[astronomy]] and [[biology]]. A simple example would be the problem of estimating the position of a star in the sky (at a certain time on a certain date) for purposes of [[navigation]]. Given the data, one must estimate the true position (probably by averaging). This problem would now be considered one of [[inferential statistics]].

The terms "direct probability" and "inverse probability" were in use until the middle part of the 20th century, when the terms "[[likelihood function]]" and "posterior distribution" became prevalent.

== See also ==
* [[Bayesian probability]]
* [[Bayes' theorem]]

== References ==
{{reflist}}
{{refbegin}}
* {{cite journal|last1=Fisher|first1=R. A.|title=On the Mathematical Foundations of Theoretical Statistics|journal=Philos. Trans. R. Soc. Lond. A|date=1922|volume=222A|pages=309–368}}
** See reprint in {{cite book|last1=Kotz|first1=S.|title=Breakthroughs in Statistics Volume 1|date=1992|publisher=Springer-Verlag}}
* {{cite book|last1=Jeffreys|first1=Harold|title=Theory of Probability|date=1939|publisher=Oxford University Press|edition=Third}}
* {{cite journal|last=Fienberg|first=Stephen E.|year=2006|title=When Did Bayesian Inference Become "Bayesian"?|journal=Bayesian Analysis|volume=1|issue=1|pages=1–40|url=https://projecteuclid.org/download/pdf_1/euclid.ba/1340371071|doi=10.1214/06-BA101|doi-access=free}}
{{refend}}

{{DEFAULTSORT:Inverse Probability}}
[[Category:Statistical inference]]
[[Category:Probability interpretations]]
[[Category:Bayesian statistics]]
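The "Details" section describes computing the inverse probability ''p''(θ|''x'') from the direct probability ''p''(''x''|θ) and a prior ''p''(θ). As a minimal numerical sketch of that inversion (a hypothetical two-hypothesis coin example, not drawn from the article itself):

```python
# Sketch of "inverse probability" via Bayes' theorem for a discrete
# unobserved variable theta. Hypothetical setup: theta is a coin's
# probability of heads (fair 0.5 vs biased 0.8), x is one observed toss.

prior = {0.5: 0.5, 0.8: 0.5}  # prior p(theta)

def direct_probability(x, theta):
    """Direct probability p(x|theta) of an observation given theta."""
    return theta if x == "H" else 1.0 - theta

x = "H"  # observed data: one head

# Unnormalized posterior: likelihood * prior for each hypothesis.
unnormalized = {t: direct_probability(x, t) * p for t, p in prior.items()}

# Evidence p(x): sum over all hypotheses.
evidence = sum(unnormalized.values())

# Inverse probability (posterior) p(theta|x).
posterior = {t: u / evidence for t, u in unnormalized.items()}

print(posterior)  # heads shifts belief toward the biased coin
```

Observing a head raises the posterior weight on the biased hypothesis (0.4/0.65 ≈ 0.615) relative to the fair one (0.25/0.65 ≈ 0.385), illustrating how the direct probability is "inverted" through the prior.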