{{short description|Old term for the probability distribution of an unobserved variable}}
In [[probability theory]], '''inverse probability''' is an old term for the [[probability distribution]] of an unobserved variable. Today, the problem of determining an unobserved variable (by whatever method) is called [[inferential statistics]]. The method of inverse probability (assigning a probability distribution to an unobserved variable) is called [[Bayesian probability]], the distribution of data given the unobserved variable is the [[likelihood function]] (which does not by itself give a probability distribution for the parameter), and the distribution of an unobserved variable, given both data and a [[prior distribution]], is the [[posterior distribution]]. The development of the field and terminology from "inverse probability" to "Bayesian probability" is described by {{harvtxt|Fienberg|2006}}.

[[File:Youngronaldfisher2.JPG|thumb|right|200px|Ronald Fisher]]
The term "inverse probability" appears in an 1837 paper of [[Augustus De Morgan|De Morgan]], in reference to [[Laplace]]'s method of probability, developed in a 1774 paper (which independently discovered and popularized Bayesian methods) and an 1812 book, though the term "inverse probability" itself does not occur in these works.{{sfn|Fienberg|2006|p=5}} Fisher uses the term in {{harvtxt|Fisher|1922}}, referring to "the fundamental paradox of inverse probability" as the source of the confusion between statistical terms that refer to the true value to be estimated and those that refer to the actual value arrived at by the estimation method, which is subject to error. Later, Jeffreys uses the term in his defense of the methods of Bayes and Laplace in {{harvtxt|Jeffreys|1939}}. The term "Bayesian", which displaced "inverse probability", was introduced by [[Ronald Fisher]] in 1950.{{sfn|Fienberg|2006|p=14}}

Inverse probability, variously interpreted, was the dominant approach to statistics until the development of [[Frequentist probability|frequentism]] in the early 20th century by [[Ronald Fisher]], [[Jerzy Neyman]] and [[Egon Pearson]].{{sfn|Fienberg|2006|loc=4.1 Frequentist Alternatives to Inverse Probability, pp. 7–9}} Following the development of frequentism, the terms [[Frequentist inference|frequentist]] and [[Bayesian statistics|Bayesian]] developed to contrast these approaches, and became common in the 1950s.
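
In the modern notation that accompanied the shift to "Bayesian" terminology, the relationship between the quantities defined above can be written compactly; here the symbols <math>\theta</math> for the unobserved variable and <math>x</math> for the observed data are illustrative choices rather than part of the historical terminology. The posterior distribution is proportional to the likelihood multiplied by the prior:

:<math>p(\theta \mid x) = \frac{p(x \mid \theta)\,p(\theta)}{\int p(x \mid \theta')\,p(\theta')\,\mathrm{d}\theta'} \;\propto\; p(x \mid \theta)\,p(\theta).</math>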