==Interpretations==
{{Main|Probability interpretations}}
When dealing with [[Experiment (probability theory)|random experiments]] – i.e., [[experiment]]s that are [[Randomness|random]] and [[Well-defined expression|well-defined]] – in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes divided by the total number of all outcomes. This is referred to as '''theoretical probability''' (in contrast to [[empirical probability]], which deals with probabilities in the context of real experiments). The probability is a number between 0 and 1; the larger the probability, the more likely the desired outcome is to occur. For example, tossing a coin twice yields the outcomes "head-head", "head-tail", "tail-head", and "tail-tail". The probability of getting the outcome "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. The probability of getting at least one head is 3 out of 4, or 0.75, so this event is more likely to occur.

However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability:
* [[Objectivity (philosophy)|Objectivists]] assign numbers to describe some objective or physical state of affairs. The most popular version of objective probability is [[frequentist probability]], which claims that the probability of a random event denotes the ''[[Frequency (statistics)|relative frequency]] of occurrence'' of an experiment's outcome when the experiment is repeated indefinitely. This interpretation considers probability to be the relative frequency "in the long run" of outcomes.<ref>{{cite book |title=The Logic of Statistical Inference |first=Ian |last=Hacking |author-link=Ian Hacking |year=1965 |publisher=Cambridge University Press |isbn=978-0-521-05165-1 }}{{page needed |date=June 2012 }}</ref> A modification of this is [[propensity probability]], which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once.
* [[Subjective probability#Objective and subjective Bayesian probabilities|Subjectivists]] assign numbers per subjective probability, that is, as a [[Credence (statistics)|degree of belief]].<ref>{{cite journal |title=Logical foundations and measurement of subjective probability |first=Bruno |last=de Finetti |journal=Acta Psychologica |volume=34 |year=1970 |pages=129–145 |doi=10.1016/0001-6918(70)90012-0 }}</ref> The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E",<ref>{{cite journal |last=Hájek |first=Alan |title=Interpretations of Probability |url=http://plato.stanford.edu/archives/win2012/entries/probability-interpret/ |journal=The Stanford Encyclopedia of Philosophy |edition=Winter 2012 |editor=Edward N. Zalta |access-date=22 April 2013 |date=2002-10-21 }}</ref> although that interpretation is not universally agreed upon.<ref>{{Cite book |section=Section A.2 The de Finetti system of probability |title=Probability Theory: The Logic of Science |last=Jaynes |first=E.T. |date=2003 |publisher=Cambridge University Press |isbn=978-0-521-59271-0 |editor-last=Bretthorst |editor-first=G. Larry |edition=1 |language=en}}</ref> The most popular version of subjective probability is [[Bayesian probability]], which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some (subjective) [[prior probability distribution]]. These data are incorporated in a [[likelihood function]]. The product of the prior and the likelihood, when normalized, results in a [[posterior probability distribution]] that incorporates all the information known to date.<ref>{{cite book |title=Introduction to Mathematical Statistics |first1=Robert V. |last1=Hogg |first2=Allen |last2=Craig |first3=Joseph W. |last3=McKean |edition=6th |year=2004 |location=Upper Saddle River |publisher=Pearson |isbn=978-0-13-008507-8 }}{{page needed |date=June 2012}}</ref> By [[Aumann's agreement theorem]], Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions, regardless of how much information the agents share.<ref>{{Cite book |section=Section 5.3 Converging and diverging views |title=Probability Theory: The Logic of Science |last=Jaynes |first=E.T. |date=2003 |publisher=Cambridge University Press |isbn=978-0-521-59271-0 |editor-last=Bretthorst |editor-first=G. Larry |edition=1 |language=en}}</ref>
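The Bayesian update described above (posterior = normalized product of prior and likelihood) can be sketched numerically. The following is a minimal illustration, not part of the article: it reuses the two-toss coin example, assuming three hypothetical candidate values for the coin's heads-probability and an illustrative prior that favors a fair coin; the data observed are two heads ("head-head").

```python
def posterior(prior, likelihood):
    """Multiply prior by likelihood pointwise, then normalize to sum to 1."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Hypothetical candidate values for the coin's heads-probability theta.
thetas = [0.25, 0.5, 0.75]

# Illustrative subjective prior: the expert believes the coin is probably fair.
prior = [0.2, 0.6, 0.2]

# Data: two tosses, both heads. P("head-head" | theta) = theta**2.
likelihood = [t ** 2 for t in thetas]

post = posterior(prior, likelihood)
```

After seeing two heads, the posterior shifts weight toward larger values of theta relative to the prior, while still concentrating on the fair-coin hypothesis because the prior favored it; with more data, the likelihood would increasingly dominate the subjective prior.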