Probability interpretations
{{Short description|Philosophical interpretation of the axioms of probability}}
{{more citations needed|date=April 2011}}
{{Use dmy dates|date=October 2019}}
The word "[[probability]]" has been used in a variety of ways since it was first applied to the mathematical study of [[games of chance]]. Does probability measure the real, physical tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of [[probability theory]].

There are two broad categories<ref name=SEPIP>{{Citation | last = Hájek | first = Alan | title = Interpretations of Probability | series = The Stanford Encyclopedia of Philosophy | editor1-first= Edward N. |editor1-last= Zalta | url = http://plato.stanford.edu/archives/win2012/entries/probability-interpret/ | date = 21 October 2002 | publisher = Metaphysics Research Lab, Stanford University }}</ref>{{efn|The taxonomy of probability interpretations given here is similar to that of the longer and more complete Interpretations of Probability article in the online Stanford Encyclopedia of Philosophy. References to that article include a parenthetic section number where appropriate. A partial outline of that article:
* Section 2: Criteria of adequacy for the interpretations of probability
* Section 3:
** 3.1 Classical Probability
** 3.2 Logical Probability
** 3.3 Subjective Probability
** 3.4 Frequency Interpretations
** 3.5 Propensity Interpretations}}<ref name="de Elía">{{cite journal | last1 = de Elía | first1 = Ramón | last2 = Laprise | first2 = René | title = Diversity in interpretations of probability: implications for weather forecasting | journal = Monthly Weather Review | volume = 133 | issue = 5 | pages = 1129–1143 | year = 2005 | doi=10.1175/mwr2913.1| bibcode = 2005MWRv..133.1129D | s2cid = 123135127 | doi-access = free | quote="There are several schools of thought regarding the interpretation of probabilities, none of them without flaws, internal contradictions, or paradoxes." (p 1129) "There are no standard classifications of probability interpretations, and even the more popular ones may suffer subtle variations from text to text." (p 1130)}}</ref> of '''probability interpretations''', which can be called "physical" and "evidential" probabilities. Physical probabilities, which are also called objective or [[frequency probability|frequency probabilities]], are associated with random physical systems such as roulette wheels, rolling dice and radioactive atoms. In such systems, a given type of event (such as a {{sic|die|hide=y}} yielding a six) tends to occur at a persistent rate, or "relative frequency", in a long run of trials. Physical probabilities either explain, or are invoked to explain, these stable frequencies.
The two main kinds of theory of physical probability are [[frequency probability|frequentist]] accounts (such as those of Venn,<ref>{{cite book |title= The Logic of Chance |last= Venn |first= John |author-link= John Venn |year= 1876 |publisher= MacMillan |location= London |url= https://books.google.com/books?id=es0AAAAAcAAJ }}</ref> Reichenbach<ref>{{cite book |title= The theory of probability, an inquiry into the logical and mathematical foundations of the calculus of probability |last= Reichenbach |first= Hans |author-link= Hans Reichenbach |year= 1948 |publisher= University of California Press}} English translation of the original 1935 German. ASIN: B000R0D5MS</ref> and von Mises)<ref>{{cite book | last = Mises | first = Richard |author-link= Richard von Mises | title = Probability, statistics, and truth | publisher = Dover Publications | location = New York | year = 1981 | isbn = 978-0-486-24214-9 }} English translation of the third German edition of 1951 which was published 30 years after the first German edition.</ref> and [[propensity probability|propensity]] accounts (such as those of Popper, Miller, Giere and Fetzer).<ref name=row>{{cite book | last = Rowbottom | first = Darrell | title = Probability | publisher = Polity | location = Cambridge | year = 2015 | isbn = 978-0745652573 }}</ref>

Evidential probability, also called [[Bayesian probability]], can be assigned to any statement whatsoever, even when no random process is involved, as a way to represent its subjective plausibility, or the degree to which the statement is supported by the available evidence. On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical (e.g.
Laplace's)<ref name=LaPlace /> interpretation, the subjective interpretation ([[Bruno de Finetti|de Finetti]]<ref name=deF>{{cite book |last1= de Finetti |first1= Bruno |author-link1= Bruno de Finetti |editor1-first= H. E. |editor1-last= Kyburg |others= H. E. Smokler |title= Studies in Subjective Probability |year= 1964 |publisher= Wiley |location= New York |pages= 93–158 |chapter= Foresight: its Logical laws, its Subjective Sources }} Translation of the 1937 French original with later notes added.</ref> and Savage),<ref name=savage>{{Cite book |last = Savage |first = L.J. |author-link = Leonard Jimmie Savage |year = 1954 |title = The foundations of statistics |publisher = John Wiley & Sons, Inc. |location = New York |isbn = 978-0-486-62349-8 |url-access = registration |url = https://archive.org/details/foundationsofsta00leon }}</ref> the epistemic or inductive interpretation ([[Frank P. Ramsey|Ramsey]],<ref name=ramsey>{{cite book |title= Foundations of Mathematics and Other Logical Essays |last= Ramsey |first= F. P. |author-link= Frank P. Ramsey |editor1-first= R. B. |editor1-last= Braithwaite |year= 1931 |chapter= Chapter VII, Truth and Probability (1926) |pages= 156–198 |publisher= Kegan, Paul, Trench, Trubner & Co. |location= London |chapter-url= http://fitelson.org/probability/ramsey.pdf |access-date= August 15, 2013}} Contains three chapters (essays) by Ramsey. 
The electronic version contains only those three.</ref> [[Richard Threlkeld Cox|Cox]])<ref>{{cite book |title= The algebra of probable inference |last= Cox |first= Richard Threlkeld |author-link= Richard Threlkeld Cox |year= 1961 |publisher= Johns Hopkins Press |location= Baltimore }}</ref> and the logical interpretation ([[John Maynard Keynes|Keynes]]<ref name=keynes>{{cite book |title= A Treatise on Probability |last= Keynes |first= John Maynard |author-link= John Maynard Keynes |year= 1921 |publisher= MacMillan |url= https://www.gutenberg.org/ebooks/32625 |access-date= August 15, 2013}}</ref> and [[Rudolf Carnap|Carnap]]).<ref name=carnap>{{cite book |title= Logical Foundations of Probability |last= Carnap |first= Rudolph |author-link= Rudolf Carnap |year= 1950 |publisher= University of Chicago Press |location= Chicago}} Carnap coined the notion ''"probability<sub>1</sub>"'' and ''"probability<sub>2</sub>"'' for evidential and physical probability, respectively.</ref> There are also evidential interpretations of probability covering groups, which are often labelled as 'intersubjective' (proposed by [[Donald A. Gillies|Gillies]]<ref name=gil>{{cite book | last = Gillies | first = Donald |author-link= Donald A. Gillies | title = Philosophical theories of probability | publisher = Routledge | location = London New York | year = 2000 | isbn = 978-0415182768 }}</ref> and Rowbottom).<ref name=row /> Some interpretations of probability are associated with approaches to [[statistical inference]], including theories of [[estimation theory|estimation]] and [[Statistical hypothesis testing|hypothesis testing]]. The physical interpretation, for example, is taken by followers of "frequentist" statistical methods, such as [[Ronald Fisher]]{{Dubious|date=February 2019}}, [[Jerzy Neyman]] and [[Egon Pearson]]. 
Statisticians of the opposing [[Bayesian probability|Bayesian]] school typically accept the frequency interpretation when it makes sense (although not as a definition), but there is less agreement regarding physical probabilities. Bayesians consider the calculation of evidential probabilities to be both valid and necessary in statistics. This article, however, focuses on the interpretations of probability rather than theories of statistical inference.

The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. The word "frequentist" is especially tricky. To philosophers it refers to a particular theory of physical probability, one that has more or less been abandoned. To scientists, on the other hand, "[[frequentist probability]]" is just another name for physical (or objective) probability. Those who promote Bayesian inference view "[[frequentist statistics]]" as an approach to statistical inference that is based on the frequency interpretation of probability, usually relying on the [[law of large numbers]] and characterized by what is called 'Null Hypothesis Significance Testing' (NHST). Also, the word "objective", as applied to probability, sometimes means exactly what "physical" means here, but is also used of evidential probabilities that are fixed by rational constraints, such as logical and epistemic probabilities.

{{Blockquote|It is unanimously agreed that statistics depends somehow on probability. But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis.|Savage, 1954, p.
2<ref name=savage />}}

==Philosophy==
The '''philosophy of probability''' presents problems chiefly in matters of [[epistemology]] and the uneasy interface between [[mathematics|mathematical]] concepts and ordinary language as it is used by non-mathematicians. [[Probability theory]] is an established field of study in mathematics. It has its origins in correspondence discussing the mathematics of [[games of chance]] between [[Blaise Pascal]] and [[Pierre de Fermat]] in the seventeenth century,<ref>[http://www.socsci.uci.edu/~bskyrms/bio/readings/pascal_fermat.pdf Fermat and Pascal on Probability] (@ socsci.uci.edu)</ref> and was formalized and rendered [[axiom]]atic as a distinct branch of mathematics by [[Andrey Kolmogorov]] in the twentieth century. In axiomatic form, mathematical statements about probability theory carry the same sort of epistemological confidence within the [[philosophy of mathematics]] as is shared by other mathematical statements.<ref>Laszlo E. Szabo, ''[http://philosophy.elte.hu/colloquium/2001/October/Szabo/angol011008/angol011008.html A Physicalist Interpretation of Probability] {{Webarchive|url=https://web.archive.org/web/20160304041743/http://philosophy.elte.hu/colloquium/2001/October/Szabo/angol011008/angol011008.html |date=4 March 2016 }}'' (Talk presented on the Philosophy of Science Seminar, Eötvös, Budapest, 8 October 2001.)</ref><ref>Laszlo E. Szabo, Objective probability-like things with and without objective indeterminism, Studies in History and Philosophy of Modern Physics 38 (2007) 626–634 (''[http://philosophy.elte.hu/leszabo/Preprints/lesz_no_probability_preprint.pdf Preprint]'')</ref>

The mathematical analysis originated in observations of the behaviour of game equipment such as [[playing card]]s and [[dice]], which are designed specifically to introduce random and equalized elements; in mathematical terms, they are subjects of [[Principle of indifference|indifference]].
This is not the only way probabilistic statements are used in ordinary human language: when people say that "''it will probably rain''", they typically do not mean that the outcome of rain versus not-rain is a random factor that the odds currently favor; instead, such statements are perhaps better understood as qualifying their expectation of rain with a degree of confidence. Likewise, when it is written that "the most probable explanation" of the name of [[Ludlow, Massachusetts]] "is that it was named after [[Roger Ludlow]]", what is meant here is not that Roger Ludlow is favored by a random factor, but rather that this is the most plausible explanation of the evidence, which admits other, less likely explanations.

[[Thomas Bayes]] attempted to provide a [[logic]] that could handle varying degrees of confidence; as such, [[Bayesian probability]] is an attempt to recast the representation of probabilistic statements as an expression of the degree of confidence by which the beliefs they express are held. Though probability initially had somewhat mundane motivations, its modern influence and use is widespread, ranging from [[evidence-based medicine]], through [[six sigma]], all the way to the [[probabilistically checkable proof]] and the [[string theory landscape]].

{| class="wikitable" style="text-align: center;"
|+ A summary of some interpretations of probability<ref name="de Elía" />
|-
! scope="col" |
! scope="col" | Classical
! scope="col" | Frequentist
! scope="col" | Subjective
! scope="col" | Propensity
|-
! scope="row" | Main hypothesis
| Principle of indifference || Frequency of occurrence || Degree of belief || Degree of causal connection
|-
! scope="row" | Conceptual basis
| Hypothetical symmetry || Past data and reference class || Knowledge and intuition || Present state of system
|-
! scope="row" | Conceptual approach
| Conjectural || Empirical || Subjective || Metaphysical
|-
! scope="row" | Single case possible
| Yes || No || Yes || Yes
|-
! scope="row" | Precise
| Yes || No || No || Yes
|-
! scope="row" | Problems
| Ambiguity in principle of indifference || Circular definition || Reference class problem || Disputed concept
|}

==Classical definition==
{{Main|Classical definition of probability}}
The first attempt at mathematical rigour in the field of probability, championed by [[Pierre-Simon Laplace]], is now known as the '''classical definition'''. Developed from studies of games of chance (such as rolling [[dice]]), it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely.<ref name=SEPIP /> (3.1)

{{Quotation|The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.|Pierre-Simon Laplace|A Philosophical Essay on Probabilities<ref name=LaPlace>Laplace, P. S., 1814, English edition 1951, A Philosophical Essay on Probabilities, New York: Dover Publications Inc.</ref>}}
{{clear}}
[[Image:Dice.jpg|thumb|180px|right|The classical definition of probability works well for situations with only a finite number of equally-likely outcomes.]]
This can be represented mathematically as follows: If a random experiment can result in ''N'' mutually exclusive and equally likely outcomes and if ''N<sub>A</sub>'' of these outcomes result in the occurrence of the event ''A'', the '''probability of ''A''''' is defined by
:<math>P(A) = {N_A \over N}.</math>

There are two clear limitations to the classical definition.<ref name="Spanos">{{cite book | last = Spanos | first = Aris | title = Statistical foundations of econometric modelling | publisher = Cambridge University Press | location = Cambridge New York | year = 1986 | isbn = 978-0521269124 }}</ref> Firstly, it is applicable only to situations in which there is only a 'finite' number of possible outcomes. But some important random experiments, such as [[Coin flipping|tossing a coin]] until it shows heads, give rise to an [[Infinity|infinite]] set of outcomes. And secondly, it requires an a priori determination that all possible outcomes are equally likely, without falling into the trap of [[circular reasoning]] by relying on the notion of probability. (In using the terminology "we may be equally undecided", Laplace assumed, by what has been called the "[[principle of insufficient reason]]", that all possible outcomes are equally likely if there is no known reason to assume otherwise, for which there is no obvious justification.<ref>{{cite book |title=Decision Behaviour, Analysis and Support |author=Simon French |author2=John Maule |author3=Nadia Papamichail |publisher=Cambridge University Press |year=2009 |isbn=978-1-139-48098-7 |url=https://books.google.com/books?id=K-eMAgAAQBAJ&dq=%22principle+of+insufficient+reason%22&pg=PA221 |page=221}}</ref><ref>{{cite book |title=Philosophy of Probability |author=Nils-Eric Sahlin |editor=J. P. Dubucs |publisher=Springer |year=2013 |isbn=978-94-015-8208-7 |chapter-url=https://books.google.com/books?id=8djyCAAAQBAJ&dq=%22equally+likely%22+%22no+obvious+justification%22&pg=PA30 |page=30 |chapter=2.
On Higher Order Beliefs}}</ref>)

==Frequentism==
[[Image:Roulette wheel.jpg|left|200px|thumb|For frequentists, the probability of the ball landing in any pocket can be determined only by repeated trials in which the observed result converges to the underlying probability ''in the long run''.]]
{{Main|Frequency probability}}
Frequentists posit that the probability of an event is its relative frequency over time,<ref name=SEPIP /> (3.4) i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. The events are assumed to be governed by some [[randomness|random]] physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see [[determinism]]) or phenomena which are essentially unpredictable. Examples of the first kind include tossing [[dice]] or spinning a [[roulette]] wheel; an example of the second kind is [[radioactive decay]]. In the case of tossing a fair coin, frequentists say that the probability of getting heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity. If we denote by <math>\textstyle n_a</math> the number of occurrences of an event <math>\mathcal{A}</math> in <math>\textstyle n</math> trials, then if <math>\lim_{n \to +\infty}{n_a \over n}=p</math> we say that <math>\textstyle P(\mathcal{A})=p</math>.

The frequentist view has its own problems. It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials.
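This variability across finite series can be seen in a short simulation (an illustrative sketch, not drawn from the sources cited here; the function name is invented for illustration):

```python
import random

def relative_frequency(p, n, seed):
    """Simulate n tosses of a coin whose fixed heads-probability is p,
    and return the observed relative frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(n))
    return heads / n

# Five series of 1000 tosses of a fair coin: each series yields a
# slightly different relative frequency, even though the underlying
# probability is the same (0.5) throughout.
freqs = [relative_frequency(0.5, 1000, seed) for seed in range(5)]
print(freqs)
```

Each run hovers near 1/2, but no two finite series need agree exactly, which is the difficulty raised in the text.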
If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge the fact that we can only measure a probability with some error of measurement attached, we still get into problems, as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular; see for example "[https://www.stat.berkeley.edu/~stark/Preprints/611.pdf What is the Chance of an Earthquake?]"<ref>Freedman, David and Philip B. Stark (2003). "What is the Chance of an Earthquake?" Earthquake Science and Seismic Risk.</ref>

==Subjectivism==
{{Main|Bayesian probability}}
Subjectivists, also known as '''Bayesians''' or followers of '''epistemic probability''', give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation. [[Epistemic]] or subjective probability is sometimes called '''[[Credence (statistics)|credence]]''', as opposed to the term '''chance''' for a propensity probability. Some examples of epistemic probability are to assign a probability to the proposition that a proposed law of physics is true or to determine how probable it is that a suspect committed a crime, based on the evidence presented. The use of Bayesian probability raises the philosophical debate as to whether it can contribute valid [[theory of justification|justifications]] of [[belief]]. Bayesians point to the work of [[Frank P. Ramsey|Ramsey]]<ref name=ramsey /> (p 182) and [[Bruno de Finetti|de Finetti]]<ref name=deF /> (p 103) as proving that subjective beliefs must follow the [[laws of probability]] if they are to be coherent.<ref>{{cite book | last = Jaynes | first = E. T.
| title = Probability theory the logic of science | publisher = Cambridge University Press | location = Cambridge, UK New York, NY | year = 2003 | isbn = 978-0521592710 }}</ref> Evidence casts doubt on whether humans actually hold coherent beliefs.<ref>{{cite book | last = Kahneman | first = Daniel | title = Thinking, fast and slow | publisher = Farrar, Straus and Giroux | location = New York | year = 2011 | isbn = 978-0374275631 }} The book contains numerous examples of the difference between idealized and actual thought. "[W]hen called upon to judge probability, people actually judge something else and believe they have judged probability." (p 98)</ref><ref>{{cite journal | last1 = Grove | first1 = William M. | last2 = Meehl | first2 = Paul E. | title = Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical-statistical controversy | journal = Psychology, Public Policy, and Law | volume = 2 | issue = 2 | pages = 293–332 | year = 1996 | doi = 10.1037/1076-8971.2.2.293 | url = http://www.tc.umn.edu/~pemeehl/167GroveMeehlClinstix.pdf | url-status = dead | archive-url = https://web.archive.org/web/20111030214359/http://www.tc.umn.edu/~pemeehl/167GroveMeehlClinstix.pdf | archive-date = 30 October 2011 | df = dmy-all | citeseerx = 10.1.1.471.592 }} Statistical decisions are consistently superior to the subjective decisions of experts.</ref>

The use of Bayesian probability involves specifying a [[prior probability]]. This may be obtained from consideration of whether the required prior probability is greater or lesser than a reference probability{{Clarify|date=April 2010}} associated with an [[urn model]] or a [[thought experiment]]. The issue is that for a given problem, multiple thought experiments could apply, and choosing one is a matter of judgement: different people may assign different prior probabilities; this is known as the [[reference class problem]].
The "[[sunrise problem]]" provides an example.

==Propensity==
{{Main|Propensity probability}}
Propensity theorists think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome.<ref>{{cite book | last = Peterson | first = Martin | title = An introduction to decision theory | publisher = Cambridge University Press | location = Cambridge, UK New York | year = 2009 | page = 140 | isbn = 978-0521716543 }}</ref> This kind of objective probability is sometimes called 'chance'. Propensities, or chances, are not relative frequencies, but purported causes of the observed stable relative frequencies; they are invoked to explain why repeating a certain kind of experiment will generate given outcome types at persistent rates. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives (see "single case possible" in the table above).<ref name="de Elía" /> In contrast, a propensitist is able to use the [[law of large numbers]] to explain the behaviour of long-run frequencies. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will be close to the probability of heads on each single toss. This law allows that stable long-run frequencies are a manifestation of invariant ''single-case'' probabilities.
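The law of large numbers described above can be illustrated with a small simulation (a sketch under the stated assumptions of a fixed single-case probability and independent tosses; the function name is invented for illustration):

```python
import random

def running_frequencies(p, checkpoints, seed=1):
    """Toss a coin with the same single-case probability p of heads on
    every toss (independent trials), recording the relative frequency
    of heads at each checkpoint."""
    rng = random.Random(seed)
    heads = 0
    tosses = 0
    freqs = {}
    for target in sorted(checkpoints):
        while tosses < target:
            heads += rng.random() < p
            tosses += 1
        freqs[target] = heads / tosses
    return freqs

# With a fixed single-case probability of 0.3, the long-run relative
# frequency settles near 0.3 as the number of tosses grows.
print(running_frequencies(0.3, [100, 10_000, 1_000_000]))
```

On this picture, the stable long-run frequency near 0.3 is a manifestation of the invariant single-case probability assumed on each toss.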
In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of [[Radioactive decay|decay]] of a particular [[atom]] at a particular time. The main challenge facing propensity theories is to say exactly what propensity means. (And then, of course, to show that propensity thus defined has the required properties.) At present, unfortunately, none of the well-recognised accounts of propensity comes close to meeting this challenge.

A propensity theory of probability was given by [[Charles Sanders Peirce]].<ref name="Miller 1975 123–132">{{Cite journal| last= Miller|first=Richard W.| title = Propensity: Popper or Peirce?|journal =[[British Journal for the Philosophy of Science]]| volume=26| issue=2| pages=123–132| doi=10.1093/bjps/26.2.123 | year=1975 }}</ref><ref name="Haack 1977 63–104">{{Cite journal|title=Two Fallibilists in Search of the Truth|author-link1=Susan Haack |first1=Susan |last1=Haack |first2=Konstantin |last2=Kolenda|journal=Proceedings of the Aristotelian Society|issue=Supplementary Volumes|volume=51| year=1977|pages= 63–104| jstor=4106816| doi=10.1093/aristoteliansupp/51.1.63 }}</ref><ref>{{Cite book|author-link=Arthur W. Burks|last=Burks|first=Arthur W.|year=1978|title=Chance, Cause and Reason: An Inquiry into the Nature of Scientific Evidence|publisher=University of Chicago Press|pages=[https://archive.org/details/chancecausereaso0000burk/page/694 694 pages]|isbn=978-0-226-08087-1|url=https://archive.org/details/chancecausereaso0000burk/page/694}}</ref><ref>[[Charles Sanders Peirce|Peirce, Charles Sanders]] and Burks, Arthur W., ed. (1958), the [[Charles Sanders Peirce bibliography#CP|''Collected Papers of Charles Sanders Peirce'']] Volumes 7 and 8, Harvard University Press, Cambridge, MA, also Belnap Press (of Harvard University Press) edition, vols.
7-8 bound together, 798 pages, [http://www.nlx.com/collections/95 online via InteLex], reprinted in 1998 Thoemmes Continuum.</ref> A later propensity theory was proposed by philosopher [[Karl Popper]], who had only slight acquaintance with the writings of C. S. Peirce, however.<ref name="Miller 1975 123–132"/><ref name="Haack 1977 63–104"/> Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions has propensity ''p'' of producing the outcome ''E'' means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which ''E'' occurred with limiting relative frequency ''p''. For Popper, then, a deterministic experiment would have propensity 0 or 1 for each outcome, since those generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) exist only for genuinely nondeterministic experiments.

A number of other philosophers, including [[David Miller (philosopher)|David Miller]] and [[Donald A. Gillies]], have proposed propensity theories somewhat similar to Popper's. Other propensity theorists (e.g. Ronald Giere<ref>{{cite book |author=Ronald N. Giere |title=Studies in Logic and the Foundations of Mathematics |chapter=Objective Single Case Probabilities and the Foundations of Statistics |chapter-url=http://www.sciencedirect.com/science/bookseries/0049237X |doi=10.1016/S0049-237X(09)70380-5 |volume=73 |pages=467–483 |publisher=[[Elsevier]] |year=1973 |isbn=978-0-444-10491-5|author-link=Ronald N. Giere }}</ref>) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science.
They argued, for example, that physical magnitudes such as [[electrical charge]] cannot be explicitly defined either, in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science.

What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. [[David Lewis (philosopher)|David Lewis]] called this the ''Principal Principle'',<ref name=SEPIP /> (3.3 & 3.5) a term that philosophers have mostly adopted. For example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What, then, is the correct price for a gamble that pays $1 if the coin lands heads, and nothing otherwise? According to the Principal Principle, the fair price is 32 cents.

==Logical, epistemic, and inductive probability==
{{Main|Probabilistic logic}}
It is widely recognized that the term "probability" is sometimes used in contexts where it has nothing to do with physical randomness. Consider, for example, the claim that the extinction of the dinosaurs was '''probably''' caused by a large meteorite hitting the earth. Statements such as "Hypothesis H is probably true" have been interpreted to mean that the (presently available) [[empirical evidence]] (E, say) supports H to a high degree. This degree of support of H by E has been called the '''logical''', or '''epistemic''', or '''inductive''' probability of H given E.

The differences between these interpretations are rather small, and may seem inconsequential. One of the main points of disagreement lies in the relation between probability and belief.
Logical probabilities are conceived (for example in [[John Maynard Keynes|Keynes]]' [[A Treatise on Probability|Treatise on Probability]]<ref name=keynes />) to be objective, logical relations between propositions (or sentences), and hence not to depend in any way upon belief. They are degrees of (partial) [[entailment]], or degrees of [[logical consequence]], not degrees of [[belief]]. (They do, nevertheless, dictate proper degrees of belief, as is discussed below.) [[Frank P. Ramsey]], on the other hand, was skeptical about the existence of such objective logical relations and argued that (evidential) probability is "the logic of partial belief".<ref name=ramsey /> (p 157) In other words, Ramsey held that epistemic probabilities simply ''are'' degrees of rational belief, rather than being logical relations that merely ''constrain'' degrees of rational belief.

Another point of disagreement concerns the ''uniqueness'' of evidential probability, relative to a given state of knowledge. [[Rudolf Carnap]] held, for example, that logical principles always determine a unique logical probability for any statement, relative to any body of evidence. Ramsey, by contrast, thought that while degrees of belief are subject to some rational constraints (such as, but not limited to, the axioms of probability) these constraints usually do not determine a unique value. Rational people, in other words, may differ somewhat in their degrees of belief, even if they all have the same information.

==Prediction==
{{Main|Predictive inference}}
An alternative account of probability emphasizes the role of ''prediction'' – predicting future observations on the basis of past observations, not on unobservable parameters. In its modern form, it is mainly in the Bayesian vein.
This was the main function of probability before the 20th century,<ref name="geisser">{{cite book|last=Geisser|first=Seymour|author-link=Seymour Geisser|title=Predictive Inference|url=https://books.google.com/books?id=wfdlBZ_iwZoC|year=1993|publisher=CRC Press|isbn=978-0-412-03471-8}}</ref> but fell out of favor compared to the parametric approach, which modeled phenomena as a physical system that was observed with error, such as in [[celestial mechanics]]. The modern predictive approach was pioneered by [[Bruno de Finetti]], with the central idea of [[exchangeability]] – that future observations should behave like past observations.<ref name="geisser" /> This view came to the attention of the Anglophone world with the 1974 translation of de Finetti's book,<ref name="geisser" /> and has since been propounded by such statisticians as [[Seymour Geisser]].

==Axiomatic probability==

The mathematics of probability can be developed on an entirely axiomatic basis that is independent of any interpretation: see the articles on [[probability theory]] and [[probability axioms]] for a detailed treatment.
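The axioms referred to above can be stated compactly (this is the standard Kolmogorov formulation; see [[probability axioms]] for the full treatment): for events <math>E</math> drawn from a sample space <math>\Omega</math>,

:<math>P(E) \ge 0, \qquad P(\Omega) = 1, \qquad P\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)</math>

where <math>E_1, E_2, \ldots</math> are mutually exclusive events. Each of the interpretations discussed above is expected to deliver probability values satisfying these same axioms.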
==See also==
* [[Coverage probability]]
* [[Frequency (statistics)]]
* [[Negative probability]]
* [[Philosophy of mathematics]]
* [[Philosophy of statistics]]
* [[Pignistic probability]]
* [[Probability amplitude]] (quantum mechanics)
* [[Sunrise problem]]
* [[Bayesian epistemology]]

==Notes==
{{Notelist}}

==References==
{{Reflist|30em}}

==Further reading==
* {{cite book | last = Cohen | first = L |author-link= Laurence Jonathan Cohen | title = An introduction to the philosophy of induction and probability | url = https://archive.org/details/introductiontoph0000cohe | url-access = registration | publisher = Clarendon Press Oxford University Press | location = Oxford New York | year = 1989 | isbn = 978-0198750789 }}
* {{cite book | last = Eagle | first = Antony|title = Philosophy of probability : contemporary readings | publisher = Routledge | location = Abingdon, Oxon New York | year = 2011 | isbn = 978-0415483872 }}
* {{cite book | last = Gillies | first = Donald |author-link= Donald A. Gillies | title = Philosophical theories of probability | publisher = Routledge | location = London New York | year = 2000 | isbn = 978-0415182768 }} A comprehensive monograph covering the four principal current interpretations: logical, subjective, frequency, propensity. Also proposes a novel intersubjective interpretation.
* {{cite book | last = Hacking | first = Ian |author-link= Ian Hacking | title = The emergence of probability : a philosophical study of early ideas about probability, induction and statistical inference | publisher = Cambridge University Press | location = Cambridge New York | year = 2006 | isbn = 978-0521685573 }}
* [[Paul Humphreys (philosopher)|Paul Humphreys]], ed. (1994) ''[[Patrick Suppes]]: Scientific Philosopher'', Synthese Library, Springer-Verlag.
** Vol. 1: ''Probability and Probabilistic Causality''.
** Vol. 2: ''Philosophy of Physics, Theory Structure and Measurement, and Action Theory''.
* Jackson, Frank, and Robert Pargetter (1982) "Physical Probability as a Propensity," ''Noûs'' 16(4): 567–583.
* {{cite book | last = Khrennikov | first = Andrei | title = Interpretations of probability | publisher = Walter de Gruyter | location = Berlin New York | year = 2009 | edition = 2nd | isbn = 978-3110207484 }} Covers mostly non-Kolmogorov probability models, particularly with respect to [[quantum physics]].
* {{cite book | last = Lewis | first = David |author-link= David Kellogg Lewis | title = Philosophical papers | publisher = Oxford University Press | location = New York | year = 1983 | isbn = 978-0195036466 }}
* {{cite book | last = Plato | first = Jan von | title = Creating modern probability : its mathematics, physics, and philosophy in historical perspective | publisher = Cambridge University Press | location = Cambridge England New York | year = 1994 | isbn = 978-0521597357 }}
* {{cite book | last = Rowbottom | first = Darrell | title = Probability | publisher = Polity | location = Cambridge | year = 2015 | isbn = 978-0745652573 }} A highly accessible introduction to the interpretation of probability. Covers all the main interpretations, and proposes a novel group level (or 'intersubjective') interpretation. Also covers fallacies and applications of interpretations in the social and natural sciences.
* {{cite book | last = Skyrms | first = Brian |author-link= Brian Skyrms | title = Choice and chance : an introduction to inductive logic | publisher = Wadsworth/Thomson Learning | location = Australia Belmont, CA | year = 2000 | isbn = 978-0534557379 }}

==External links==
{{Commons category}}
* {{cite SEP |url-id=probability-interpret |title=Interpretations of Probability}}
* {{InPho|idea|1155|Interpretations of Probability}}
* {{PhilPapers|category|interpretation-of-probability/|Interpretation of Probability}}

{{DEFAULTSORT:Probability Interpretations}}
[[Category:Probability interpretations| ]]
[[Category:Probability theory]]
[[Category:Philosophy of statistics]]
[[Category:Philosophy of science]]
[[Category:Epistemology]]
[[Category:Interpretation (philosophy)]]