==Frequentism==
[[Image:Roulette wheel.jpg|left|200px|thumb|For frequentists, the probability of the ball landing in any pocket can be determined only by repeated trials in which the observed result converges to the underlying probability ''in the long run''.]]
{{Main|Frequency probability}}

Frequentists posit that the probability of an event is its relative frequency over time,<ref name=SEPIP /> (§3.4) i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. The events are assumed to be governed by some [[randomness|random]] physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see [[determinism]]), or phenomena which are essentially unpredictable. Examples of the first kind include tossing [[dice]] or spinning a [[roulette]] wheel; an example of the second kind is [[radioactive decay]]. In the case of tossing a fair coin, frequentists say that the probability of getting heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity.

If we denote by <math>\textstyle n_a</math> the number of occurrences of an event <math>\mathcal{A}</math> in <math>\textstyle n</math> trials, then if <math>\lim_{n \to +\infty}{n_a \over n}=p</math>, we say that <math>\textstyle P(\mathcal{A})=p</math>.

The frequentist view has its own problems. It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge that we can only measure a probability with some error of measurement attached, we still run into problems, as the error of measurement can itself only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular; see, for example, "[https://www.stat.berkeley.edu/~stark/Preprints/611.pdf What is the Chance of an Earthquake?]"<ref>Freedman, David and Philip B. Stark (2003). "What is the Chance of an Earthquake?" Earthquake Science and Seismic Risk.</ref>
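The convergence of the relative frequency <math>\textstyle n_a / n</math> toward the limiting value can be illustrated by simulation. The following is a minimal sketch, assuming a fair coin modelled with Python's <code>random</code> module; the function name and the chosen trial counts are illustrative, not drawn from the sources cited above.

<syntaxhighlight lang="python">
import random

def running_relative_frequency(num_trials: int, seed: int = 0) -> list[float]:
    """Simulate fair-coin tosses and return the running relative frequency of heads."""
    rng = random.Random(seed)
    heads = 0
    frequencies = []
    for n in range(1, num_trials + 1):
        heads += rng.random() < 0.5  # one toss of an (assumed) fair coin
        frequencies.append(heads / n)
    return frequencies

if __name__ == "__main__":
    freqs = running_relative_frequency(100_000)
    # The relative frequency drifts toward 1/2 as n grows, but any finite
    # run stops at a value merely close to 1/2 -- the circularity problem
    # discussed above: a finite sample never pins down the limit exactly.
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n = {n:>6}: relative frequency of heads = {freqs[n - 1]:.4f}")
</syntaxhighlight>

Different seeds (i.e., different finite series of trials) produce slightly different relative frequencies at every sample size, which is precisely the difficulty the frequentist definition faces when only finitely many repetitions are available.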