Probability theory
===Law of large numbers===
{{Main|Law of large numbers}}

Common intuition suggests that if a fair coin is tossed many times, then ''roughly'' half of the time it will turn up ''heads'', and the other half it will turn up ''tails''. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of ''heads'' to the number of ''tails'' will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the {{em|law of large numbers}}. This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from those foundations as a theorem. Since it links theoretically derived probabilities to their actual frequency of occurrence in the real world, the law of large numbers is considered a pillar in the history of statistical theory and has had widespread influence.<ref>{{cite web|url=http://www.leithner.com.au/circulars/circular17.htm|archive-url=https://web.archive.org/web/20140126113323/http://www.leithner.com.au/circulars/circular17.htm|archive-date=2014-01-26 |title=Leithner & Co Pty Ltd - Value Investing, Risk and Risk Management - Part I |publisher=Leithner.com.au |date=2000-09-15 |access-date=2012-02-12}}</ref><!-- Note to editors: Please provide better citation for the historical importance of LLN if you have it -->

The {{em|law of large numbers}} (LLN) states that the sample average
:<math>\overline{X}_n=\frac1n{\sum_{k=1}^n X_k}</math>
of a [[sequence]] of [[independent and identically distributed random variables]] <math>X_k</math> converges towards their common [[Expected value|expectation]] (expected value) <math>\mu</math>, provided that the expectation of <math>|X_k|</math> is finite.
It is the different forms of [[convergence of random variables]] that separate the ''weak'' and the ''strong'' law of large numbers:<ref>{{Cite book|last=Dekking|first=Michel|url=http://archive.org/details/modernintroducti00fmde|title=A modern introduction to probability and statistics : understanding why and how|date=2005|publisher=London : Springer|others=Library Genesis|isbn=978-1-85233-896-1|pages=180–194|chapter=Chapter 13: The law of large numbers}}</ref>
:Weak law: <math>\displaystyle \overline{X}_n \, \xrightarrow{P} \, \mu</math> for <math>n \to \infty</math>
:Strong law: <math>\displaystyle \overline{X}_n \, \xrightarrow{\mathrm{a.\,s.}} \, \mu </math> for <math> n \to \infty .</math>
It follows from the LLN that if an event of probability ''p'' is observed repeatedly during independent experiments, the ratio of the number of occurrences of that event to the total number of repetitions converges towards ''p''. For example, if <math>Y_1,Y_2,...\,</math> are independent [[Bernoulli distribution|Bernoulli random variables]] taking the value 1 with probability ''p'' and 0 with probability 1 − ''p'', then <math>\textrm{E}(Y_i)=p</math> for all ''i'', so that <math>\bar Y_n</math> converges to ''p'' [[almost surely]].
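The Bernoulli case above is easy to check empirically. The following is a minimal simulation sketch (the function name <code>sample_mean</code> and the fixed seed are illustrative choices, not part of the theorem): it draws ''n'' independent Bernoulli(''p'') variables and returns their sample average, which settles near ''p'' as ''n'' grows.

```python
import random

def sample_mean(n, p, seed=0):
    """Average of n independent Bernoulli(p) draws (illustrative sketch)."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    # Each draw is 1 with probability p, else 0; the mean estimates p.
    return sum(1 if rng.random() < p else 0 for _ in range(n)) / n

# The sample average approaches p = 0.5 as n increases.
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n, 0.5))
```

With a fair coin (''p'' = 0.5), the printed averages typically fluctuate noticeably for small ''n'' and tighten around 0.5 for large ''n'', matching the convergence the law describes.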