==Convergence of random variables==
{{Main|Convergence of random variables}}
In probability theory, there are several notions of convergence for [[random variable]]s. They are listed below in the order of strength, i.e., any subsequent notion of convergence in the list implies convergence according to all of the preceding notions.

;Weak convergence: A sequence of random variables <math>X_1,X_2,\dots\,</math> converges {{em|weakly}} to the random variable <math>X\,</math> if their respective [[cumulative distribution function]]s <math>F_1,F_2,\dots\,</math> converge to the cumulative distribution function <math>F\,</math> of <math>X\,</math>, wherever <math>F\,</math> is [[continuous function|continuous]]. Weak convergence is also called {{em|convergence in distribution}}.
:Most common shorthand notation: <math>\displaystyle X_n \, \xrightarrow{\mathcal D} \, X</math>
;Convergence in probability: The sequence of random variables <math>X_1,X_2,\dots\,</math> is said to converge towards the random variable <math>X\,</math> {{em|in probability}} if <math>\lim_{n\rightarrow\infty}P\left(\left|X_n-X\right|\geq\varepsilon\right)=0</math> for every ε > 0.
:Most common shorthand notation: <math>\displaystyle X_n \, \xrightarrow{P} \, X</math>
;Strong convergence: The sequence of random variables <math>X_1,X_2,\dots\,</math> is said to converge towards the random variable <math>X\,</math> {{em|strongly}} if <math>P(\lim_{n\rightarrow\infty} X_n=X)=1</math>. Strong convergence is also known as {{em|almost sure convergence}}.
:Most common shorthand notation: <math>\displaystyle X_n \, \xrightarrow{\mathrm{a.s.}} \, X</math>

As the names indicate, weak convergence is weaker than strong convergence. In fact, strong convergence implies convergence in probability, and convergence in probability implies weak convergence. The reverse statements are not always true, as the following counterexample shows.
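A standard counterexample (textbook material, not drawn from the sources cited in this section) shows that weak convergence need not imply convergence in probability. Let <math>X\,</math> be a standard normal random variable and set <math>X_n = -X\,</math> for every <math>n</math>. By the symmetry of the normal distribution, every <math>X_n\,</math> has the same distribution function as <math>X\,</math>, so <math>X_n \, \xrightarrow{\mathcal D} \, X</math> trivially. However,
:<math>P\left(\left|X_n-X\right|\geq\varepsilon\right)=P\left(2|X|\geq\varepsilon\right)>0</math>
for every ε > 0, and this probability does not depend on <math>n</math>, so <math>X_n\,</math> does not converge to <math>X\,</math> in probability.

===Law of large numbers===
{{Main|Law of large numbers}}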
Common intuition suggests that if a fair coin is tossed many times, then ''roughly'' half of the time it will turn up ''heads'', and the other half it will turn up ''tails''. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of ''heads'' to the number of ''tails'' will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the {{em|law of large numbers}}. This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem. Since it links theoretically derived probabilities to their actual frequency of occurrence in the real world, the law of large numbers is considered a pillar in the history of statistical theory and has had widespread influence.<ref>{{cite web |url=http://www.leithner.com.au/circulars/circular17.htm |archive-url=https://web.archive.org/web/20140126113323/http://www.leithner.com.au/circulars/circular17.htm |archive-date=2014-01-26 |title=Leithner & Co Pty Ltd - Value Investing, Risk and Risk Management - Part I |publisher=Leithner.com.au |date=2000-09-15 |access-date=2012-02-12}}</ref>

The {{em|law of large numbers}} (LLN) states that the sample average
:<math>\overline{X}_n=\frac1n{\sum_{k=1}^n X_k}</math>
of a [[sequence]] of [[independent and identically distributed random variables]] <math>X_k</math> converges towards their common [[Expected value|expectation]] (expected value) <math>\mu</math>, provided that the expectation of <math>|X_k|</math> is finite. It is the different forms of [[convergence of random variables]] that separate the ''weak'' and the ''strong'' law of large numbers:<ref>{{Cite book |last=Dekking |first=Michel |url=http://archive.org/details/modernintroducti00fmde |title=A modern introduction to probability and statistics: understanding why and how |date=2005 |publisher=Springer |isbn=978-1-85233-896-1 |pages=180–194 |chapter=Chapter 13: The law of large numbers}}</ref>
:Weak law: <math>\displaystyle \overline{X}_n \, \xrightarrow{P} \, \mu</math> for <math>n \to \infty</math>
:Strong law: <math>\displaystyle \overline{X}_n \, \xrightarrow{\mathrm{a.\,s.}} \, \mu </math> for <math> n \to \infty .</math>

It follows from the LLN that if an event of probability ''p'' is observed repeatedly during independent experiments, the ratio of the observed frequency of that event to the total number of repetitions converges towards ''p''. For example, if <math>Y_1,Y_2,\dots\,</math> are independent [[Bernoulli distribution|Bernoulli random variables]] taking the value 1 with probability ''p'' and 0 with probability 1 − ''p'', then <math>\textrm{E}(Y_i)=p</math> for all ''i'', so that <math>\bar Y_n</math> converges to ''p'' [[almost surely]].

===Central limit theorem===
{{Main|Central limit theorem}}
The central limit theorem (CLT) explains the ubiquitous occurrence of the [[normal distribution]] in nature, and this theorem, according to David Williams, "is one of the great results of mathematics."<ref>[[David Williams (mathematician)|David Williams]], ''Probability with Martingales'', Cambridge University Press, 1991/2008.</ref> The theorem states that the [[average]] of many independent and identically distributed random variables with finite variance tends towards a normal distribution ''irrespective'' of the distribution followed by the original random variables. Formally, let <math>X_1,X_2,\dots\,</math> be independent and identically distributed random variables with [[mean]] <math>\mu</math> and [[variance]] <math>\sigma^2 > 0.\,</math> Then the sequence of random variables
:<math>Z_n=\frac{\sum_{i=1}^n (X_i - \mu)}{\sigma\sqrt{n}}\,</math>
converges in distribution to a [[standard normal]] random variable.
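As a concrete illustration (a worked example, not drawn from the cited sources): let the <math>X_i</math> be fair-coin tosses, i.e. Bernoulli variables with <math>\mu = 1/2</math> and <math>\sigma^2 = 1/4</math>. For <math>n = 100</math> tosses, the number of heads <math>S_{100}</math> has mean 50 and standard deviation <math>\sigma\sqrt{n} = 5</math>, so
:<math>P(S_{100} \geq 60) = P(Z_{100} \geq 2) \approx 1-\Phi(2) \approx 0.023,</math>
where <math>\Phi</math> is the standard normal distribution function. Even for moderate <math>n</math> the approximation is usable, and a continuity correction improves it further.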
For some classes of random variables, the classic central limit theorem works rather fast, as illustrated in the [[Berry–Esseen theorem]]; this is the case, for example, for distributions with finite first, second, and third moments from the [[exponential family]]. On the other hand, for some random variables of the [[heavy tail]] and [[fat tail]] variety, convergence is very slow or may not occur at all; in such cases one may use the [[Stable distribution#A generalized central limit theorem|Generalized Central Limit Theorem]] (GCLT).
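Both limit theorems are easy to check empirically. The following is a minimal simulation sketch (the seed, sample sizes, and variable names are arbitrary illustrative choices, not part of the theorems or of the cited sources): it averages Bernoulli draws to illustrate the law of large numbers, and standardizes repeated Bernoulli sums to illustrate the central limit theorem.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed, so the run is reproducible

p = 0.5        # success probability of each Bernoulli trial (a fair coin)
n = 100_000    # number of trials

# Law of large numbers: the sample average of n i.i.d. Bernoulli(p)
# draws should be close to p for large n.
draws = rng.binomial(1, p, size=n)
print("sample average:", draws.mean())            # close to 0.5

# Central limit theorem: standardize the sum of n_clt draws and repeat
# the experiment many times; the standardized sums should look
# approximately standard normal.
n_clt, reps = 1_000, 10_000
sums = rng.binomial(n_clt, p, size=reps)          # each entry is a sum of n_clt trials
z = (sums - n_clt * p) / np.sqrt(n_clt * p * (1 - p))
print("mean of z:", z.mean())                     # close to 0
print("variance of z:", z.var())                  # close to 1
print("P(z <= 2) estimate:", (z <= 2).mean())     # close to Phi(2), about 0.977
</syntaxhighlight>

Repeating the experiment with a heavy-tailed, infinite-variance distribution in place of the Bernoulli draws shows the standardized sums failing to settle into a normal shape, which is the regime addressed by the GCLT.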