{{Short description|Mathematical theorem}}
[[File:Law of large numbers.gif|thumb|Plot of <math>S_n/n</math> (red), its standard deviation <math>1/\sqrt{n}</math> (blue) and its bound <math>\sqrt{2\log\log n/n}</math> given by the LIL (green). Notice the way it randomly switches from the upper bound to the lower bound. Both axes are non-linearly transformed (as explained in the figure summary) to make this effect more visible.]]

In [[probability theory]], the '''law of the iterated logarithm''' describes the magnitude of the fluctuations of a [[random walk]]. The original statement of the law of the iterated logarithm is due to [[Aleksandr Khinchin|A. Ya. Khinchin]] (1924).<ref>[[Aleksandr Khinchin|A. Khinchine]]. "Über einen Satz der Wahrscheinlichkeitsrechnung", ''[[Fundamenta Mathematicae]]'' 6 (1924): pp. 9–20. ''(The author's name is shown here in an alternate transliteration.)''</ref> Another statement was given by [[Andrey Kolmogorov|A. N. Kolmogorov]] in 1929.<ref name="Kolmogorov1929">[[Andrey Kolmogorov|A. Kolmogoroff]]. [http://www-gdz.sub.uni-goettingen.de/cgi-bin/digbib.cgi?PPN235181684_0101 "Über das Gesetz des iterierten Logarithmus"]. ''Mathematische Annalen'', 101: 126–135, 1929.</ref>

==Statement==
Let {''Y''<sub>''n''</sub>} be independent, identically distributed [[random variables]] with zero means and unit variances. Let ''S''<sub>''n''</sub> = ''Y''<sub>1</sub> + ... + ''Y''<sub>''n''</sub>. Then
: <math>
\limsup_{n \to \infty} \frac{|S_n|}{\sqrt{2n \log\log n}} = 1 \quad \text{a.s.},
</math>
where "log" is the [[natural logarithm]], "lim sup" denotes the [[limit superior]], and "a.s." stands for "[[almost surely]]".<ref>[[Leo Breiman]]. ''Probability''. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. (See Sections 3.9, 12.9, and 12.10; Theorem 3.52 specifically.)</ref><ref>R. Durrett. ''Probability: Theory and Examples''. Fourth edition published by Cambridge University Press in 2010. (See Theorem 8.8.3.)</ref>

Another statement, given by [[Andrey Kolmogorov|A. N. Kolmogorov]] in 1929,<ref name="Kolmogorov1929" /> is as follows. Let <math> \{ Y_n \} </math> be independent [[random variables]] with zero means and finite variances. Let <math> S_n = Y_1 + \dots + Y_n </math> and <math> B_n = \operatorname{Var}(Y_1) + \dots + \operatorname{Var}(Y_n) </math>. If <math> B_n \to \infty </math> and there exists a sequence of positive constants <math> \{ M_n \} </math> such that <math> |Y_n| \le M_n </math> a.s. and
: <math>
M_n = o \left( \sqrt{\frac{B_n}{\log \log B_n}} \right),
</math>
then
: <math>
\limsup_{n \to \infty} \frac{|S_n|}{\sqrt{2 B_n \log\log B_n}} = 1 \quad \text{a.s.}
</math>

Note that the first statement covers the case of the standard normal distribution, but the second does not, since normally distributed variables are unbounded and therefore cannot satisfy <math>|Y_n| \le M_n</math> for any finite <math>M_n</math>.
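The first statement can be illustrated by simulation. The sketch below (assuming NumPy; the Rademacher increment distribution, seed, horizon, and checkpoints are arbitrary illustrative choices) tracks the ratio <math>|S_n| / \sqrt{2n \log\log n}</math> along a single sample path. The theorem only constrains the limit superior, so a finite simulation shows values of order 1 rather than exactly 1.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative Monte Carlo sketch of Khinchin's statement for Rademacher
# increments (P(Y = +1) = P(Y = -1) = 1/2), which have mean 0 and variance 1.
# The seed and the horizon n are arbitrary choices.
rng = np.random.default_rng(seed=42)
n = 10**6
increments = rng.choice([-1, 1], size=n)
partial_sums = np.cumsum(increments)            # S_1, S_2, ..., S_n

# The normalization sqrt(2 n log log n) is real and positive only for n >= 3.
indices = np.arange(3, n + 1)
ratio = np.abs(partial_sums[2:]) / np.sqrt(2 * indices * np.log(np.log(indices)))

# The LIL asserts limsup of this ratio equals 1 almost surely; on a single
# finite path the ratio merely hovers on the order of 1.
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {k:>7}:  |S_n| / sqrt(2 n log log n) = {ratio[k - 3]:.3f}")
print(f"maximum over 10^3 <= n <= 10^6: {ratio[10**3 - 3:].max():.3f}")
</syntaxhighlight>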
==Discussion==
The law of the iterated logarithm operates "in between" the [[law of large numbers]] and the [[central limit theorem]]. There are two versions of the law of large numbers — [[weak law of large numbers|the weak]] and [[strong law of large numbers|the strong]] — and they both state that the sums ''S''<sub>''n''</sub>, scaled by ''n''<sup>−1</sup>, converge to zero, respectively [[convergence of random variables#Convergence in probability|in probability]] and [[convergence of random variables#Almost sure convergence|almost surely]]:
: <math>
\frac{S_n}{n} \ \xrightarrow{p}\ 0, \qquad \frac{S_n}{n} \ \xrightarrow{a.s.} 0, \qquad \text{as}\ \ n\to\infty.
</math>

On the other hand, the central limit theorem states that the sums ''S''<sub>''n''</sub> scaled by the factor ''n''<sup>−1/2</sup> converge in distribution to a standard normal distribution. By [[Kolmogorov's zero–one law]], for any fixed ''M'', the probability that the event <math> \limsup_n \frac{S_n}{\sqrt{n}} \geq M </math> occurs is 0 or 1. Then
: <math> \Pr\left( \limsup_n \frac{S_n}{\sqrt{n}} \geq M \right) \geq \limsup_n \Pr\left( \frac{S_n}{\sqrt{n}} \geq M \right) = \Pr\left( \mathcal{N}(0, 1) \geq M \right) > 0,</math>
so
:<math>\limsup_n \frac{S_n}{\sqrt{n}} = \infty \qquad \text{with probability 1.}</math>

An identical argument shows that
:<math> \liminf_n \frac{S_n}{\sqrt{n}} = -\infty \qquad \text{with probability 1.}</math>

This implies that these quantities cannot converge almost surely. In fact, they cannot even converge in probability, which follows from the equality
:<math>\frac{S_{2n}}{\sqrt{2n}}-\frac{S_n}{\sqrt{n}} = \frac{1}{\sqrt{2}}\frac{S_{2n}-S_n}{\sqrt{n}} - \left(1-\frac{1}{\sqrt{2}}\right)\frac{S_n}{\sqrt{n}}</math>
and the fact that the random variables
:<math>\frac{S_n}{\sqrt{n}}\quad \text{and} \quad \frac{S_{2n}-S_n}{\sqrt{n}}</math>
are independent and both converge in distribution to <math>\mathcal{N}(0, 1).</math>

The ''law of the iterated logarithm'' provides the scaling factor where the two limits become different:
: <math>
\frac{S_n}{\sqrt{2n\log\log n}} \ \xrightarrow{p}\ 0, \qquad \frac{S_n}{\sqrt{2n\log\log n}} \ \stackrel{a.s.}{\nrightarrow}\ 0, \qquad \text{as}\ \ n\to\infty.
</math>
Thus, although the absolute value of the quantity <math>S_n/\sqrt{2n\log\log n}</math> is less than any predefined ''ε'' > 0 with probability approaching one, the quantity nevertheless almost surely exceeds ''ε'' infinitely often; in fact, it almost surely visits every neighborhood of every point in the interval (−1, 1) infinitely often.

[[File:LimitTheoremsExhibition.png|thumb|Exhibition of Limit Theorems and their interrelationship]]
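The slow growth of the iterated logarithm can be made concrete with a back-of-the-envelope comparison (the sample sizes below are merely illustrative). At <math>n = 10^6</math>,
: <math> \sqrt{2\log\log n} \approx 2.29, \qquad \sqrt{2n\log\log n} \approx 2292, </math>
so the LIL envelope exceeds the central-limit scale <math>\sqrt{n} = 1000</math> only by a factor of about 2.3, while both are negligible compared with the law-of-large-numbers scale <math>n = 10^6</math>. Even at <math>n = 10^{100}</math>, the factor <math>\sqrt{2\log\log n}</math> has only grown to about 3.3.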
==Generalizations and variants==
The law of the iterated logarithm (LIL) for a sum of independent and identically distributed (i.i.d.) random variables with zero mean and bounded increments dates back to [[Khinchin]] and [[Kolmogorov]] in the 1920s. Since then, there has been a tremendous amount of work on the LIL for various kinds of dependent structures and for stochastic processes. The following is a small sample of notable developments.

[[Philip Hartman|Hartman]]–[[Aurel Wintner|Wintner]] (1940) generalized the LIL to random walks whose increments have zero mean and finite variance. De Acosta (1983) gave a simple proof of the Hartman–Wintner version of the LIL.<ref>A. de Acosta: "[https://projecteuclid.org/journals/annals-of-probability/volume-11/issue-2/A-New-Proof-of-the-Hartman-Wintner-Law-of-the/10.1214/aop/1176993596.full A New Proof of the Hartman-Wintner Law of the Iterated Logarithm]". Ann. Probab., 1983.</ref>

[[Chung Kai-lai|Chung]] (1948) proved another version of the law of the iterated logarithm for the absolute value of a Brownian motion.<ref>{{cite journal|first1=Kai-lai|last1=Chung|title=On the maximum partial sums of sequences of independent random variables|journal=Trans. Am. Math. Soc.|volume=61|date=1948|pages=205–233}}</ref>

[[Volker Strassen|Strassen]] (1964) studied the LIL from the point of view of invariance principles.<ref>V. Strassen: "[https://link.springer.com/article/10.1007/BF00534910 An invariance principle for the law of the iterated logarithm]". Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 1964.</ref>

Stout (1970) generalized the LIL to stationary ergodic martingales.<ref>W. F. Stout: "[https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-41/issue-6/The-Hartman-Wintner-Law-of-the-Iterated-Logarithm-for-Martingales/10.1214/aoms/1177696721.full The Hartman-Wintner Law of the Iterated Logarithm for Martingales]". Ann. Math. Statist., 1970.</ref>

Wittmann (1985) generalized the Hartman–Wintner version of the LIL to random walks satisfying milder conditions.<ref>R. Wittmann: "[https://link.springer.com/article/10.1007/BF00535343 A general law of iterated logarithm]". Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 1985.</ref>

Vovk (1987) derived a version of the LIL valid for a single chaotic sequence (Kolmogorov random sequence).<ref>V. Vovk: "[https://epubs.siam.org/doi/abs/10.1137/1132061 The Law of the Iterated Logarithm for Random Kolmogorov, or Chaotic, Sequences]". Theory Probab. Appl., 1987.</ref> This is notable, as it is outside the realm of classical probability theory.

[[Yongge Wang]] (1996) showed that the law of the iterated logarithm also holds for polynomial-time pseudorandom sequences.<ref>Y. Wang: "[http://webpages.uncc.edu/yonwang/papers/CCC96.pdf The law of the iterated logarithm for ''p''-random sequences]". In: Proc. 11th IEEE Conference on Computational Complexity (CCC), pages 180–189. IEEE Computer Society Press, 1996.</ref><ref>Y. Wang: [http://webpages.uncc.edu/yonwang/papers/thesis.pdf ''Randomness and Complexity'']. PhD thesis, 1996.</ref> The Java-based [http://webpages.uncc.edu/yonwang/liltest/ testing tool] tests whether a pseudorandom generator outputs sequences that satisfy the LIL.

Balsubramani (2014) proved a non-asymptotic LIL that holds over finite-time [[Martingale (probability theory)|martingale]] sample paths.<ref>A. Balsubramani: "[https://arxiv.org/abs/1405.2639 Sharp finite-time iterated-logarithm martingale concentration]". arXiv:1405.2639.</ref> This subsumes the martingale LIL, as it provides matching finite-sample concentration and anti-concentration bounds, and it enables sequential testing<ref>A. Balsubramani and A. Ramdas: "[http://www.auai.org/uai2016/proceedings/supp/270_supp.pdf Sequential nonparametric testing with the law of the iterated logarithm]". 32nd Conference on Uncertainty in Artificial Intelligence (UAI).</ref> and other applications.<ref>C. Daskalakis and Y. Kawase: "[http://drops.dagstuhl.de/opus/volltexte/2017/7823/pdf/LIPIcs-ESA-2017-32.pdf Optimal Stopping Rules for Sequential Hypothesis Testing]". In 25th Annual European Symposium on Algorithms (ESA 2017). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.</ref>

==See also==
* [[Iterated logarithm]]
* [[Wiener process|Brownian motion]]

==Notes==
{{reflist}}

{{Stochastic processes}}

[[Category:Asymptotic theory (statistics)]]
[[Category:Theorems in statistics]]
[[Category:Stochastic processes]]