===Convex body===
{{math theorem | math_statement = There exists a sequence {{math|''ε<sub>n</sub>'' ↓ 0}} for which the following holds. Let {{math|''n'' ≥ 1}}, and let random variables {{math|''X''<sub>1</sub>, ..., ''X<sub>n</sub>''}} have a [[Logarithmically concave function|log-concave]] [[Joint density function|joint density]] {{mvar|f}} such that {{math|1=''f''(''x''<sub>1</sub>, ..., ''x<sub>n</sub>'') = ''f''({{abs|''x''<sub>1</sub>}}, ..., {{abs|''x<sub>n</sub>''}})}} for all {{math|''x''<sub>1</sub>, ..., ''x<sub>n</sub>''}}, and {{math|1=E(''X''{{su|b=''k''|p=2}}) = 1}} for all {{math|1=''k'' = 1, ..., ''n''}}. Then the distribution of <math display="block"> \frac{X_1+\cdots+X_n}{\sqrt n} </math> is {{mvar|ε<sub>n</sub>}}-close to <math display="inline"> \mathcal{N}(0, 1)</math> in the [[Total variation distance of probability measures|total variation distance]].{{sfnp|Klartag|2007|loc=Theorem 1.2}}}}

These two {{mvar|ε<sub>n</sub>}}-close distributions have densities (in fact, log-concave densities); thus, the total variation distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence.

An important example of a log-concave density is a function that is constant inside a given convex body and vanishes outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies".

Another example: {{math|1=''f''(''x''<sub>1</sub>, ..., ''x<sub>n</sub>'') = const · exp(−({{abs|''x''<sub>1</sub>}}<sup>''α''</sup> + ⋯ + {{abs|''x<sub>n</sub>''}}<sup>''α''</sup>)<sup>''β''</sup>)}} where {{math|''α'' > 1}} and {{math|''αβ'' > 1}}.
If {{math|1=''β'' = 1}} then {{math|''f''(''x''<sub>1</sub>, ..., ''x<sub>n</sub>'')}} factorizes into {{math|const · exp(−{{abs|''x''<sub>1</sub>}}<sup>''α''</sup>) ⋯ exp(−{{abs|''x<sub>n</sub>''}}<sup>''α''</sup>)}}, which means that {{math|''X''<sub>1</sub>, ..., ''X<sub>n</sub>''}} are independent. In general, however, they are dependent. The condition {{math|1=''f''(''x''<sub>1</sub>, ..., ''x<sub>n</sub>'') = ''f''({{abs|''x''<sub>1</sub>}}, ..., {{abs|''x<sub>n</sub>''}})}} ensures that {{math|''X''<sub>1</sub>, ..., ''X<sub>n</sub>''}} are of zero mean and [[uncorrelated]];{{Citation needed|date=June 2012}} still, they need not be independent, nor even [[Pairwise independence|pairwise independent]].{{Citation needed|date=June 2012}} Note that pairwise independence cannot replace independence in the classical central limit theorem.{{sfnp|Durrett|2004|loc=Section 2.4, Example 4.5}}

Here is a [[Berry–Esseen theorem|Berry–Esseen]] type result.

{{math theorem | math_statement = Let {{math|''X''<sub>1</sub>, ..., ''X<sub>n</sub>''}} satisfy the assumptions of the previous theorem. Then{{Sfnp|Klartag|2008|loc=Theorem 1}} <math display="block"> \left| \mathbb{P} \left( a \le \frac{ X_1+\cdots+X_n }{ \sqrt n } \le b \right) - \frac{1}{\sqrt{2\pi}} \int_a^b e^{-\frac{1}{2} t^2} \, dt \right| \le \frac{C}{n} </math> for all {{math|''a'' < ''b''}}; here {{mvar|C}} is a [[mathematical constant|universal (absolute) constant]]. Moreover, for every {{math|''c''<sub>1</sub>, ..., ''c<sub>n</sub>'' ∈ '''R'''}} such that {{math|1=''c''{{su|b=1|p=2}} + ⋯ + ''c''{{su|b=''n''|p=2}} = 1}}, <math display="block"> \left| \mathbb{P} \left( a \le c_1 X_1+\cdots+c_n X_n \le b \right) - \frac{1}{\sqrt{2\pi}} \int_a^b e^{-\frac{1}{2} t^2} \, dt \right| \le C \left( c_1^4+\dots+c_n^4 \right). </math>}}

The distribution of {{math|{{sfrac|''X''<sub>1</sub> + ⋯ + ''X<sub>n</sub>''|{{sqrt|''n''}}}}}} need not be approximately normal (in fact, it can be uniform).{{sfnp|Klartag|2007|loc=Theorem 1.1}} However, the distribution of {{math|''c''<sub>1</sub>''X''<sub>1</sub> + ⋯ + ''c<sub>n</sub>X<sub>n</sub>''}} is close to <math display="inline"> \mathcal{N}(0, 1)</math> (in the total variation distance) for most vectors {{math|(''c''<sub>1</sub>, ..., ''c<sub>n</sub>'')}} with respect to the uniform distribution on the sphere {{math|1=''c''{{su|b=1|p=2}} + ⋯ + ''c''{{su|b=''n''|p=2}} = 1}}.
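The theorem can be illustrated numerically with a minimal simulation sketch (not part of the cited results; assumes NumPy). The uniform distribution on a Euclidean ball is a log-concave density of the required symmetric form; scaling the ball to radius {{math|{{sqrt|''n'' + 2}}}} makes each coordinate have unit second moment, since {{math|1=E(''X''{{su|b=''k''|p=2}}) = ''R''<sup>2</sup>/(''n'' + 2)}} for the ball of radius {{mvar|R}}. The normalized sum then comes out close to standard normal even though the coordinates are dependent:

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_ball(n, size, rng):
    """Sample uniformly from the n-dimensional Euclidean ball of radius
    sqrt(n + 2), chosen so that each coordinate has variance exactly 1."""
    g = rng.standard_normal((size, n))
    g /= np.linalg.norm(g, axis=1, keepdims=True)   # uniform random direction
    r = rng.random(size) ** (1.0 / n)               # radius of a uniform point in the unit ball
    R = np.sqrt(n + 2)                              # E[X_k^2] = R^2 / (n + 2) = 1
    return R * r[:, None] * g

n, m = 50, 200_000
X = uniform_ball(n, m, rng)          # dependent, symmetric, log-concave coordinates
S = X.sum(axis=1) / np.sqrt(n)       # normalized sum; approximately N(0, 1)
print(S.mean(), S.var())
```

The empirical mean and variance of `S` land near 0 and 1; a histogram of `S` matches the standard normal density, in line with the convex-body central limit theorem.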