=== Other properties ===
{{ordered list
| 1 = If the characteristic function <math display=inline>\phi_X</math> of some random variable {{tmath|X}} is of the form <math display=inline>\phi_X(t) = \exp Q(t)</math> in a neighborhood of zero, where <math display=inline>Q(t)</math> is a [[polynomial]], then the '''Marcinkiewicz theorem''' (named after [[Józef Marcinkiewicz]]) asserts that {{tmath|Q}} can be at most a quadratic polynomial, and therefore {{tmath|X}} is a normal random variable.<ref name="Bryc 1995 35" /> A consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero [[cumulant]]s.
| 2 = If {{tmath|X}} and {{tmath|Y}} are [[jointly normal]] and [[uncorrelated]], then they are [[independence (probability theory)|independent]]. The requirement that {{tmath|X}} and {{tmath|Y}} should be ''jointly'' normal is essential; without it the property does not hold.<ref>[http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.pdf UIUC, Lecture 21. ''The Multivariate Normal Distribution''], 21.6: "Individually Gaussian Versus Jointly Gaussian".</ref><ref>Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", ''[[The American Statistician]]'', volume 36, number 4, November 1982, pages 372–373.</ref><sup>[[Normally distributed and uncorrelated does not imply independent|[proof]]]</sup> For non-normal random variables, uncorrelatedness does not imply independence.
| 3 = The [[Kullback–Leibler divergence]] of one normal distribution <math display=inline>X_1 \sim N(\mu_1, \sigma^2_1)</math> from another <math display=inline>X_2 \sim N(\mu_2, \sigma^2_2)</math> is given by:<ref>{{cite web |url=http://www.allisons.org/ll/MML/KL/Normal/ |title=Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions |website=Allisons.org |date=2007-12-05 |access-date=2017-03-03}}</ref> <math display=block> D_\mathrm{KL}( X_1 \parallel X_2 ) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} + \frac{1}{2}\left( \frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2} \right) </math> The [[Hellinger distance]] between the same distributions is equal to <math display=block> H^2(X_1,X_2) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}} \exp\left(-\frac{1}{4}\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}\right) </math> (A numerical check of both formulas appears after this list.)
| 4 = The [[Fisher information matrix]] for a normal distribution with respect to {{tmath|\mu}} and <math display=inline>\sigma^2</math> is diagonal and takes the form <math display=block> \mathcal I (\mu, \sigma^2) = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix} </math>
| 5 = The [[conjugate prior]] of the mean of a normal distribution is another normal distribution.<ref>{{cite web |url=http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf |title=Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution |first=Michael I. |last=Jordan |date=February 8, 2010}}</ref> Specifically, if <math display=inline>x_1, \ldots, x_n</math> are iid <math display=inline>\sim N(\mu, \sigma^2)</math> and the prior is <math display=inline>\mu \sim N(\mu_0 , \sigma^2_0)</math>, then the posterior distribution for the estimator of {{tmath|\mu}} will be <math display=block> \mu \mid x_1,\ldots,x_n \sim \mathcal{N}\left( \frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n}+\sigma_0^2},\left( \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{-1} \right) </math> (A worked update using this formula is sketched after this list.)
| 6 = The family of normal distributions not only forms an [[exponential family]] (EF), but in fact forms a [[natural exponential family]] (NEF) with quadratic [[variance function]] ([[NEF-QVF]]). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise six families, including the Poisson, gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
| 7 = In [[information geometry]], the family of normal distributions forms a [[statistical manifold]] with [[constant curvature]] {{tmath|-1}}. The same family is [[flat manifold|flat]] with respect to the (±1)-connections <math display=inline>\nabla^{(e)}</math> and <math display=inline>\nabla^{(m)}</math>.<ref>{{harvtxt |Amari |Nagaoka |2000 }}</ref>
| 8 = If <math display=inline>X_1, \dots, X_n</math> are distributed according to <math display=inline>N(0, \sigma^2)</math>, then <math display=inline>E[\max_i X_i ] \leq \sigma\sqrt{2\ln n}</math>. Note that there is no assumption of independence.<ref>{{Cite web |title=Expectation of the maximum of gaussian random variables |url=https://math.stackexchange.com/a/89147 |access-date=2024-04-07 |website=Mathematics Stack Exchange |language=en}}</ref> (A Monte Carlo illustration of this bound follows the list.)
}}
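The following is a minimal numerical sketch, not drawn from the cited sources: assuming only NumPy and illustrative parameter values, it evaluates the closed-form Kullback–Leibler divergence and squared Hellinger distance given in item 3, and checks the divergence against a direct Riemann-sum approximation of <math display=inline>\int p_1(x)\ln\bigl(p_1(x)/p_2(x)\bigr)\,dx</math>.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative parameters: X1 ~ N(mu1, s1^2), X2 ~ N(mu2, s2^2).
mu1, s1 = 0.0, 1.0
mu2, s2 = 1.0, 2.0

# Closed-form KL divergence D_KL(X1 || X2) from the formula above.
kl_closed = (mu1 - mu2) ** 2 / (2 * s2 ** 2) \
    + 0.5 * (s1 ** 2 / s2 ** 2 - 1 - np.log(s1 ** 2 / s2 ** 2))

# Closed-form squared Hellinger distance H^2(X1, X2).
h2 = 1 - np.sqrt(2 * s1 * s2 / (s1 ** 2 + s2 ** 2)) \
    * np.exp(-0.25 * (mu1 - mu2) ** 2 / (s1 ** 2 + s2 ** 2))

# Riemann-sum approximation of the KL integral over a wide grid.
x = np.linspace(-20.0, 20.0, 400001)
p1 = np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
p2 = np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / (s2 * np.sqrt(2 * np.pi))
kl_numeric = np.sum(p1 * np.log(p1 / p2)) * (x[1] - x[0])

print(kl_closed, kl_numeric)  # the two values agree to several decimal places
print(h2)
</syntaxhighlight>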
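A short sketch of the conjugate-prior update from item 5, assuming NumPy, a known observation variance <math display=inline>\sigma^2</math>, and made-up values for the prior and sample size; the posterior mean and variance are computed exactly as in the displayed formula.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Assumed known observation variance and a normal prior on the mean.
sigma2 = 4.0                 # sigma^2
mu0, sigma2_0 = 0.0, 10.0    # prior: mu ~ N(mu0, sigma2_0)

# Simulated iid data x_1, ..., x_n ~ N(mu_true, sigma^2).
mu_true, n = 2.5, 50
x = rng.normal(mu_true, np.sqrt(sigma2), size=n)
xbar = x.mean()

# Posterior parameters from the formula above.
post_var = 1.0 / (n / sigma2 + 1.0 / sigma2_0)
post_mean = ((sigma2 / n) * mu0 + sigma2_0 * xbar) / (sigma2 / n + sigma2_0)

print(post_mean, post_var)  # posterior mean near mu_true; variance close to sigma2/n
</syntaxhighlight>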
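The bound in item 8 requires no independence; the sketch below, assuming NumPy and arbitrary values of <math display=inline>\sigma</math> and <math display=inline>n</math>, merely illustrates it in the independent case by Monte Carlo.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

sigma, n, trials = 1.5, 100, 20000

# Monte Carlo estimate of E[max_i X_i] for independent X_i ~ N(0, sigma^2).
samples = rng.normal(0.0, sigma, size=(trials, n))
emax_estimate = samples.max(axis=1).mean()

bound = sigma * np.sqrt(2 * np.log(n))
print(emax_estimate, bound)  # the estimate stays below sigma * sqrt(2 ln n)
</syntaxhighlight>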