== Properties ==

The normal distribution is the only distribution whose [[cumulant]]s beyond the first two (i.e., other than the mean and [[variance]]) are zero. It is also the continuous distribution with the [[maximum entropy probability distribution|maximum entropy]] for a specified mean and variance.{{sfnp|Cover|Thomas|2006|p=254}}<ref>{{cite journal|last1=Park|first1=Sung Y.|last2=Bera|first2=Anil K.|year=2009|title=Maximum Entropy Autoregressive Conditional Heteroskedasticity Model|journal=Journal of Econometrics|pages=219–230|url=http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf|access-date=2011-06-02|doi=10.1016/j.jeconom.2008.12.014|volume=150|issue=2|citeseerx=10.1.1.511.9750|archive-date=March 7, 2016|archive-url=https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf|url-status=dead}}</ref> Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.<ref name="Geary RC">Geary RC (1936) The distribution of "Student's" ratio for non-normal samples. Supplement to the Journal of the Royal Statistical Society 3 (2): 178–184</ref><ref>{{Cite Q|Q55897617|author1=Lukacs, Eugene|author-link1=Eugene Lukacs}}</ref>

The normal distribution is a subclass of the [[elliptical distribution]]s. The normal distribution is [[Symmetric distribution|symmetric]] about its mean, and is non-zero over the entire real line. As such, it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the [[weight]] of a person or the price of a [[share (finance)|share]]. Such variables may be better described by other distributions, such as the [[log-normal distribution]] or the [[Pareto distribution]].

The value of the normal density is practically zero when the value {{tmath|x}} lies more than a few [[standard deviation]]s away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of [[outlier]]s—values that lie many standard deviations away from the mean—and least squares and other [[statistical inference]] methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more [[heavy-tailed]] distribution should be assumed and the appropriate [[robust statistics|robust statistical inference]] methods applied.

The Gaussian distribution belongs to the family of [[stable distribution]]s, which are the attractors of sums of [[independent, identically distributed]] random variables, whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the [[Cauchy distribution]] and the [[Lévy distribution]].
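The three-standard-deviation figure quoted above is straightforward to check numerically. The following is a minimal illustrative sketch (Python, standard library only; it is not drawn from the sources cited above) that evaluates the probability mass lying beyond {{tmath|k}} standard deviations of the mean for any normal distribution:

<syntaxhighlight lang="python">
# Probability mass outside mu +/- k*sigma for a normal distribution,
# computed from the standard normal CDF via the error function.
import math

def normal_cdf(z: float) -> float:
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for k in (1, 2, 3):
    outside = 2.0 * (1.0 - normal_cdf(k))  # P(|X - mu| > k*sigma)
    print(f"beyond {k} sigma: {outside:.4%}")
# k = 3 prints roughly 0.2700%, the 0.27% figure quoted above.
</syntaxhighlight>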
=== Symmetries and derivatives ===

The normal distribution with density <math display=inline>f(x)</math> (mean {{tmath|\mu}} and variance <math display=inline>\sigma^2 > 0</math>) has the following properties:
* It is symmetric around the point <math display=inline>x=\mu,</math> which is at the same time the [[mode (statistics)|mode]], the [[median]] and the [[mean]] of the distribution.<ref name="Patel">{{harvtxt |Patel |Read |1996 |loc=[2.1.4] }}</ref>
* It is [[unimodal]]: its first [[derivative]] is positive for <math display=inline>x<\mu,</math> negative for <math display=inline>x>\mu,</math> and zero only at <math display=inline>x=\mu.</math>
* The area bounded by the curve and the {{tmath|x}}-axis is unity (i.e. equal to one).
* Its first derivative is <math display=inline>f'(x)=-\frac{x-\mu}{\sigma^2} f(x).</math>
* Its second derivative is <math display=inline>f''(x) = \frac{(x-\mu)^2 - \sigma^2}{\sigma^4} f(x).</math>
* Its density has two [[inflection point]]s (where the second derivative of {{tmath|f}} is zero and changes sign), located one standard deviation away from the mean, namely at <math display=inline>x=\mu-\sigma</math> and <math display=inline>x=\mu+\sigma.</math><ref name="Patel" />
* Its density is [[logarithmically concave function|log-concave]].<ref name="Patel" />
* Its density is infinitely [[differentiable]], indeed [[supersmooth]] of order 2.<ref>{{harvtxt |Fan |1991 |p=1258 }}</ref>

Furthermore, the density {{tmath|\varphi}} of the standard normal distribution (i.e. <math display=inline>\mu=0</math> and <math display=inline>\sigma=1</math>) also has the following properties:
* Its first derivative is <math display=inline>\varphi'(x)=-x\varphi(x).</math>
* Its second derivative is <math display=inline>\varphi''(x)=(x^2-1)\varphi(x).</math>
* More generally, its {{mvar|n}}th derivative is <math display=inline>\varphi^{(n)}(x) = (-1)^n\operatorname{He}_n(x)\varphi(x),</math> where <math display=inline>\operatorname{He}_n(x)</math> is the {{mvar|n}}th (probabilist) [[Hermite polynomial]].<ref>{{harvtxt |Patel |Read |1996 |loc=[2.1.8] }}</ref>
* The probability that a normally distributed variable {{tmath|X}} with known {{tmath|\mu}} and <math display=inline>\sigma^2</math> is in a particular set can be calculated by using the fact that the fraction <math display=inline>Z = (X-\mu)/\sigma</math> has a standard normal distribution.

=== Moments ===
{{See also|List of integrals of Gaussian functions}}

The plain and absolute [[moment (mathematics)|moments]] of a variable {{tmath|X}} are the expected values of <math display=inline>X^p</math> and <math display=inline>|X|^p</math>, respectively. If the expected value {{tmath|\mu}} of {{tmath|X}} is zero, these parameters are called ''central moments;'' otherwise, they are called ''non-central moments.'' Usually only moments of integer order {{tmath|p}} are of interest.

If {{tmath|X}} has a normal distribution, the non-central moments exist and are finite for any {{tmath|p}} whose real part is greater than −1. For any non-negative integer {{tmath|p}}, the plain central moments are:<ref>{{cite book|last1=Papoulis|first1=Athanasios|title=Probability, Random Variables and Stochastic Processes|page=148|edition=4th}}</ref>
<math display=block> \operatorname{E}\left[(X-\mu)^p\right] = \begin{cases} 0 & \text{if }p\text{ is odd,} \\ \sigma^p (p-1)!!
& \text{if }p\text{ is even.} \end{cases} </math>
Here <math display=inline>n!!</math> denotes the [[double factorial]], that is, the product of all numbers from {{tmath|n}} to 1 that have the same parity as <math display=inline>n.</math>

The central absolute moments coincide with plain central moments for all even orders, but are nonzero for odd orders. For any non-negative integer <math display=inline>p,</math>
<math display=block>\begin{align} \operatorname{E}\left[|X - \mu|^p\right] &= \sigma^p (p-1)!! \cdot \begin{cases} \sqrt{\frac{2}{\pi}} & \text{if }p\text{ is odd} \\ 1 & \text{if }p\text{ is even} \end{cases} \\ &= \sigma^p \cdot \frac{2^{p/2}\Gamma\left(\frac{p+1} 2 \right)}{\sqrt\pi}. \end{align}</math>
The last formula is valid also for any non-integer <math display=inline>p>-1.</math> When the mean <math display=inline>\mu \ne 0,</math> the plain and absolute moments can be expressed in terms of [[confluent hypergeometric function]]s <math display=inline>{}_1F_1</math> and <math display=inline>U.</math><ref>{{cite arXiv|last1=Winkelbauer|first1=Andreas|title=Moments and Absolute Moments of the Normal Distribution |date= 2012|class=math.ST |eprint=1209.4340}}</ref>
<math display="block">\begin{align} \operatorname{E}\left[X^p\right] &= \sigma^p\cdot {\left(-i\sqrt 2\right)}^p \, U{\left(-\frac{p}{2}, \frac{1}{2}, -\frac{\mu^2}{2\sigma^2}\right)}, \\ \operatorname{E}\left[|X|^p \right] &= \sigma^p \cdot 2^{p/2} \frac {\Gamma{\left(\frac{1+p} 2\right)}}{\sqrt\pi} \, {}_1F_1{\left( -\frac{p}{2}, \frac{1}{2}, -\frac{\mu^2}{2\sigma^2} \right)}. \end{align}</math>
These expressions remain valid even if {{tmath|p}} is not an integer. See also [[Hermite polynomials#"Negative variance"|generalized Hermite polynomials]].

{| class="wikitable" style="margin: auto;"
|-
! Order !! Non-central moment, <math>\operatorname{E}\left[X^p\right]</math> !! Central moment, <math>\operatorname{E}\left[(X-\mu)^p\right]</math>
|-
| 1 || {{tmath|\mu}} || {{tmath|0}}
|-
| 2 || <math display=inline>\mu^2+\sigma^2</math> || <math display=inline>\sigma^2</math>
|-
| 3 || <math display=inline>\mu^3+3\mu\sigma^2</math> || {{tmath|0}}
|-
| 4 || <math display=inline>\mu^4+6\mu^2\sigma^2+3\sigma^4</math> || <math display=inline>3\sigma^4</math>
|-
| 5 || <math display=inline>\mu^5+10\mu^3\sigma^2+15\mu\sigma^4</math> || {{tmath|0}}
|-
| 6 || <math display=inline>\mu^6+15\mu^4\sigma^2+45\mu^2\sigma^4+15\sigma^6</math> || <math display=inline>15\sigma^6</math>
|-
| 7 || <math display=inline>\mu^7+21\mu^5\sigma^2+105\mu^3\sigma^4+105\mu\sigma^6</math> || {{tmath|0}}
|-
| 8 || <math display=inline>\mu^8+28\mu^6\sigma^2+210\mu^4\sigma^4+420\mu^2\sigma^6+105\sigma^8</math> || <math display=inline>105\sigma^8</math>
|}

The expectation of {{tmath|X}} conditioned on the event that {{tmath|X}} lies in an interval <math display=inline>[a,b]</math> is given by
<math display=block>\operatorname{E}\left[X \mid a<X<b \right] = \mu - \sigma^2\frac{f(b)-f(a)}{F(b)-F(a)}\,,</math>
where {{tmath|f}} and {{tmath|F}} respectively are the density and the cumulative distribution function of {{tmath|X}}. For <math display=inline>b=\infty</math> this is known as the [[inverse Mills ratio]]. Note that here the density {{tmath|f}} of {{tmath|X}} is used rather than the standard normal density as in the inverse Mills ratio, which is why <math display=inline>\sigma^2</math> appears instead of {{tmath|\sigma}}.
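The even/odd pattern of the central moments above can be illustrated with a short Monte Carlo check. The following minimal sketch assumes NumPy is available; the parameters, sample size, and seed are arbitrary illustrative choices and are not taken from the sources cited above:

<syntaxhighlight lang="python">
# Compare sample central moments of normal draws with sigma^p * (p-1)!! (even p)
# and 0 (odd p), as stated in the formula above.
import numpy as np

def double_factorial(n: int) -> int:
    """Product of all positive integers up to n with the same parity as n."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0
x = rng.normal(mu, sigma, size=2_000_000)

for p in range(1, 7):
    empirical = np.mean((x - mu) ** p)
    theoretical = 0.0 if p % 2 else sigma**p * double_factorial(p - 1)
    print(f"p={p}: sample {empirical:10.3f}   formula {theoretical:10.3f}")
</syntaxhighlight>

The even-order values reproduce the <math display=inline>3\sigma^4</math> and <math display=inline>15\sigma^6</math> entries of the central-moment column in the table above.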
=== Fourier transform and characteristic function ===

The [[Fourier transform]] of a normal density {{tmath|f}} with mean {{tmath|\mu}} and variance <math display=inline>\sigma^2</math> is<ref>{{harvtxt |Bryc |1995 |p=23 }}</ref>
<math display=block> \hat f(t) = \int_{-\infty}^\infty f(x)e^{-itx} \, dx = e^{-i\mu t} e^{- \frac12 (\sigma t)^2}\,, </math>
where {{tmath|i}} is the [[imaginary unit]]. If the mean <math display=inline>\mu=0</math>, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the [[frequency domain]], with mean 0 and variance {{tmath|1/\sigma^2}}. In particular, the standard normal distribution {{tmath|\varphi}} is an [[Fourier transform#Eigenfunctions|eigenfunction]] of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable {{tmath|X}} is closely connected to the [[characteristic function (probability theory)|characteristic function]] <math display=inline>\varphi_X(t)</math> of that variable, which is defined as the [[expected value]] of <math display=inline>e^{itX}</math>, as a function of the real variable {{tmath|t}} (the [[frequency]] parameter of the Fourier transform). This definition can be analytically extended to a complex-valued variable {{tmath|t}}.<ref>{{harvtxt |Bryc |1995 |p=24 }}</ref> The two are related by
<math display=block>\varphi_X(t) = \hat f(-t)\,.</math>

=== Moment- and cumulant-generating functions ===

The [[moment generating function]] of a real random variable {{tmath|X}} is the expected value of <math display=inline>e^{tX}</math>, as a function of the real parameter {{tmath|t}}. For a normal distribution with density {{tmath|f}}, mean {{tmath|\mu}} and variance <math display=inline>\sigma^2</math>, the moment generating function exists and is equal to
<math display=block>M(t) = \operatorname{E}\left[e^{tX}\right] = \hat f(it) = e^{\mu t} e^{\sigma^2 t^2/2}\,.</math>
For any {{tmath|k}}, the coefficient of {{tmath|t^k / k!}} in the moment generating function (expressed as an [[Generating function#Exponential generating function (EGF)|exponential power series]] in {{tmath|t}}) is the normal distribution's expected value {{tmath|\operatorname{E}[X^k]}}.

The [[cumulant generating function]] is the logarithm of the moment generating function, namely
<math display=block>g(t) = \ln M(t) = \mu t + \tfrac 12 \sigma^2 t^2\,.</math>
The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in {{tmath|t}}, only the first two [[cumulant]]s are nonzero, namely the mean {{tmath|\mu}} and the variance {{tmath|\sigma^2}}.

Some authors prefer instead to work with the [[characteristic function (probability theory)|characteristic function]] {{math|1=E[''e''{{sup|''itX''}}] = ''e''{{sup|''iμt'' − ''σ''{{sup|2}}''t''{{sup|2}}/2}}}} and {{math|1=ln E[''e''{{sup|''itX''}}] = ''iμt'' − {{sfrac|1|2}}''σ''{{sup|2}}''t''{{sup|2}}}}.

=== Stein operator and class ===

Within [[Stein's method]], the Stein operator and class of a random variable <math display=inline>X \sim \mathcal{N}(\mu, \sigma^2)</math> are <math display=inline>\mathcal{A}f(x) = \sigma^2 f'(x) - (x-\mu)f(x)</math> and <math display=inline>\mathcal{F}</math>, the class of all absolutely continuous functions {{tmath|\textstyle f : \R \to \R}} such that {{tmath|\operatorname{E}[\vert f'(X)\vert] < \infty}}.
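The defining property of the Stein operator, that <math display=inline>\operatorname{E}[\mathcal{A}f(X)] = 0</math> for admissible {{tmath|f}} when <math display=inline>X \sim \mathcal{N}(\mu, \sigma^2)</math>, can be illustrated numerically. The following minimal sketch (assuming NumPy; the parameters, test functions, sample size, and seed are arbitrary illustrative choices) checks this for a few functions in the Stein class:

<syntaxhighlight lang="python">
# Monte Carlo check of the Stein identity: for X ~ N(mu, sigma^2) and suitable f,
# the expectation of  sigma^2 * f'(X) - (X - mu) * f(X)  is zero.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.5, 1.2
x = rng.normal(mu, sigma, size=1_000_000)

test_functions = {
    "f(x) = x^2":       (lambda v: v**2,             lambda v: 2 * v),
    "f(x) = sin(x)":    (np.sin,                     np.cos),
    "f(x) = exp(-x^2)": (lambda v: np.exp(-v**2),    lambda v: -2 * v * np.exp(-v**2)),
}

for name, (f, f_prime) in test_functions.items():
    stein = sigma**2 * f_prime(x) - (x - mu) * f(x)
    print(f"{name:20s} mean of Stein operator: {stein.mean():+.4f}")
# Each printed mean is close to zero, up to Monte Carlo error.
</syntaxhighlight>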
=== Zero-variance limit ===

In the [[limit (mathematics)|limit]] when <math display=inline>\sigma^2</math> tends to zero, the probability density <math display=inline>f(x)</math> tends to zero at every <math display=inline>x\ne \mu</math>, but grows without limit if <math display=inline>x = \mu</math>, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary [[function (mathematics)|function]] when {{tmath|1=\sigma^2 = 0}}. However, one can define the normal distribution with zero variance as a [[generalized function]]; specifically, as a [[Dirac delta function]] {{tmath|\delta}} translated by the mean {{tmath|\mu}}, that is <math display=inline>f(x)=\delta(x-\mu).</math> Its cumulative distribution function is then the [[Heaviside step function]] translated by the mean {{tmath|\mu}}, namely
<math display=block>F(x) = \begin{cases} 0 & \text{if }x < \mu \\ 1 & \text{if }x \geq \mu\,. \end{cases} </math>

=== Maximum entropy ===

Of all probability distributions over the reals with a specified finite mean {{tmath|\mu}} and finite variance {{tmath|\sigma^2}}, the normal distribution <math display=inline>N(\mu,\sigma^2)</math> is the one with [[Maximum entropy probability distribution|maximum entropy]].{{sfnp|Cover|Thomas|2006|p=254}} To see this, let {{tmath|X}} be a [[continuous random variable]] with [[probability density]] {{tmath|f(x)}}. The entropy of {{tmath|X}} is defined as<ref>{{cite book|last1=Williams|first1=David|title=Weighing the odds : a course in probability and statistics|url=https://archive.org/details/weighingoddscour00will|url-access=limited|date=2001|publisher=Cambridge Univ. Press|location=Cambridge [u.a.]|isbn=978-0-521-00618-7|pages=[https://archive.org/details/weighingoddscour00will/page/n219 197]–199|edition=Reprinted.}}</ref><ref>{{cite book|author1=José M. Bernardo |author2=Adrian F. M. Smith|title=Bayesian theory|url=https://archive.org/details/bayesiantheory00bern_963|url-access=limited|date=2000|publisher=Wiley|location=Chichester [u.a.]|isbn=978-0-471-49464-5|pages=[https://archive.org/details/bayesiantheory00bern_963/page/n224 209], 366|edition=Reprint}}</ref><ref>O'Hagan, A. (1994) ''Kendall's Advanced Theory of statistics, Vol 2B, Bayesian Inference'', Edward Arnold. {{isbn|0-340-52922-9}} (Section 5.40)</ref>
<math display=block> H(X) = - \int_{-\infty}^\infty f(x)\ln f(x)\, dx\,, </math>
where <math display=inline>f(x)\ln f(x)</math> is understood to be zero whenever {{tmath|1=f(x)=0}}. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified mean and variance, by using [[variational calculus]]. A functional with three [[Lagrange multipliers]] is defined:
<math display=block> L=-\int_{-\infty}^\infty f(x)\ln f(x)\,dx-\lambda_0\left(1-\int_{-\infty}^\infty f(x)\,dx\right)-\lambda_1\left(\mu-\int_{-\infty}^\infty f(x)x\,dx\right)-\lambda_2\left(\sigma^2-\int_{-\infty}^\infty f(x)(x-\mu)^2\,dx\right)\,. </math>
At maximum entropy, a small variation <math display=inline>\delta f(x)</math> about <math display=inline>f(x)</math> will produce a variation <math display=inline>\delta L</math> about {{tmath|L}} which is equal to 0:
<math display=block> 0=\delta L=\int_{-\infty}^\infty \delta f(x)\left(-\ln f(x) -1+\lambda_0+\lambda_1 x+\lambda_2(x-\mu)^2\right)\,dx\,.
</math>
Since this must hold for any small {{tmath|\delta f(x)}}, the factor multiplying {{tmath|\delta f(x)}} must be zero, and solving for {{tmath|f(x)}} yields:
<math display=block>f(x)=\exp\left(-1+\lambda_0+\lambda_1 x+\lambda_2(x-\mu)^2\right)\,.</math>
The Lagrange constraints that {{tmath|f(x)}} is properly normalized and has the specified mean and variance are satisfied if and only if {{tmath|\lambda_0}}, {{tmath|\lambda_1}}, and {{tmath|\lambda_2}} are chosen so that
<math display=block> f(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,. </math>
The entropy of a normal distribution <math display=inline>X \sim N(\mu,\sigma^2)</math> is equal to
<math display=block> H(X)=\tfrac{1}{2}\left(1+\ln(2\pi\sigma^2)\right)\,, </math>
which is independent of the mean {{tmath|\mu}}.

=== Other properties ===
{{ordered list
| 1 = If the characteristic function <math display=inline>\phi_X</math> of some random variable {{tmath|X}} is of the form <math display=inline>\phi_X(t) = \exp Q(t)</math> in a neighborhood of zero, where <math display=inline>Q(t)</math> is a [[polynomial]], then the '''Marcinkiewicz theorem''' (named after [[Józef Marcinkiewicz]]) asserts that {{tmath|Q}} can be at most a quadratic polynomial, and therefore {{tmath|X}} is a normal random variable.<ref name="Bryc 1995 35" /> A consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero [[cumulant]]s.
| 2 = If {{tmath|X}} and {{tmath|Y}} are [[jointly normal]] and [[uncorrelated]], then they are [[independence (probability theory)|independent]]. The requirement that {{tmath|X}} and {{tmath|Y}} should be ''jointly'' normal is essential; without it the property does not hold.<ref>[http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.pdf UIUC, Lecture 21. ''The Multivariate Normal Distribution''], 21.6:"Individually Gaussian Versus Jointly Gaussian".</ref><ref>Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", ''[[The American Statistician]]'', volume 36, number 4 November 1982, pages 372–373</ref><sup>[[Normally distributed and uncorrelated does not imply independent|[proof]]]</sup> For non-normal random variables uncorrelatedness does not imply independence.
| 3 = The [[Kullback–Leibler divergence]] of one normal distribution <math display=inline>X_1 \sim N(\mu_1, \sigma^2_1)</math> from another <math display=inline>X_2 \sim N(\mu_2, \sigma^2_2)</math> is given by:<ref>{{cite web |url=http://www.allisons.org/ll/MML/KL/Normal/|title=Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions| website=Allisons.org |date=2007-12-05 |access-date=2017-03-03}}</ref> <math display=block> D_\mathrm{KL}( X_1 \parallel X_2 ) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} + \frac{1}{2}\left( \frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2} \right) </math> The [[Hellinger distance]] between the same distributions is equal to <math display=block> H^2(X_1,X_2) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}} \exp\left(-\frac{1}{4}\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}\right) </math>
| 4 = The [[Fisher information matrix]] for a normal distribution with respect to
{{tmath|\mu}} and <math display=inline>\sigma^2</math> is diagonal and takes the form <math display=block> \mathcal I (\mu, \sigma^2) = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix} </math>
| 5 = The [[conjugate prior]] of the mean of a normal distribution is another normal distribution.<ref>{{cite web|url=http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf|title=Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution|first=Michael I.|last=Jordan|date=February 8, 2010}}</ref> Specifically, if <math display=inline>x_1, \ldots, x_n</math> are iid <math display=inline>\sim N(\mu, \sigma^2)</math> and the prior is <math display=inline>\mu \sim N(\mu_0 , \sigma^2_0)</math>, then the posterior distribution of {{tmath|\mu}} is <math display=block> \mu \mid x_1,\ldots,x_n \sim \mathcal{N}\left( \frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n}+\sigma_0^2},\left( \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{-1} \right) </math> A numerical sketch of this update is given after this list.
| 6 = The family of normal distributions not only forms an [[exponential family]] (EF), but in fact forms a [[natural exponential family]] (NEF) with quadratic [[variance function]] ([[NEF-QVF]]). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise six families, including the Poisson, Gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
| 7 = In [[information geometry]], the family of normal distributions forms a [[statistical manifold]] with [[constant curvature]] {{tmath|-1}}. The same family is [[flat manifold|flat]] with respect to the (±1)-connections <math display=inline>\nabla^{(e)}</math> and <math display=inline>\nabla^{(m)}</math>.<ref>{{harvtxt |Amari |Nagaoka |2000 }}</ref>
| 8 = If <math display=inline>X_1, \dots, X_n</math> are distributed according to <math display=inline>N(0, \sigma^2)</math>, then <math display=inline>\operatorname{E}[\max_i X_i ] \leq \sigma\sqrt{2\ln n}</math>. Note that there is no assumption of independence.<ref>{{Cite web |title=Expectation of the maximum of gaussian random variables |url=https://math.stackexchange.com/a/89147 |access-date=2024-04-07 |website=Mathematics Stack Exchange |language=en}}</ref>
}}
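As a concrete illustration of the conjugate-prior update in item 5 above, the following minimal sketch (assuming NumPy; every numeric value is an arbitrary illustrative choice) computes the posterior mean and variance of {{tmath|\mu}} from simulated data with known <math display=inline>\sigma^2</math>:

<syntaxhighlight lang="python">
# Normal-normal conjugate update for the mean, with known observation variance.
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 4.0                  # known observation variance sigma^2
mu0, sigma0_2 = 0.0, 10.0     # prior: mu ~ N(mu0, sigma0^2)
true_mu, n = 3.0, 50
x = rng.normal(true_mu, np.sqrt(sigma2), size=n)
x_bar = x.mean()

# Posterior parameters, matching the formula in item 5 above.
posterior_var = 1.0 / (n / sigma2 + 1.0 / sigma0_2)
posterior_mean = (sigma2 / n * mu0 + sigma0_2 * x_bar) / (sigma2 / n + sigma0_2)

print(f"sample mean      : {x_bar:.3f}")
print(f"posterior for mu : N({posterior_mean:.3f}, {posterior_var:.4f})")
</syntaxhighlight>

The posterior mean is a precision-weighted compromise between the prior mean and the sample mean, so it moves toward <math display=inline>\bar{x}</math> as {{tmath|n}} grows.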