{{Short description|Mathematical function}} {{Redirect|Gaussian curve|the band|Gaussian Curve (band)}} {{more citations needed|date=August 2009}} In [[mathematics]], a '''Gaussian function''', often simply referred to as a '''Gaussian''', is a [[function (mathematics)|function]] of the base form <math display="block">f(x) = \exp (-x^2)</math> and with parametric extension <math display="block">f(x) = a \exp\left( -\frac{(x - b)^2}{2c^2} \right)</math> for arbitrary [[real number|real]] constants {{mvar|a}}, {{mvar|b}} and non-zero {{mvar|c}}. It is named after the mathematician [[Carl Friedrich Gauss]]. The [[graph of a function|graph]] of a Gaussian is a characteristic symmetric "[[Normal distribution|bell curve]]" shape. The parameter {{mvar|a}} is the height of the curve's peak, {{mvar|b}} is the position of the center of the peak, and {{mvar|c}} (the [[standard deviation]], sometimes called the Gaussian [[Root mean square|RMS]] width) controls the width of the "bell". Gaussian functions are often used to represent the [[probability density function]] of a [[normal distribution|normally distributed]] [[random variable]] with [[expected value]] {{math|1=<var>μ</var> = <var>b</var>}} and [[variance]] {{math|1=<var>σ</var>{{sup|2}} = <var>c</var>{{sup|2}}}}. In this case, the Gaussian is of the form<ref>{{Cite book |last=Squires |first=G. L. |url=https://www.cambridge.org/core/product/identifier/9781139164498/type/book |title=Practical Physics |date=2001-08-30 |publisher=Cambridge University Press |isbn=978-0-521-77940-1 |edition=4 |doi=10.1017/cbo9781139164498}}</ref> <math display="block">g(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{1}{2} \frac{(x - \mu)^2}{\sigma^2} \right).</math> Gaussian functions are widely used in [[statistics]] to describe the [[normal distribution]]s, in [[signal processing]] to define [[Gaussian filter]]s, in [[image processing]] where two-dimensional Gaussians are used for [[Gaussian blur]]s, and in mathematics to solve [[heat equation]]s and [[diffusion equation]]s and to define the [[Weierstrass transform]]. They are also abundantly used in [[quantum chemistry]] to form [[Basis set (chemistry)|basis sets]]. == Properties == Gaussian functions arise by composing the [[exponential function]] with a [[Concave function|concave]] [[quadratic function]]:<math display="block">f(x) = \exp(\alpha x^2 + \beta x + \gamma),</math>where * <math>\alpha = -1/2c^2,</math> * <math>\beta = b/c^2,</math> * <math>\gamma = \ln a-(b^2 / 2c^2).</math> (Note: <math>a = 1/(\sigma\sqrt{2\pi}) </math> in <math> \ln a </math>, not to be confused with <math>\alpha = -1/2c^2</math>) The Gaussian functions are thus those functions whose [[logarithm]] is a concave quadratic function. The parameter {{mvar|c}} is related to the [[full width at half maximum]] (FWHM) of the peak according to <math display="block">\text{FWHM} = 2 \sqrt{2 \ln 2}\,c \approx 2.35482\,c.</math> The function may then be expressed in terms of the FWHM, represented by {{mvar|w}}: <math display="block">f(x) = a e^{-4 (\ln 2) (x - b)^2 / w^2}.</math> Alternatively, the parameter {{mvar|c}} can be interpreted by saying that the two [[inflection point]]s of the function occur at {{math|1=<var>x</var> = <var>b</var> ± <var>c</var>}}. 
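Using the following Octave code, one can check the FWHM relation numerically (the parameter values below are illustrative):
<syntaxhighlight lang="octave">
% Illustrative check of FWHM = 2*sqrt(2*ln 2)*c for f(x) = a*exp(-(x - b)^2/(2*c^2)).
a = 3; b = 1.5; c = 0.8;                     % arbitrary example parameters
x = linspace(b - 6*c, b + 6*c, 100001);
f = a * exp(-(x - b).^2 / (2*c^2));
above_half = x(f >= a/2);                    % region where the curve is at least half its peak
printf("measured FWHM    = %.5f\n", above_half(end) - above_half(1));
printf("2*sqrt(2*ln 2)*c = %.5f\n", 2*sqrt(2*log(2)) * c);
</syntaxhighlight>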
The ''full width at tenth of maximum'' (FWTM) for a Gaussian could be of interest and is <math display="block">\text{FWTM} = 2 \sqrt{2 \ln 10}\,c \approx 4.29193\,c.</math> Gaussian functions are [[analytic function|analytic]], and their [[limit (mathematics)|limit]] as {{math|<var>x</var> → ∞}} is 0 (for the above case of {{math|1=<var>b</var> = 0}}). Gaussian functions are among those functions that are [[Elementary function (differential algebra)|elementary]] but lack elementary [[antiderivative]]s; the [[integral]] of the Gaussian function is the [[error function]]: <math display="block">\int e^{-x^2} \,dx = \frac{\sqrt\pi}{2} \operatorname{erf} x + C.</math> Nonetheless, their improper integrals over the whole real line can be evaluated exactly, using the [[Gaussian integral]] <math display="block">\int_{-\infty}^\infty e^{-x^2} \,dx = \sqrt{\pi},</math> and one obtains <math display="block">\int_{-\infty}^\infty a e^{-(x - b)^2 / (2c^2)} \,dx = ac \cdot \sqrt{2\pi}.</math> [[Image:Normal Distribution PDF.svg|thumb|360px|right|[[Normalizing constant|Normalized]] Gaussian curves with [[expected value]] {{mvar|μ}} and [[variance]] {{math|<var>σ</var>{{sup|2}}}}. The corresponding parameters are <math display="inline">a = \tfrac{1}{\sigma\sqrt{2\pi}}</math>, {{math|1=<var>b</var> = <var>μ</var>}} and {{math|1=<var>c</var> = <var>σ</var>}}.]] This integral is 1 if and only if <math display="inline">a = \tfrac{1}{c\sqrt{2\pi}}</math> (the [[normalizing constant]]), and in this case the Gaussian is the [[probability density function]] of a [[normal distribution|normally distributed]] [[random variable]] with [[expected value]] {{math|1=<var>μ</var> = <var>b</var>}} and [[variance]] {{math|1=<var>σ</var>{{sup|2}} = <var>c</var>{{sup|2}}}}: <math display="block">g(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(x - \mu)^2}{2\sigma^2} \right).</math> These Gaussians are plotted in the accompanying figure. The product of two Gaussian functions is a Gaussian, and the [[convolution]] of two Gaussian functions is also a Gaussian, with variance being the sum of the original variances: <math>c^2 = c_1^2 + c_2^2</math>. The product of two Gaussian probability density functions (PDFs), though, is not in general a Gaussian PDF. The Fourier [[Fourier transform#Uncertainty principle|uncertainty principle]] becomes an equality if and only if (modulated) Gaussian functions are considered.<ref>{{cite journal | last1=Folland | first1=Gerald B. | last2=Sitaram | first2=Alladi | title=The uncertainty principle: A mathematical survey | journal=The Journal of Fourier Analysis and Applications | volume=3 | issue=3 | date=1997 | issn=1069-5869 | doi=10.1007/BF02649110 | pages=207–238| bibcode=1997JFAA....3..207F }}</ref> Taking the [[Fourier transform#Angular frequency (ω)|Fourier transform (unitary, angular-frequency convention)]] of a Gaussian function with parameters {{math|1=<var>a</var> = 1}}, {{math|1=<var>b</var> = 0}} and {{math|<var>c</var>}} yields another Gaussian function, with parameters <math>c</math>, {{math|1=<var>b</var> = 0}} and <math>1/c</math>.<ref>{{cite web |last=Weisstein|first=Eric W. |title=Fourier Transform – Gaussian |url=http://mathworld.wolfram.com/FourierTransformGaussian.html |publisher=[[MathWorld]] |access-date=19 December 2013 }}</ref> So in particular the Gaussian functions with {{math|1=<var>b</var> = 0}} and <math>c = 1</math> are kept fixed by the Fourier transform (they are [[eigenfunction]]s of the Fourier transform with eigenvalue 1). 
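The integral identity above can likewise be checked numerically in Octave (parameter values are illustrative):
<syntaxhighlight lang="octave">
% Numerical check of the integral of a*exp(-(x - b)^2/(2*c^2)) over the real line.
a = 2; b = -1; c = 0.5;                      % arbitrary example parameters
x = linspace(b - 10*c, b + 10*c, 200001);    % tails beyond ten widths are negligible
g = a * exp(-(x - b).^2 / (2*c^2));
printf("numerical integral = %.6f\n", trapz(x, g));
printf("a*c*sqrt(2*pi)     = %.6f\n", a * c * sqrt(2*pi));
% With a = 1/(c*sqrt(2*pi)) the integral is 1, giving the normal probability density.
</syntaxhighlight>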
<!-- The way the Fourier transform is currently defined in its article (with pi in the exponent, also the way that I prefer), the Gaussian must also have a pi in its exponent. ~~~~ --> A physical realization is that of the [[Fraunhofer diffraction#Diffraction by an aperture with a Gaussian profile|diffraction pattern]]: for example, a [[photographic slide]] whose [[transmittance]] has a Gaussian variation is also a Gaussian function. <!-- Using [[periodic summation]] and [[discretization]] you can construct vectors from the Gaussian function, that behave similarly under the [[Discrete Fourier transform]]. Comparing the zeroth coefficient of the Discrete Fourier transform of such a vector with the periodic summation and discretization of the Continuous Fourier transform of the Gaussian yields the interesting identity: --> The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows us to derive the following interesting{{clarify|date=August 2016}} identity from the [[Poisson summation formula]]: <math display="block">\sum_{k\in\Z} \exp\left(-\pi \cdot \left(\frac{k}{c}\right)^2\right) = c \cdot \sum_{k\in\Z} \exp\left(-\pi \cdot (kc)^2\right).</math> ==Integral of a Gaussian function== The integral of an arbitrary Gaussian function is<math display="block">\int_{-\infty}^\infty a\,e^{-(x - b)^2/2c^2}\,dx = \ a \, |c| \, \sqrt{2\pi}.</math> An alternative form is<math display="block">\int_{-\infty}^\infty k\,e^{-f x^2 + g x + h}\,dx = \int_{-\infty}^\infty k\,e^{-f \big(x - g/(2f)\big)^2 + g^2/(4f) + h}\,dx = k\,\sqrt{\frac{\pi}{f}}\,\exp\left(\frac{g^2}{4f} + h\right),</math> where ''f'' must be strictly positive for the integral to converge. ===Relation to standard Gaussian integral=== The integral <math display="block">\int_{-\infty}^\infty ae^{-(x - b)^2/2c^2}\,dx</math> for some [[real number|real]] constants ''a'', ''b'' and ''c'' > 0 can be calculated by putting it into the form of a [[Gaussian integral]]. First, the constant ''a'' can simply be factored out of the integral. Next, the variable of integration is changed from ''x'' to {{math|1=<var>y</var> = <var>x</var> − ''b''}}: <math display="block">a\int_{-\infty}^\infty e^{-y^2/2c^2}\,dy,</math> and then to <math>z = y/\sqrt{2 c^2}</math>: <math display="block">a\sqrt{2 c^2} \int_{-\infty}^\infty e^{-z^2}\,dz.</math> Then, using the [[Gaussian integral|Gaussian integral identity]] <math display="block">\int_{-\infty}^\infty e^{-z^2}\,dz = \sqrt{\pi},</math> we have <math display="block">\int_{-\infty}^\infty ae^{-(x-b)^2/2c^2}\,dx = a\sqrt{2\pi c^2}.</math> == Two-dimensional Gaussian function == [[File:Gaussian 2d surface.png|thumb|3d plot of a Gaussian function with a two-dimensional domain]] Base form: <math display="block">f(x,y) = \exp(-x^2-y^2)</math> In two dimensions, the power to which ''e'' is raised in the Gaussian function is any negative-definite quadratic form. Consequently, the [[level set]]s of the Gaussian will always be ellipses. A particular example of a two-dimensional Gaussian function is <!-- This makes the formula consistent with the 1d formula above --> <math display="block">f(x,y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2} + \frac{(y - y_0)^2}{2\sigma_Y^2} \right)\right).</math> Here the coefficient ''A'' is the amplitude, ''x''<sub>0</sub>, ''y''<sub>0</sub> is the center, and ''σ''<sub>''x''</sub>, ''σ''<sub>''y''</sub> are the ''x'' and ''y'' spreads of the blob. 
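A surface of this form can be generated with the following Octave code (a brief sketch with illustrative parameter values):
<syntaxhighlight lang="octave">
% Evaluate A*exp(-((x - x0)^2/(2*sigma_X^2) + (y - y0)^2/(2*sigma_Y^2))) on a grid.
A = 1; x0 = 0; y0 = 0; sigma_X = 1; sigma_Y = 2;     % arbitrary example parameters
[X, Y] = meshgrid(-5:0.1:5, -5:0.1:5);
F = A * exp(-((X - x0).^2 / (2*sigma_X^2) + (Y - y0).^2 / (2*sigma_Y^2)));
surf(X, Y, F); shading interp;                       % 3d surface plot of the Gaussian blob
</syntaxhighlight>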
The figure on the right was created using ''A'' = 1, ''x''<sub>0</sub> = 0, ''y''<sub>0</sub> = 0, ''σ''<sub>''x''</sub> = ''σ''<sub>''y''</sub> = 1. The volume under the Gaussian function is given by <math display="block">V = \int_{-\infty}^\infty \int_{-\infty}^\infty f(x, y)\,dx \,dy = 2 \pi A \sigma_X \sigma_Y.</math> In general, a two-dimensional elliptical Gaussian function is expressed as <math display="block">f(x, y) = A \exp\Big(-\big(a(x - x_0)^2 + 2b(x - x_0)(y - y_0) + c(y - y_0)^2 \big)\Big),</math> where the matrix <math display="block">\begin{bmatrix} a & b \\ b & c \end{bmatrix}</math> is [[positive-definite matrix|positive-definite]]. Using this formulation, the figure on the right can be created using {{math|1=''A'' = 1}}, {{math|1=(''x''<sub>0</sub>, ''y''<sub>0</sub>) = (0, 0)}}, {{math|1=''a'' = ''c'' = 1/2}}, {{math|1=''b'' = 0}}. === Meaning of parameters for the general equation === For the general form of the equation the coefficient ''A'' is the height of the peak and {{math|(''x''<sub>0</sub>, ''y''<sub>0</sub>)}} is the center of the blob. If we set <math display="block"> \begin{align} a &= \frac{\cos^2\theta}{2\sigma_X^2} + \frac{\sin^2\theta}{2\sigma_Y^2}, \\ b &= -\frac{\sin \theta \cos \theta}{2\sigma_X^2} + \frac{\sin \theta \cos \theta}{2\sigma_Y^2}, \\ c &= \frac{\sin^2\theta}{2\sigma_X^2} + \frac{\cos^2\theta}{2\sigma_Y^2}, \end{align} </math>then we rotate the blob by a positive, counter-clockwise angle <math>\theta</math> (for negative, clockwise rotation, invert the signs in the ''b'' coefficient).<ref>{{cite web |last1=Nawri |first1=Nikolai |title=Berechnung von Kovarianzellipsen |url=http://imkbemu.physik.uni-karlsruhe.de/~eisatlas/covariance_ellipses.pdf |access-date=14 August 2019 |url-status=dead |archive-url=https://web.archive.org/web/20190814081830/http://imkbemu.physik.uni-karlsruhe.de/~eisatlas/covariance_ellipses.pdf |archive-date=2019-08-14}}</ref> To get back the coefficients <math>\theta</math>, <math>\sigma_X</math> and <math>\sigma_Y</math> from <math>a</math>, <math>b</math> and <math>c</math> use <math display="block">\begin{align} \theta &= \frac{1}{2}\arctan\left(\frac{2b}{a-c}\right), \quad \theta \in [-45, 45], \\ \sigma_X^2 &= \frac{1}{2 (a \cdot \cos^2\theta + 2 b \cdot \cos\theta\sin\theta + c \cdot \sin^2\theta)}, \\ \sigma_Y^2 &= \frac{1}{2 (a \cdot \sin^2\theta - 2 b \cdot \cos\theta\sin\theta + c \cdot \cos^2\theta)}. 
\end{align}</math> Example rotations of Gaussian blobs can be seen in the following examples: {| | [[Image:Gaussian 2d 0 degrees.png|thumb|200px|<math>\theta = 0</math>]] | [[Image:Gaussian 2d 30 degrees.png|thumb|200px|<math>\theta = -\pi/6</math>]] | [[Image:Gaussian 2d 60 degrees.png|thumb|200px|<math>\theta = -\pi/3</math>]] |} Using the following [[GNU Octave|Octave]] code, one can easily see the effect of changing the parameters: <syntaxhighlight lang="octave"> A = 1; x0 = 0; y0 = 0; sigma_X = 1; sigma_Y = 2; [X, Y] = meshgrid(-5:.1:5, -5:.1:5); for theta = 0:pi/100:pi a = cos(theta)^2 / (2 * sigma_X^2) + sin(theta)^2 / (2 * sigma_Y^2); b = sin(2 * theta) / (4 * sigma_X^2) - sin(2 * theta) / (4 * sigma_Y^2); c = sin(theta)^2 / (2 * sigma_X^2) + cos(theta)^2 / (2 * sigma_Y^2); Z = A * exp(-(a * (X - x0).^2 + 2 * b * (X - x0) .* (Y - y0) + c * (Y - y0).^2)); surf(X, Y, Z); shading interp; view(-36, 36) waitforbuttonpress end </syntaxhighlight> Such functions are often used in [[image processing]] and in computational models of [[visual system]] function—see the articles on [[scale space]] and [[affine shape adaptation]]. Also see [[multivariate normal distribution]]. === Higher-order Gaussian or super-Gaussian function or generalized Gaussian function === A more general formulation of a Gaussian function with a flat-top and Gaussian fall-off can be taken by raising the content of the exponent to a power <math>P</math>: <math display="block">f(x) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2}\right)^P\right).</math> This function is known as a super-Gaussian function and is often used for Gaussian beam formulation.<ref>Parent, A., M. Morin, and P. Lavigne. "Propagation of super-Gaussian field distributions". ''[[Optical and Quantum Electronics]]'' 24.9 (1992): S1071–S1079.</ref> This function may also be expressed in terms of the [[full width at half maximum]] (FWHM), represented by {{mvar|w}}: <math display="block">f(x) = A \exp\left(-\ln 2\left(4\frac{(x - x_0)^2}{w^2}\right)^P\right).</math> In a two-dimensional formulation, a Gaussian function along <math>x</math> and <math>y</math> can be combined<ref>{{Cite web |url=http://www.aor.com/anonymous/pub/commands.pdf |title=GLAD optical software commands manual, Entry on GAUSSIAN command |date=2016-12-15 |website=Applied Optics Research}}</ref> with potentially different <math>P_X</math> and <math>P_Y</math> to form a rectangular Gaussian distribution: <math display="block">f(x, y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2}\right)^{P_X} - \left(\frac{(y - y_0)^2}{2\sigma_Y^2}\right)^{P_Y}\right).</math> or an elliptical Gaussian distribution: <math display="block">f(x , y) = A \exp\left(-\left(\frac{(x - x_0)^2}{2\sigma_X^2} + \frac{(y - y_0)^2}{2\sigma_Y^2}\right)^P\right)</math> == Multi-dimensional Gaussian function == {{main|Multivariate normal distribution}} In an <math>n</math>-dimensional space a Gaussian function can be defined as <math display="block">f(x) = \exp(-x^\mathsf{T} C x),</math> where <math>x = \begin{bmatrix} x_1 & \cdots & x_n\end{bmatrix}</math> is a column of <math>n</math> coordinates, <math>C</math> is a [[positive-definite matrix|positive-definite]] <math>n \times n</math> matrix, and <math>{}^\mathsf{T}</math> denotes [[transpose|matrix transposition]]. 
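As a brief illustration (the matrix below is an arbitrary positive-definite example), such a function can be evaluated directly in Octave:
<syntaxhighlight lang="octave">
% Evaluate f(x) = exp(-x' * C * x) for an n-dimensional Gaussian (here n = 2).
C = [1.0, 0.3;
     0.3, 0.5];                % arbitrary symmetric positive-definite matrix
chol(C);                       % errors out if C is not positive-definite
f = @(x) exp(-x' * C * x);     % x is an n-by-1 column vector of coordinates
disp(f([0.5; -1.0]));          % value of the Gaussian at an example point
</syntaxhighlight>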
The integral of this Gaussian function over the whole <math>n</math>-dimensional space is given as <math display="block">\int_{\R^n} \exp(-x^\mathsf{T} C x) \, dx = \sqrt{\frac{\pi^n}{\det C}}.</math> It can be easily calculated by diagonalizing the matrix <math>C</math> and changing the integration variables to the eigenvectors of <math>C</math>. More generally a shifted Gaussian function is defined as <math display="block">f(x) = \exp(-x^\mathsf{T} C x + s^\mathsf{T} x),</math> where <math>s = \begin{bmatrix} s_1 & \cdots & s_n\end{bmatrix}</math> is the shift vector and the matrix <math>C</math> can be assumed to be symmetric, <math>C^\mathsf{T} = C</math>, and positive-definite. The following integrals with this function can be calculated with the same technique: <math display="block">\int_{\R^n} e^{-x^\mathsf{T} C x + v^\mathsf{T}x} \, dx = \sqrt{\frac{\pi^n}{\det{C}}} \exp\left(\frac{1}{4} v^\mathsf{T} C^{-1} v\right) \equiv \mathcal{M}.</math> <math display="block">\int_{\mathbb{R}^n} e^{- x^\mathsf{T} C x + v^\mathsf{T} x} (a^\mathsf{T} x) \, dx = (a^T u) \cdot \mathcal{M}, \text{ where } u = \frac{1}{2} C^{-1} v.</math> <math display="block">\int_{\mathbb{R}^n} e^{- x^\mathsf{T} C x + v^\mathsf{T} x} (x^\mathsf{T} D x) \, dx = \left( u^\mathsf{T} D u + \frac{1}{2} \operatorname{tr} (D C^{-1}) \right) \cdot \mathcal{M}.</math> <math display="block">\begin{align} & \int_{\mathbb{R}^n} e^{- x^\mathsf{T} C' x + s'^\mathsf{T} x} \left( -\frac{\partial}{\partial x} \Lambda \frac{\partial}{\partial x} \right) e^{-x^\mathsf{T} C x + s^\mathsf{T} x} \, dx \\ & \qquad = \left( 2 \operatorname{tr}(C' \Lambda C B^{- 1}) + 4 u^\mathsf{T} C' \Lambda C u - 2 u^\mathsf{T} (C' \Lambda s + C \Lambda s') + s'^\mathsf{T} \Lambda s \right) \cdot \mathcal{M}, \end{align}</math> where <math display="inline">u = \frac{1}{2} B^{- 1} v,\ v = s + s',\ B = C + C'.</math> == Estimation of parameters == {{see also|Normal distribution#Estimation of parameters}} A number of fields such as [[Photometry (astronomy)|stellar photometry]], [[Gaussian beam]] characterization, and [[emission spectrum#Emission spectroscopy|emission/absorption line spectroscopy]] work with sampled Gaussian functions and need to accurately estimate the height, position, and width parameters of the function. There are three unknown parameters for a 1D Gaussian function (''a'', ''b'', ''c'') and five for a 2D Gaussian function <math>(A; x_0,y_0; \sigma_X,\sigma_Y)</math>. The most common method for estimating the Gaussian parameters is to take the logarithm of the data and [[polynomial fitting|fit a parabola]] to the resulting data set.<ref name="Caruana Searle Heller Shupack 1986 pp. 1162–1167">{{cite journal | last1=Caruana | first1=Richard A. | last2=Searle | first2=Roger B. | last3=Heller | first3=Thomas. | last4=Shupack | first4=Saul I. | title=Fast algorithm for the resolution of spectra | journal=Analytical Chemistry | publisher=American Chemical Society (ACS) | volume=58 | issue=6 | year=1986 | issn=0003-2700 | doi=10.1021/ac00297a041 | pages=1162–1167}}</ref><ref name="Guo">[https://dx.doi.org/10.1109/MSP.2011.941846 Hongwei Guo, "A simple algorithm for fitting a Gaussian function," IEEE Sign. Proc. Mag. 28(9): 134-137 (2011).]</ref> While this provides a simple [[curve fitting]] procedure, the resulting algorithm may be biased by excessively weighting small data values, which can produce large errors in the profile estimate. 
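A minimal Octave sketch of this take-the-logarithm-and-fit-a-parabola approach, applied to noiseless synthetic data (all parameter values are illustrative), uses the relations between (''a'', ''b'', ''c'') and the polynomial coefficients given in the Properties section above:
<syntaxhighlight lang="octave">
% Estimate (a, b, c) of a 1D Gaussian by fitting a parabola to the logarithm of the data.
a_true = 5; b_true = 2; c_true = 0.7;                 % parameters used to generate the data
x = linspace(-1, 5, 61);
y = a_true * exp(-(x - b_true).^2 / (2*c_true^2));    % noiseless samples for illustration
p = polyfit(x, log(y), 2);                            % log(y) ~ p(1)*x.^2 + p(2)*x + p(3)
c_est = sqrt(-1 / (2*p(1)));                          % alpha = -1/(2 c^2)
b_est = -p(2) / (2*p(1));                             % beta  =  b / c^2
a_est = exp(p(3) - p(2)^2 / (4*p(1)));                % gamma = ln a - b^2/(2 c^2)
printf("a = %.4f, b = %.4f, c = %.4f\n", a_est, b_est, c_est);
</syntaxhighlight>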
One can partially compensate for this problem through [[weighted least squares]] estimation, reducing the weight of small data values, but this too can be biased by allowing the tail of the Gaussian to dominate the fit. In order to remove the bias, one can instead use an [[iteratively reweighted least squares]] procedure, in which the weights are updated at each iteration.<ref name="Guo" /> It is also possible to perform [[non-linear regression]] directly on the data, without involving the [[logarithmic data transformation]]; for more options, see [[probability distribution fitting]]. === Parameter precision === Once one has an algorithm for estimating the Gaussian function parameters, it is also important to know how [[Accuracy and precision|precise]] those estimates are. Any [[least squares]] estimation algorithm can provide numerical estimates for the variance of each parameter (i.e., the variance of the estimated height, position, and width of the function). One can also use [[Cramér–Rao bound]] theory to obtain an analytical expression for the lower bound on the parameter variances, given certain assumptions about the data.<ref name="Hagen1">[https://dx.doi.org/10.1364/AO.46.005374 N. Hagen, M. Kupinski, and E. L. Dereniak, "Gaussian profile estimation in one dimension," Appl. Opt. 46:5374–5383 (2007)]</ref><ref name="Hagen2">[https://dx.doi.org/10.1364/AO.47.006842 N. Hagen and E. L. Dereniak, "Gaussian profile estimation in two dimensions," Appl. Opt. 47:6842–6851 (2008)]</ref> # The noise in the measured profile is either [[Independent and identically-distributed random variables|i.i.d.]] Gaussian, or the noise is [[Poisson distribution|Poisson-distributed]]. # The spacing between each sampling (i.e. the distance between pixels measuring the data) is uniform. # The peak is "well-sampled", so that less than 10% of the area or volume under the peak (area if a 1D Gaussian, volume if a 2D Gaussian) lies outside the measurement region. # The width of the peak is much larger than the distance between sample locations (i.e. the detector pixels must be at least 5 times smaller than the Gaussian FWHM). When these assumptions are satisfied, the following [[covariance matrix]] '''K''' applies for the 1D profile parameters <math>a</math>, <math>b</math>, and <math>c</math> under i.i.d. Gaussian noise and under Poisson noise:<ref name="Hagen1" /> <math display="block"> \mathbf{K}_{\text{Gauss}} = \frac{\sigma^2}{\sqrt{\pi} \delta_X Q^2} \begin{pmatrix} \frac{3}{2c} &0 &\frac{-1}{a} \\ 0 &\frac{2c}{a^2} &0 \\ \frac{-1}{a} &0 &\frac{2c}{a^2} \end{pmatrix} \ , \qquad \mathbf{K}_\text{Poiss} = \frac{1}{\sqrt{2 \pi}} \begin{pmatrix} \frac{3a}{2c} &0 &-\frac{1}{2} \\ 0 &\frac{c}{a} &0 \\ -\frac{1}{2} &0 &\frac{c}{2a} \end{pmatrix} \ ,</math> where <math>\delta_X</math> is the width of the pixels used to sample the function, <math>Q</math> is the quantum efficiency of the detector, and <math>\sigma</math> indicates the standard deviation of the measurement noise. 
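For concreteness, these covariance matrices can be evaluated numerically (all values below are arbitrary illustrations):
<syntaxhighlight lang="octave">
% Evaluate the 1D covariance matrices K_Gauss and K_Poiss for illustrative values.
a = 10; c = 2;                 % peak height and width
sigma = 0.5;                   % standard deviation of the measurement noise
delta_X = 0.1; Q = 1;          % pixel width and detector quantum efficiency
K_gauss = sigma^2 / (sqrt(pi) * delta_X * Q^2) * ...
          [ 3/(2*c),  0,        -1/a;
            0,        2*c/a^2,   0;
           -1/a,      0,         2*c/a^2 ];
K_poiss = (1 / sqrt(2*pi)) * ...
          [ 3*a/(2*c), 0,     -1/2;
            0,         c/a,    0;
           -1/2,       0,      c/(2*a) ];
disp(diag(K_gauss)');          % variances of (a, b, c) under Gaussian noise
disp(diag(K_poiss)');          % variances of (a, b, c) under Poisson noise
</syntaxhighlight>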
Thus, the individual variances for the parameters are, in the Gaussian noise case, <math display="block">\begin{align} \operatorname{var} (a) &= \frac{3 \sigma^2}{2 \sqrt{\pi} \, \delta_X Q^2 c} \\ \operatorname{var} (b) &= \frac{2 \sigma^2 c}{\delta_X \sqrt{\pi} \, Q^2 a^2} \\ \operatorname{var} (c) &= \frac{2 \sigma^2 c}{\delta_X \sqrt{\pi} \, Q^2 a^2} \end{align}</math> and in the Poisson noise case, <math display="block">\begin{align} \operatorname{var} (a) &= \frac{3a}{2 \sqrt{2 \pi} \, c} \\ \operatorname{var} (b) &= \frac{c}{\sqrt{2 \pi} \, a} \\ \operatorname{var} (c) &= \frac{c}{2 \sqrt{2 \pi} \, a}. \end{align} </math> For the 2D profile parameters giving the amplitude <math>A</math>, position <math>(x_0,y_0)</math>, and width <math>(\sigma_X,\sigma_Y)</math> of the profile, the following covariance matrices apply:<ref name="Hagen2" /> <math display="block">\begin{align} \mathbf{K}_\text{Gauss} = \frac{\sigma^2}{\pi \delta_X \delta_Y Q^2} & \begin{pmatrix} \frac{2}{\sigma_X \sigma_Y} &0 &0 &\frac{-1}{A \sigma_Y} &\frac{-1}{A \sigma_X} \\ 0 &\frac{2 \sigma_X}{A^2 \sigma_Y} &0 &0 &0 \\ 0 &0 &\frac{2 \sigma_Y}{A^2 \sigma_X} &0 &0 \\ \frac{-1}{A \sigma_y} &0 &0 &\frac{2 \sigma_X}{A^2 \sigma_y} &0 \\ \frac{-1}{A \sigma_X} &0 &0 &0 &\frac{2 \sigma_Y}{A^2 \sigma_X} \end{pmatrix} \\[6pt] \mathbf{K}_{\operatorname{Poisson}} = \frac{1}{2 \pi} & \begin{pmatrix} \frac{3A}{\sigma_X \sigma_Y} &0 &0 &\frac{-1}{\sigma_Y} &\frac{-1}{\sigma_X} \\ 0 & \frac{\sigma_X}{A \sigma_Y} &0 &0 &0 \\ 0 &0 &\frac{\sigma_Y}{A \sigma_X} &0 &0 \\ \frac{-1}{\sigma_Y} &0 &0 &\frac{2 \sigma_X}{3A \sigma_Y} &\frac{1}{3A} \\ \frac{-1}{\sigma_X} &0 &0 &\frac{1}{3A} &\frac{2 \sigma_Y}{3A \sigma_X} \end{pmatrix}. \end{align}</math> where the individual parameter variances are given by the diagonal elements of the covariance matrix. == Discrete Gaussian == {{main|Discrete Gaussian kernel}} [[File:Discrete Gaussian kernel.svg|thumb|The [[discrete Gaussian kernel]] (solid), compared with the [[sampled Gaussian kernel]] (dashed) for scales <math>t = 0.5,1,2,4.</math>]] One may ask for a discrete analog to the Gaussian; this is necessary in discrete applications, particularly [[digital signal processing]]. A simple answer is to sample the continuous Gaussian, yielding the [[sampled Gaussian kernel]]. However, this discrete function does not have the discrete analogs of the properties of the continuous function, and can lead to undesired effects, as described in the article [[scale space implementation]]. An alternative approach is to use the [[discrete Gaussian kernel]]:<ref name="lin90">[http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A472968&dswid=-3163 Lindeberg, T., "Scale-space for discrete signals," PAMI(12), No. 3, March 1990, pp. 234–254.]</ref> <math display="block">T(n, t) = e^{-t} I_n(t)</math> where <math>I_n(t)</math> denotes the [[modified Bessel function]]s of integer order. This is the discrete analog of the continuous Gaussian in that it is the solution to the discrete [[diffusion equation]] (discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation.<ref name="lin90"/><ref>Campbell, J, 2007, ''[https://dx.doi.org/10.1016/j.tpb.2007.08.001 The SMM model as a boundary value problem using the discrete diffusion equation]'', Theor Popul Biol. 2007 Dec;72(4):539–46.</ref> ==Applications== Gaussian functions appear in many contexts in the [[natural sciences]], the [[social sciences]], [[mathematics]], and [[engineering]]. 
Some examples include: * In [[statistics]] and [[probability theory]], Gaussian functions appear as the density function of the [[normal distribution]], which is a limiting [[probability distribution]] of complicated sums, according to the [[central limit theorem]]. * Gaussian functions are the [[Green's function]] for the (homogeneous and isotropic) [[diffusion equation]] (and to the [[heat equation]], which is the same thing), a [[partial differential equation]] that describes the time evolution of a mass-density under [[diffusion]]. Specifically, if the mass-density at time ''t''=0 is given by a [[Dirac delta]], which essentially means that the mass is initially concentrated in a single point, then the mass-distribution at time ''t'' will be given by a Gaussian function, with the parameter '''''a''''' being linearly related to 1/{{radic|''t''}} and '''''c''''' being linearly related to {{radic|''t''}}; this time-varying Gaussian is described by the [[heat kernel]]. More generally, if the initial mass-density is φ(''x''), then the mass-density at later times is obtained by taking the [[convolution]] of φ with a Gaussian function. The convolution of a function with a Gaussian is also known as a [[Weierstrass transform]]. * A Gaussian function is the [[wave function]] of the [[ground state]] of the [[quantum harmonic oscillator]]. * The [[molecular orbital]]s used in [[computational chemistry]] can be [[linear combination]]s of Gaussian functions called [[Gaussian orbital]]s (see also [[basis set (chemistry)]]). * Mathematically, the [[derivative]]s of the Gaussian function can be represented using [[Hermite functions]]. For unit variance, the ''n''-th derivative of the Gaussian is the Gaussian function itself multiplied by the ''n''-th [[Hermite polynomial]], up to scale. * Consequently, Gaussian functions are also associated with the [[vacuum state]] in [[quantum field theory]]. * [[Gaussian beam]]s are used in optical systems, microwave systems and lasers. * In [[scale space]] representation, Gaussian functions are used as smoothing kernels for generating multi-scale representations in [[computer vision]] and [[image processing]]. Specifically, derivatives of Gaussians ([[Hermite functions]]) are used as a basis for defining a large number of types of visual operations. * Gaussian functions are used to define some types of [[artificial neural network]]s. * In [[fluorescence microscopy]] a 2D Gaussian function is used to approximate the [[Airy disk]], describing the intensity distribution produced by a [[point source]]. * In [[signal processing]] they serve to define [[Gaussian filter]]s, such as in [[image processing]] where 2D Gaussians are used for [[Gaussian blur]]s. In [[digital signal processing]], one uses a [[discrete Gaussian kernel]], which may be approximated by the [[Binomial coefficient]]<ref>Haddad, R.A. and Akansu, A.N., 1991, ''[https://doi.org/10.1109/78.80892 A Class of Fast Gaussian Binomial Filters for Speech and Image processing]'', IEEE Trans. on Signal Processing, 39-3: 723–727</ref> or sampling a Gaussian. * In [[geostatistics]] they have been used for understanding the variability between the patterns of a complex [[training image]]. 
They are used with kernel methods to cluster the patterns in the feature space.<ref>Honarkhah, M and Caers, J, 2010, ''[https://dx.doi.org/10.1007/s11004-010-9276-7 Stochastic Simulation of Patterns Using Distance-Based Pattern Modeling]'', Mathematical Geosciences, 42: 487–517</ref> ==See also== *[[Bell-shaped function]] *[[Cauchy distribution]] *[[Normal distribution]] *[[Radial basis function kernel]] == References == {{Reflist}} ==Further reading== * {{cite book | last=Haberman | first=Richard | title=Applied Partial Differential Equations | publisher=PEARSON | publication-place=Boston | date=2013 | isbn=978-0-321-79705-6|chapter = 10.3.3 Inverse Fourier transform of a Gaussian}} ==External links== * [http://mathworld.wolfram.com/GaussianFunction.html Mathworld, includes a proof for the relations between c and FWHM] * {{MathPages|id=home/kmath045/kmath045|title=Integrating The Bell Curve}} * [https://github.com/frecker/gaussian-distribution/ Haskell, Erlang and Perl implementation of Gaussian distribution] * [https://upload.wikimedia.org/wikipedia/commons/a/a2/Cumulative_function_n_dimensional_Gaussians_12.2013.pdf Bensimhoun Michael, ''N''-Dimensional Cumulative Function, And Other Useful Facts About Gaussians and Normal Densities (2009)] *[https://github.com/dwaithe/generalMacros/tree/master/gaussian_fitting Code for fitting Gaussians in ImageJ and Fiji.] [[Category:Gaussian function| ]] [[Category:Exponentials]] [[Category:Articles containing proofs]] [[Category:Articles with example MATLAB/Octave code]]