==Representations==

===Nascent delta function===
The delta function can be viewed as the limit of a sequence of functions <math display="block">\delta (x) = \lim_{\varepsilon\to 0^+} \eta_\varepsilon(x), </math> where {{math|''η<sub>ε</sub>''(''x'')}} is sometimes called a '''nascent delta function'''{{anchor|nascent delta function}}. This limit is meant in a weak sense: either that {{NumBlk2|:|<math> \lim_{\varepsilon\to 0^+} \int_{-\infty}^\infty \eta_\varepsilon(x)f(x) \, dx = f(0) </math>|5}} for all [[continuous function|continuous]] functions {{mvar|f}} having [[compact support]], or that this limit holds for all [[smooth function|smooth]] functions {{mvar|f}} with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the [[vague topology]] of measures, and the latter is convergence in the sense of [[distribution (mathematics)|distributions]].

====Approximations to the identity====
Typically a nascent delta function {{mvar|η<sub>ε</sub>}} can be constructed in the following manner. Let {{mvar|η}} be an absolutely integrable function on {{math|'''R'''}} of total integral {{math|1}}, and define <math display="block">\eta_\varepsilon(x) = \varepsilon^{-1} \eta \left (\frac{x}{\varepsilon} \right). </math> In {{mvar|n}} dimensions, one uses instead the scaling <math display="block">\eta_\varepsilon(x) = \varepsilon^{-n} \eta \left (\frac{x}{\varepsilon} \right). </math> Then a simple change of variables shows that {{mvar|η<sub>ε</sub>}} also has integral {{math|1}}. One may show that ({{EquationNote|5}}) holds for all continuous compactly supported functions {{mvar|f}},{{sfn|Stein|Weiss|1971|loc=Theorem 1.18}} and so {{mvar|η<sub>ε</sub>}} converges weakly to {{mvar|δ}} in the sense of measures.

The {{mvar|η<sub>ε</sub>}} constructed in this way are known as an '''approximation to the identity'''.{{sfn|Rudin|1991|loc=§II.6.31}} This terminology is used because the space {{math|''L''<sup>1</sup>('''R''')}} of absolutely integrable functions is closed under the operation of [[convolution]] of functions: {{math|''f'' ∗ ''g'' ∈ ''L''<sup>1</sup>('''R''')}} whenever {{mvar|f}} and {{mvar|g}} are in {{math|''L''<sup>1</sup>('''R''')}}. However, there is no identity in {{math|''L''<sup>1</sup>('''R''')}} for the convolution product: there is no element {{mvar|h}} such that {{math|1=''f'' ∗ ''h'' = ''f''}} for all {{mvar|f}}. Nevertheless, the sequence {{mvar|η<sub>ε</sub>}} does approximate such an identity in the sense that <math display="block">f*\eta_\varepsilon \to f \quad \text{as }\varepsilon\to 0.</math> This limit holds in the sense of [[mean convergence]] (convergence in {{math|''L''<sup>1</sup>}}). Further conditions on the {{mvar|η<sub>ε</sub>}}, for instance that it be a mollifier associated to a compactly supported function,<ref>More generally, one only needs {{math|1=''η'' = ''η''<sub>1</sub>}} to have an integrable radially symmetric decreasing rearrangement.</ref> are needed to ensure pointwise convergence [[almost everywhere]].

If the initial {{math|1=''η'' = ''η''<sub>1</sub>}} is itself smooth and compactly supported then the sequence is called a [[mollifier]]. The standard mollifier is obtained by choosing {{mvar|η}} to be a suitably normalized [[bump function]], for instance <math display="block">\eta(x) = \begin{cases} \frac{1}{I_n} \exp\Big( -\frac{1}{1-|x|^2} \Big) & \text{if } |x| < 1\\ 0 & \text{if } |x|\geq 1. \end{cases}</math> (the constant <math>I_n</math> ensures that the total integral is 1).
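
The weak limit ({{EquationNote|5}}) can be checked numerically. The following sketch (an illustration only, not drawn from the cited sources) uses the standard mollifier above in one dimension, computes the normalizing constant <math>I_1</math> by quadrature, and evaluates the integral against the arbitrarily chosen smooth test function <math display="inline">f(x) = \cos x + x^2</math>, for which {{math|1=''f''(0) = 1}}.

<syntaxhighlight lang="python">
import numpy as np

def eta(x):
    """Unnormalized standard mollifier: a smooth bump supported on [-1, 1]."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-1.5, 1.5, 200001)
dx = x[1] - x[0]
I1 = np.sum(eta(x)) * dx                    # normalizing constant for n = 1

def eta_eps(x, eps):
    """eta_eps(x) = eta(x/eps) / (I1 * eps); each member has total integral 1."""
    return eta(x / eps) / (I1 * eps)

def f(t):
    return np.cos(t) + t ** 2               # smooth test function with f(0) = 1

for eps in (1.0, 0.3, 0.1, 0.03):
    integral = np.sum(eta_eps(x, eps) * f(x)) * dx
    print(f"eps = {eps:5.2f}   integral = {integral:.6f}   (f(0) = {f(0.0):.6f})")
</syntaxhighlight>

As {{mvar|ε}} decreases, the printed integrals approach {{math|1=''f''(0) = 1}}, in agreement with ({{EquationNote|5}}).
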
In some situations such as [[numerical analysis]], a [[piecewise linear function|piecewise linear]] approximation to the identity is desirable. This can be obtained by taking {{math|''η''<sub>1</sub>}} to be a [[hat function]]. With this choice of {{math|''η''<sub>1</sub>}}, one has <math display="block"> \eta_\varepsilon(x) = \varepsilon^{-1}\max \left (1-\left|\frac{x}{\varepsilon}\right|,0 \right) </math> which are all continuous and compactly supported, although not smooth and so not a mollifier.

====Probabilistic considerations====
In the context of [[probability theory]], it is natural to impose the additional condition that the initial {{math|''η''<sub>1</sub>}} in an approximation to the identity should be positive, as such a function then represents a [[probability distribution]]. Convolution with a probability distribution is sometimes favorable because it does not result in [[overshoot (signal)|overshoot]] or undershoot, as the output is a [[convex combination]] of the input values, and thus falls between the maximum and minimum of the input function. Taking {{math|''η''<sub>1</sub>}} to be any probability distribution at all and letting {{math|1=''η<sub>ε</sub>''(''x'') = ''η''<sub>1</sub>(''x''/''ε'')/''ε''}} as above gives rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, {{mvar|η}} has mean {{math|0}} and small higher moments. For instance, if {{math|''η''<sub>1</sub>}} is the [[uniform distribution (continuous)|uniform distribution]] on {{nowrap|1=<math display="inline">\left[-\frac{1}{2},\frac{1}{2}\right]</math>,}} also known as the [[rectangular function]], then:{{sfn|Saichev|Woyczyński|1997|loc=§1.1 The "delta function" as viewed by a physicist and an engineer, p. 3}} <math display="block"> \eta_\varepsilon(x) = \frac{1}{\varepsilon}\operatorname{rect}\left(\frac{x}{\varepsilon}\right)= \begin{cases} \frac{1}{\varepsilon},&-\frac{\varepsilon}{2}<x<\frac{\varepsilon}{2}, \\ 0, &\text{otherwise}. \end{cases}</math> Another example is with the [[Wigner semicircle distribution]] <math display="block">\eta_\varepsilon(x)= \begin{cases} \frac{2}{\pi \varepsilon^2}\sqrt{\varepsilon^2 - x^2}, & -\varepsilon < x < \varepsilon, \\ 0, & \text{otherwise}. \end{cases}</math> This is continuous and compactly supported, but not a mollifier because it is not smooth.

====Semigroups====
Nascent delta functions often arise as convolution [[semigroup]]s.<ref>{{Cite book|last1=Milovanović|first1=Gradimir V.|url={{google books |plainurl=y |id=4U-5BQAAQBAJ}}|title=Analytic Number Theory, Approximation Theory, and Special Functions: In Honor of Hari M. Srivastava|last2=Rassias|first2=Michael Th|date=2014-07-08|publisher=Springer|isbn=978-1-4939-0258-3|language=en|page=[{{google books |plainurl=y |id=4U-5BQAAQBAJ|page=748 }} 748]}}</ref> This amounts to the further constraint that the convolution of {{mvar|η<sub>ε</sub>}} with {{mvar|η<sub>δ</sub>}} must satisfy <math display="block">\eta_\varepsilon * \eta_\delta = \eta_{\varepsilon+\delta}</math> for all {{math|1=''ε'', ''δ'' > 0}}. Convolution semigroups in {{math|''L''<sup>1</sup>}} that form a nascent delta function are always an approximation to the identity in the above sense; however, the semigroup condition is quite a strong restriction.
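
The semigroup identity can be checked numerically for a concrete family. The sketch below (illustrative only) uses the Gaussian family <math display="inline">\eta_\varepsilon(x) = \mathrm{e}^{-x^2/(2\varepsilon)}/\sqrt{2\pi\varepsilon}</math>, which is the heat kernel discussed in the next subsection; the grid and the values of {{mvar|ε}} and {{mvar|δ}} are arbitrary choices.

<syntaxhighlight lang="python">
import numpy as np

def eta(x, eps):
    """Gaussian (heat-kernel) family of variance eps."""
    return np.exp(-x ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

x = np.linspace(-20, 20, 8001)
dx = x[1] - x[0]
eps, delta = 0.4, 0.7

# Discrete approximation of the convolution (eta_eps * eta_delta)(x) on the grid.
conv = np.convolve(eta(x, eps), eta(x, delta), mode="same") * dx

# Compare with eta_(eps + delta); the maximum discrepancy should be small.
print(np.max(np.abs(conv - eta(x, eps + delta))))
</syntaxhighlight>
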
In practice, semigroups approximating the delta function arise as [[fundamental solution]]s or [[Green's function]]s to physically motivated [[elliptic partial differential equation|elliptic]] or [[parabolic partial differential equation|parabolic]] [[partial differential equations]]. In the context of [[applied mathematics]], semigroups arise as the output of a [[linear time-invariant system]]. Abstractly, if ''A'' is a linear operator acting on functions of ''x'', then a convolution semigroup arises by solving the [[initial value problem]] <math display="block">\begin{cases} \dfrac{\partial}{\partial t}\eta(t,x) = A\eta(t,x), \quad t>0 \\[5pt] \displaystyle\lim_{t\to 0^+} \eta(t,x) = \delta(x) \end{cases}</math> in which the limit is as usual understood in the weak sense. Setting {{math|1=''η<sub>ε</sub>''(''x'') = ''η''(''ε'', ''x'')}} gives the associated nascent delta function. Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.

=====The heat kernel=====
The [[heat kernel]], defined by <math display="block">\eta_\varepsilon(x) = \frac{1}{\sqrt{2\pi\varepsilon}} \mathrm{e}^{-\frac{x^2}{2\varepsilon}}</math> represents the temperature in an infinite wire at time {{math|1=''t'' = ''ε'' > 0}}, if a unit of heat energy is stored at the origin of the wire at time {{math|1=''t'' = 0}}. This semigroup evolves according to the one-dimensional [[heat equation]]: <math display="block">\frac{\partial u}{\partial t} = \frac{1}{2}\frac{\partial^2 u}{\partial x^2}.</math> In [[probability theory]], {{math|1=''η<sub>ε</sub>''(''x'')}} is a [[normal distribution]] of [[variance]] {{mvar|ε}} and mean {{math|0}}. It represents the [[probability density function|probability density]] at time {{math|1=''t'' = ''ε''}} of the position of a particle starting at the origin following a standard [[Brownian motion]]. In this context, the semigroup condition is then an expression of the [[Markov property]] of Brownian motion.

In higher-dimensional Euclidean space {{math|'''R'''<sup>''n''</sup>}}, the heat kernel is <math display="block">\eta_\varepsilon = \frac{1}{(2\pi\varepsilon)^{n/2}}\mathrm{e}^{-\frac{x\cdot x}{2\varepsilon}},</math> and has the same physical interpretation, {{lang|la|[[mutatis mutandis]]}}. It also represents a nascent delta function in the sense that {{math|''η<sub>ε</sub>'' → ''δ''}} in the distribution sense as {{math|''ε'' → 0}}.

=====The Poisson kernel=====
The [[Poisson kernel]] <math display="block">\eta_\varepsilon(x) = \frac{1}{\pi}\mathrm{Im}\left\{\frac{1}{x-\mathrm{i}\varepsilon}\right\}=\frac{1}{\pi} \frac{\varepsilon}{\varepsilon^2 + x^2}=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i} \xi x-|\varepsilon \xi|}\,d\xi</math> is the fundamental solution of the [[Laplace equation]] in the upper half-plane.{{sfn|Stein|Weiss|1971|loc=§I.1}} It represents the [[electrostatic potential]] in a semi-infinite plate whose potential along the edge is held fixed at the delta function.
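
The last expression above, giving the Poisson kernel as a Fourier integral, lends itself to a direct numerical check. The following sketch (illustrative only; the truncation of the integral and the grids are arbitrary choices) compares the truncated oscillatory integral with the closed form <math display="inline">\varepsilon/\bigl(\pi(\varepsilon^2+x^2)\bigr)</math>.

<syntaxhighlight lang="python">
import numpy as np

eps = 0.5
x = np.linspace(-5, 5, 11)

# Closed form of the Poisson kernel.
closed = eps / (np.pi * (eps ** 2 + x ** 2))

# (1/2pi) * integral of exp(i*xi*x - |eps*xi|) d(xi), truncated at |xi| = 100.
xi = np.linspace(-100, 100, 200001)
dxi = xi[1] - xi[0]
integrand = np.exp(1j * np.outer(x, xi) - np.abs(eps * xi))
numeric = (integrand.sum(axis=1) * dxi / (2 * np.pi)).real

print(np.max(np.abs(numeric - closed)))   # the discrepancy should be small
</syntaxhighlight>
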
The Poisson kernel is also closely related to the [[Cauchy distribution]] and [[Kernel (statistics)#Kernel functions in common use|Epanechnikov and Gaussian kernel]] functions.<ref>{{Cite book|last=Mader|first=Heidy M.|url={{google books |plainurl=y |id=e5Y_RRPxdyYC}}|title=Statistics in Volcanology|date=2006|publisher=Geological Society of London|isbn=978-1-86239-208-3|language=en|editor-link=Heidy Mader|page=[{{google books |plainurl=y |id=e5Y_RRPxdyYC|page=81}} 81]}}</ref> This semigroup evolves according to the equation <math display="block">\frac{\partial u}{\partial t} = -\left (-\frac{\partial^2}{\partial x^2} \right)^{\frac{1}{2}}u(t,x)</math> where the operator is rigorously defined as the [[Fourier multiplier]] <math display="block">\mathcal{F}\left[\left(-\frac{\partial^2}{\partial x^2} \right)^{\frac{1}{2}}f\right](\xi) = |2\pi\xi|\mathcal{F}f(\xi).</math>

====Oscillatory integrals====
In areas of physics such as [[wave propagation]] and [[wave|wave mechanics]], the equations involved are [[hyperbolic partial differential equations|hyperbolic]] and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated [[Cauchy problem]]s are generally [[oscillatory integral]]s. An example, which comes from a solution of the [[Euler–Tricomi equation]] of [[transonic]] [[gas dynamics]],{{sfn|Vallée|Soares|2004|loc=§7.2}} is the rescaled [[Airy function]] <math display="block">\varepsilon^{-1/3}\operatorname{Ai}\left (x\varepsilon^{-1/3} \right). </math> Although, using the Fourier transform, it is easy to see that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is the [[Dirichlet kernel]] below), rather than in the sense of measures.

Another example is the Cauchy problem for the [[wave equation]] in {{math|'''R'''<sup>1+1</sup>}}:{{sfn|Hörmander|1983|loc=§7.8}} <math display="block"> \begin{align} c^{-2}\frac{\partial^2u}{\partial t^2} - \Delta u &= 0\\ u=0,\quad \frac{\partial u}{\partial t} = \delta &\qquad \text{for }t=0. \end{align} </math> The solution {{mvar|u}} represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin.

Other approximations to the identity of this kind include the [[sinc function]] (used widely in electronics and telecommunications) <math display="block">\eta_\varepsilon(x)=\frac{1}{\pi x}\sin\left(\frac{x}{\varepsilon}\right)=\frac{1}{2\pi}\int_{-\frac{1}{\varepsilon}}^{\frac{1}{\varepsilon}} \cos(kx)\,dk </math> and the [[Bessel function]] <math display="block"> \eta_\varepsilon(x) = \frac{1}{\varepsilon}J_{\frac{1}{\varepsilon}} \left(\frac{x+1}{\varepsilon}\right). </math>

===Plane wave decomposition===
One approach to the study of a linear partial differential equation <math display="block">L[u]=f,</math> where {{mvar|L}} is a [[differential operator]] on {{math|'''R'''<sup>''n''</sup>}}, is to seek first a fundamental solution, which is a solution of the equation <math display="block">L[u]=\delta.</math> When {{mvar|L}} is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned).
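
The following sketch illustrates the last remark for one particularly simple constant-coefficient operator, chosen here only as an example: <math display="inline">L = 1 - d^2/dx^2</math>, whose fundamental solution is <math display="inline">e^{-|x|}/2</math>. The computation works on a large periodic interval and uses the [[fast Fourier transform]], so it only approximates the problem on the whole line; the domain size and grid are arbitrary choices.

<syntaxhighlight lang="python">
import numpy as np

# Fundamental solution of (1 - d^2/dx^2) u = delta, computed spectrally on a
# large periodic interval and compared with the exact solution exp(-|x|)/2.
L, N = 40.0, 4096
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

delta_grid = np.zeros(N)
delta_grid[N // 2] = 1.0 / dx                 # discrete surrogate for delta(x)

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # angular frequencies
u = np.real(np.fft.ifft(np.fft.fft(delta_grid) / (1.0 + k ** 2)))

exact = 0.5 * np.exp(-np.abs(x))
print(np.max(np.abs(u - exact)))              # small; largest near the kink at x = 0
</syntaxhighlight>
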
For more complicated operators, it is sometimes easier first to consider an equation of the form <math display="block">L[u]=h</math> where {{mvar|h}} is a [[plane wave]] function, meaning that it has the form <math display="block">h = h(x\cdot\xi)</math> for some vector {{mvar|ξ}}. Such an equation can be resolved (if the coefficients of {{mvar|L}} are [[analytic function]]s) by the [[Cauchy–Kovalevskaya theorem]] or (if the coefficients of {{mvar|L}} are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations.

Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by [[Johann Radon]], and then developed in this form by [[Fritz John]] ([[#CITEREFJohn1955|1955]]).{{sfn|Courant|Hilbert|1962|loc=§14}} Choose {{mvar|k}} so that {{math|''n'' + ''k''}} is an even integer, and for a real number {{mvar|s}}, put <math display="block">g(s) = \operatorname{Re}\left[\frac{-s^k\log(-is)}{k!(2\pi i)^n}\right] =\begin{cases} \frac{|s|^k}{4k!(2\pi i)^{n-1}} &n \text{ odd}\\[5pt] -\frac{|s|^k\log|s|}{k!(2\pi i)^n}&n \text{ even.} \end{cases}</math> Then {{mvar|δ}} is obtained by applying a power of the [[Laplacian]] to the integral with respect to the unit [[sphere measure]] {{mvar|dω}} of {{math|''g''(''x'' · ''ξ'')}} for {{mvar|ξ}} in the [[unit sphere]] {{math|''S''<sup>''n''−1</sup>}}: <math display="block">\delta(x) = \Delta_x^{(n+k)/2} \int_{S^{n-1}} g(x\cdot\xi)\,d\omega_\xi.</math> The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function {{mvar|φ}}, <math display="block">\varphi(x) = \int_{\mathbf{R}^n}\varphi(y)\,dy\,\Delta_x^{\frac{n+k}{2}} \int_{S^{n-1}} g((x-y)\cdot\xi)\,d\omega_\xi.</math> The result follows from the formula for the [[Newtonian potential]] (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the [[Radon transform]] because it recovers the value of {{math|''φ''(''x'')}} from its integrals over hyperplanes. For instance, if {{mvar|n}} is odd and {{math|1=''k'' = 1}}, then the integral on the right-hand side is <math display="block"> \begin{align} & c_n \Delta^{\frac{n+1}{2}}_x\iint_{S^{n-1}} \varphi(y)|(y-x) \cdot \xi| \, d\omega_\xi \, dy \\[5pt] & \qquad = c_n \Delta^{(n+1)/2}_x \int_{S^{n-1}} \, d\omega_\xi \int_{-\infty}^\infty |p| R\varphi(\xi,p+x\cdot\xi)\,dp \end{align} </math> where {{math|''Rφ''(''ξ'', ''p'')}} is the Radon transform of {{mvar|φ}}: <math display="block">R\varphi(\xi,p) = \int_{x\cdot\xi=p} \varphi(x)\,d^{n-1}x.</math> An alternative equivalent expression of the plane wave decomposition is:{{sfn|Gelfand|Shilov|1966–1968|loc=I, §3.10}} <math display="block">\delta(x) = \begin{cases} \frac{(n-1)!}{(2\pi i)^n}\displaystyle\int_{S^{n-1}}(x\cdot\xi)^{-n} \, d\omega_\xi & n\text{ even} \\ \frac{1}{2(2\pi i)^{n-1}}\displaystyle\int_{S^{n-1}}\delta^{(n-1)}(x\cdot\xi)\,d\omega_\xi & n\text{ odd}. \end{cases}</math>

===Fourier transform===
The delta function is a [[Distribution (mathematics)#Tempered distributions and Fourier transform|tempered distribution]], and therefore it has a well-defined [[Fourier transform]].
Formally, one finds<ref>The numerical factors depend on the [[Fourier transform#Other conventions|conventions]] for the Fourier transform.</ref> <math display="block">\widehat{\delta}(\xi)=\int_{-\infty}^\infty e^{-2\pi i x \xi} \,\delta(x)dx = 1.</math> Properly speaking, the Fourier transform of a distribution is defined by imposing [[self-adjoint]]ness of the Fourier transform under the [[Dual_system|duality pairing]] <math>\langle\cdot,\cdot\rangle</math> of tempered distributions with [[Schwartz functions]]. Thus <math>\widehat{\delta}</math> is defined as the unique tempered distribution satisfying <math display="block">\langle\widehat{\delta},\varphi\rangle = \langle\delta,\widehat{\varphi}\rangle</math> for all Schwartz functions {{mvar|φ}}. And indeed it follows from this that <math>\widehat{\delta}=1.</math>

As a result of this identity, the [[convolution]] of the delta function with any other tempered distribution {{mvar|S}} is simply {{mvar|S}}: <math display="block">S*\delta = S.</math> That is to say that {{mvar|δ}} is an [[identity element]] for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is an [[associative algebra]] with identity the delta function. This property is fundamental in [[signal processing]], as convolution with a tempered distribution is a [[linear time-invariant system]], and applying the linear time-invariant system to the delta function measures its [[impulse response]]. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for {{mvar|δ}}, and once it is known, it characterizes the system completely. See {{section link | LTI system theory |Impulse response and convolution}}.

The inverse Fourier transform of the tempered distribution {{math|1=''f''(''ξ'') = 1}} is the delta function. Formally, this is expressed as <math display="block">\int_{-\infty}^\infty 1 \cdot e^{2\pi i x\xi}\,d\xi = \delta(x)</math> and more rigorously, it follows since <math display="block">\langle 1, \widehat{f}\rangle = f(0) = \langle\delta,f\rangle</math> for all Schwartz functions {{mvar|f}}.

In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on {{math|'''R'''}}. Formally, one has <math display="block">\int_{-\infty}^\infty e^{i 2\pi \xi_1 t} \left[e^{i 2\pi \xi_2 t}\right]^*\,dt = \int_{-\infty}^\infty e^{-i 2\pi (\xi_2 - \xi_1) t} \,dt = \delta(\xi_2 - \xi_1).</math> This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution <math display="block">f(t) = e^{i2\pi\xi_1 t}</math> is <math display="block">\widehat{f}(\xi_2) = \delta(\xi_1-\xi_2)</math> which again follows by imposing self-adjointness of the Fourier transform.

By [[analytic continuation]] of the Fourier transform, the [[Laplace transform]] of the delta function is found to be{{sfn|Bracewell|1986}} <math display="block"> \int_{0}^{\infty}\delta(t-a)\,e^{-st} \, dt=e^{-sa}.</math>
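
The identity {{math|1=''S'' ∗ ''δ'' = ''S''}} has a familiar discrete counterpart in signal processing: convolving a finite impulse response filter with a unit impulse (the [[Kronecker delta]]) returns the filter coefficients themselves, which is exactly the impulse response mentioned above. A minimal sketch, with an arbitrary example filter:

<syntaxhighlight lang="python">
import numpy as np

h = np.array([0.25, 0.5, 1.0, 0.5, 0.25])   # an arbitrary FIR filter
impulse = np.zeros(9)
impulse[0] = 1.0                             # discrete unit impulse

response = np.convolve(h, impulse)[:len(h)]
print(np.allclose(response, h))              # True: the output reproduces h
</syntaxhighlight>
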
====Fourier kernels====
{{See also|Convergence of Fourier series}}
In the study of [[Fourier series]], a major question consists of determining whether and in what sense the Fourier series associated with a [[periodic function]] converges to the function. The {{mvar|n}}-th partial sum of the Fourier series of a function {{mvar|f}} of period {{math|2π}} is defined by convolution (on the interval {{closed-closed|−π,π}}) with the [[Dirichlet kernel]]: <math display="block">D_N(x) = \sum_{n=-N}^N e^{inx} = \frac{\sin\left(\left(N+\frac12\right)x\right)}{\sin(x/2)}.</math> Thus, <math display="block">s_N(f)(x) = D_N*f(x) = \sum_{n=-N}^N a_n e^{inx}</math> where <math display="block">a_n = \frac{1}{2\pi}\int_{-\pi}^\pi f(y)e^{-iny}\,dy.</math> A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval {{closed-closed|−π,π}} tends to a multiple of the delta function as {{math|''N'' → ∞}}. This is interpreted in the distribution sense, that <math display="block">s_N(f)(0) = \int_{-\pi}^{\pi} D_N(x)f(x)\,dx \to 2\pi f(0)</math> for every compactly supported {{em|smooth}} function {{mvar|f}}. Thus, formally one has <math display="block">\delta(x) = \frac1{2\pi} \sum_{n=-\infty}^\infty e^{inx}</math> on the interval {{closed-closed|−π,π}}. Despite this, the result does not hold for all compactly supported {{em|continuous}} functions: that is, {{math|''D<sub>N</sub>''}} does not converge weakly in the sense of measures.

The lack of convergence of the Fourier series has led to the introduction of a variety of [[summability methods]] to produce convergence. The method of [[Cesàro summation]] leads to the [[Fejér kernel]]{{sfn|Lang|1997|p=312}} <math display="block">F_N(x) = \frac1N\sum_{n=0}^{N-1} D_n(x) = \frac{1}{N}\left(\frac{\sin \frac{Nx}{2}}{\sin \frac{x}{2}}\right)^2.</math> The Fejér kernels tend to the delta function in the stronger sense that<ref>In the terminology of {{harvtxt|Lang|1997}}, the Fejér kernel is a Dirac sequence, whereas the Dirichlet kernel is not.</ref> <math display="block">\int_{-\pi}^{\pi} F_N(x)f(x)\,dx \to 2\pi f(0)</math> for every compactly supported {{em|continuous}} function {{mvar|f}}. The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.

===Hilbert space theory===
The Dirac delta distribution is a [[densely defined]] [[unbounded operator|unbounded]] [[linear functional]] on the [[Hilbert space]] [[Lp space|L<sup>2</sup>]] of [[square-integrable function]]s. Indeed, smooth compactly supported functions are [[dense set|dense]] in {{math|''L''<sup>2</sup>}}, and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of {{math|''L''<sup>2</sup>}} and to give a stronger [[topology]] on which the delta function defines a [[bounded linear functional]].

====Sobolev spaces====
The [[Sobolev embedding theorem]] for [[Sobolev space]]s on the real line {{math|'''R'''}} implies that any square-integrable function {{mvar|f}} such that <math display="block">\|f\|_{H^1}^2 = \int_{-\infty}^\infty |\widehat{f}(\xi)|^2 (1+|\xi|^2)\,d\xi < \infty</math> is automatically continuous, and satisfies in particular <math display="block">|\delta[f]|=|f(0)| \le C \|f\|_{H^1}.</math> Thus {{mvar|δ}} is a bounded linear functional on the Sobolev space {{math|''H''<sup>1</sup>}}. Equivalently {{mvar|δ}} is an element of the [[continuous dual space]] {{math|''H''<sup>−1</sup>}} of {{math|''H''<sup>1</sup>}}. More generally, in {{mvar|n}} dimensions, one has {{math|''δ'' ∈ ''H''<sup>−''s''</sup>('''R'''<sup>''n''</sup>)}} provided {{math|''s'' > {{sfrac|''n''|2}}}}.
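
The contrast between {{math|''L''<sup>2</sup>}} and {{math|''H''<sup>1</sup>}} in this subsection can be illustrated numerically with a family of increasingly peaked Gaussians <math display="inline">f_a(x) = e^{-a x^2}</math>: the ratio <math display="inline">|f(0)|/\|f\|_{L^2}</math> grows without bound, while <math display="inline">|f(0)|/\|f\|_{H^1}</math> stays bounded. The sketch below (illustrative only) uses the equivalent norm <math display="inline">\left(\int (|f|^2+|f'|^2)\,dx\right)^{1/2}</math>, which differs from the Fourier-side definition above only by convention-dependent constants.

<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(-10, 10, 400001)
dx = x[1] - x[0]

for a in (1, 10, 100, 1000, 10000):
    f = np.exp(-a * x ** 2)                  # peaked test function with f(0) = 1
    df = np.gradient(f, dx)
    l2 = np.sqrt(np.sum(f ** 2) * dx)
    h1 = np.sqrt(np.sum(f ** 2 + df ** 2) * dx)
    print(f"a = {a:6d}   f(0)/||f||_L2 = {1.0 / l2:8.3f}   f(0)/||f||_H1 = {1.0 / h1:7.4f}")
</syntaxhighlight>
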
====Spaces of holomorphic functions====
In [[complex analysis]], the delta function enters via [[Cauchy's integral formula]], which asserts that if {{mvar|D}} is a domain in the [[complex plane]] with smooth boundary, then <math display="block">f(z) = \frac{1}{2\pi i} \oint_{\partial D} \frac{f(\zeta)\,d\zeta}{\zeta-z},\quad z\in D</math> for all [[holomorphic function]]s {{mvar|f}} in {{mvar|D}} that are continuous on the closure of {{mvar|D}}. As a result, the delta function {{math|''δ''<sub>''z''</sub>}} is represented in this class of holomorphic functions by the Cauchy integral: <math display="block">\delta_z[f] = f(z) = \frac{1}{2\pi i} \oint_{\partial D} \frac{f(\zeta)\,d\zeta}{\zeta-z}.</math> Moreover, let {{math|''H''<sup>2</sup>(∂''D'')}} be the [[Hardy space]] consisting of the closure in {{math|''L''<sup>2</sup>(∂''D'')}} of all holomorphic functions in {{mvar|D}} continuous up to the boundary of {{mvar|D}}. Then functions in {{math|''H''<sup>2</sup>(∂''D'')}} uniquely extend to holomorphic functions in {{mvar|D}}, and the Cauchy integral formula continues to hold. In particular for {{math|''z'' ∈ ''D''}}, the delta function {{mvar|δ<sub>z</sub>}} is a continuous linear functional on {{math|''H''<sup>2</sup>(∂''D'')}}. This is a special case of the situation in [[several complex variables]] in which, for smooth domains {{mvar|D}}, the [[Szegő kernel]] plays the role of the Cauchy integral.{{sfn|Hazewinkel|1995|p=[{{google books |plainurl=y |id=PE1a-EIG22kC|page=357}} 357]}}

Another representation of the delta function in a space of holomorphic functions is on the space <math>H(D)\cap L^2(D)</math> of square-integrable holomorphic functions in an open set <math>D\subset\mathbb C^n</math>. This is a closed subspace of <math>L^2(D)</math>, and therefore is a Hilbert space. On the other hand, the functional that evaluates a holomorphic function in <math>H(D)\cap L^2(D)</math> at a point <math>z</math> of <math>D</math> is a continuous functional, and so by the [[Riesz representation theorem]], is represented by integration against a kernel <math>K_z(\zeta)</math>, the [[Bergman kernel]]. This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called a [[reproducing kernel Hilbert space]]. In the special case of the unit disc, one has <math display="block">\delta_w[f] = f(w) = \frac1\pi\iint_{|z|<1} \frac{f(z)\,dx\,dy}{(1-\bar zw)^2}.</math>

====Resolutions of the identity====
Given a complete [[orthonormal basis]] set of functions {{math|{{brace|''φ''<sub>''n''</sub>}}}} in a separable Hilbert space, for example, the normalized [[eigenvector]]s of a [[Compact operator on Hilbert space#Spectral theorem|compact self-adjoint operator]], any vector {{mvar|f}} can be expressed as <math display="block">f = \sum_{n=1}^\infty \alpha_n \varphi_n. </math> The coefficients {{math|{{brace|''α''<sub>''n''</sub>}}}} are found as <math display="block">\alpha_n = \langle \varphi_n, f \rangle,</math> which may be represented by the notation: <math display="block">\alpha_n = \varphi_n^\dagger f, </math> a form of the [[bra–ket notation]] of Dirac.<ref>The development of this section in bra–ket notation is found in {{harv|Levin|2002|loc=Coordinate-space wave functions and completeness, pp. 109''ff''}}</ref> Adopting this notation, the expansion of {{mvar|f}} takes the [[Dyadic tensor|dyadic]] form:{{sfn|Davis|Thomson|2000|loc=Perfect operators, p.344}} <math display="block">f = \sum_{n=1}^\infty \varphi_n \left ( \varphi_n^\dagger f \right). </math>

Letting {{mvar|I}} denote the [[identity operator]] on the Hilbert space, the expression <math display="block">I = \sum_{n=1}^\infty \varphi_n \varphi_n^\dagger, </math> is called a [[Borel functional calculus#Resolution of the identity|resolution of the identity]]. When the Hilbert space is the space {{math|''L''<sup>2</sup>(''D'')}} of square-integrable functions on a domain {{mvar|D}}, the quantity: <math display="block">\varphi_n \varphi_n^\dagger, </math> is an integral operator, and the expression for {{mvar|f}} can be rewritten <math display="block">f(x) = \sum_{n=1}^\infty \int_D\, \left( \varphi_n (x) \varphi_n^*(\xi)\right) f(\xi) \, d \xi.</math> The right-hand side converges to {{mvar|f}} in the {{math|''L''<sup>2</sup>}} sense. It need not hold in a pointwise sense, even when {{mvar|f}} is a continuous function. Nevertheless, it is common to abuse notation and write <math display="block">f(x) = \int \, \delta(x-\xi) f (\xi)\, d\xi, </math> resulting in the representation of the delta function:{{sfn|Davis|Thomson|2000|loc=Equation 8.9.11, p. 344}} <math display="block">\delta(x-\xi) = \sum_{n=1}^\infty \varphi_n (x) \varphi_n^*(\xi). </math> With a suitable [[rigged Hilbert space]] {{math|(Φ, ''L''<sup>2</sup>(''D''), Φ*)}} where {{math|Φ ⊂ ''L''<sup>2</sup>(''D'')}} contains all compactly supported smooth functions, this summation may converge in {{math|Φ*}}, depending on the properties of the basis {{math|''φ''<sub>''n''</sub>}}. In most cases of practical interest, the orthonormal basis comes from an integral or differential operator (e.g. the [[heat kernel]]), in which case the series converges in the [[Distribution (mathematics)#Distributions|distribution]] sense.{{sfn|de la Madrid|Bohm|Gadella|2002}}
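
As a concrete numerical illustration of this representation (chosen here for illustration only, not taken from the cited sources), take the orthonormal Dirichlet eigenfunctions of {{math|−''d''<sup>2</sup>/''dx''<sup>2</sup>}} on {{open-open|0,π}}, namely <math display="inline">\varphi_n(x) = \sqrt{2/\pi}\,\sin(nx)</math>. Applying the truncated kernel <math display="inline">\sum_{n=1}^N \varphi_n(x)\varphi_n(\xi)</math> to a test function reproduces its value, as the representation of the delta function suggests.

<syntaxhighlight lang="python">
import numpy as np

xi = np.linspace(0, np.pi, 20001)
dxi = xi[1] - xi[0]
x0 = 1.0                                      # point at which to reproduce f

def f(t):
    return t * (np.pi - t) * np.exp(t)        # test function vanishing at 0 and pi

for N in (5, 20, 80, 320):
    n = np.arange(1, N + 1)[:, None]
    # Truncated kernel sum_{n<=N} phi_n(x0) phi_n(xi) with phi_n = sqrt(2/pi) sin(n x).
    kernel = (2 / np.pi) * np.sin(n * x0) * np.sin(n * xi)
    approx = np.sum(kernel.sum(axis=0) * f(xi)) * dxi
    print(f"N = {N:4d}   integral = {approx:.6f}   target f(x0) = {f(x0):.6f}")
</syntaxhighlight>

As {{mvar|N}} grows, the printed integrals converge to {{math|''f''(''x''<sub>0</sub>)}}, in accordance with the eigenfunction expansion of {{mvar|f}}.
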
===Infinitesimal delta functions===
[[Cauchy]] used an infinitesimal {{mvar|α}} to write down a unit impulse, infinitely tall and narrow Dirac-type delta function {{mvar|δ<sub>α</sub>}} satisfying <math display="inline">\int F(x)\delta_\alpha(x) \,dx = F(0)</math> in a number of articles in 1827.{{sfn|Laugwitz|1989}} Cauchy defined an infinitesimal in ''[[Cours d'Analyse]]'' (1827) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and [[Lazare Carnot]]'s terminology.

[[Non-standard analysis]] allows one to rigorously treat infinitesimals. The article by {{harvtxt|Yamashita|2007}} contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the [[hyperreal number|hyperreals]]. Here the Dirac delta can be given by an actual function, having the property that for every real function {{mvar|F}} one has <math display="inline">\int F(x)\delta_\alpha(x) \, dx = F(0)</math> as anticipated by Fourier and Cauchy.