Distribution (mathematics)
==Notation==
The following notation will be used throughout this article:
* <math>n</math> is a fixed positive integer and <math>U</math> is a fixed non-empty [[Open set|open subset]] of [[Euclidean space]] <math>\R^n.</math>
* <math>\N = \{0, 1, 2, \ldots\}</math> denotes the [[natural number]]s (which here include <math>0</math>).
* <math>k</math> will denote a non-negative integer or <math>\infty.</math>
* If <math>f</math> is a [[Function (mathematics)|function]] then <math>\operatorname{Dom}(f)</math> will denote its [[Domain of a function|domain]], and the '''{{em|[[Support (mathematics)|{{visible anchor|support of a function|text=support}}]]}}''' of <math>f,</math> denoted by <math>\operatorname{supp}(f),</math> is defined to be the [[Closure (topology)|closure]] of the set <math>\{x \in \operatorname{Dom}(f): f(x) \neq 0\}</math> in <math>\operatorname{Dom}(f).</math>
* For two functions <math>f, g : U \to \Complex,</math> the following notation defines a canonical [[Dual system|pairing]]: <math display=block>\langle f, g\rangle := \int_U f(x) g(x) \,dx.</math>
* A '''{{em|[[Multi-index notation|multi-index]]}} of size''' <math>n</math> is an element of <math>\N^n</math> (since <math>n</math> is fixed, if the size of a multi-index is omitted then it should be assumed to be <math>n</math>).
The '''{{em|length}}''' of a multi-index <math>\alpha = (\alpha_1, \ldots, \alpha_n) \in \N^n</math> is defined as <math>\alpha_1+\cdots+\alpha_n</math> and denoted by <math>|\alpha|.</math> Multi-indices are particularly useful when dealing with functions of several variables; in particular, we introduce the following notations for a given multi-index <math>\alpha = (\alpha_1, \ldots, \alpha_n) \in \N^n</math>: <math display=block>\begin{align} x^\alpha &= x_1^{\alpha_1} \cdots x_n^{\alpha_n} \\ \partial^\alpha &= \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots \partial x_n^{\alpha_n}} \end{align}</math> We also introduce a partial order on the set of all multi-indices by declaring <math>\beta \ge \alpha</math> if and only if <math>\beta_i \ge \alpha_i</math> for all <math>1 \le i\le n.</math> When <math>\beta \ge \alpha</math> we define their multi-index binomial coefficient as: <math display=block>\binom{\beta}{\alpha} := \binom{\beta_1}{\alpha_1} \cdots \binom{\beta_n}{\alpha_n}.</math>

<!--
==Basic idea==
[[File:Mollifier Illustration.svg|right|thumb|280px|A typical test function, the [[bump function]] <math>\Psi(x).</math> It is [[smooth function|smooth]] (infinitely differentiable) and has [[compact support]] (it is zero outside an interval, in this case the interval <math>[-1, 1]</math>).]]
Distributions are a class of [[linear functional]]s that map a set of {{em|test functions}} (conventional and [[well-behaved]] functions) into the set of real or complex numbers. In the simplest case, the set of test functions considered is <math>\mathcal{D}(\R),</math> which is the set of functions <math>\varphi:\R\to\R</math> having two properties:
* <math>\varphi</math> is [[Smooth function|smooth]] (infinitely differentiable);
* <math>\varphi</math> has [[compact support]] (it is identically zero outside some bounded interval).
A distribution {{mvar|T}} is a continuous linear mapping <math>T:\mathcal{D}(\R)\to\R.</math> Instead of writing <math>T(\varphi),</math> it is conventional to write <math>\langle T, \varphi \rangle</math> for the value of <math>T</math> acting on a test function <math>\varphi.</math> A simple example of a distribution is the [[Dirac delta]] <math>\delta,</math> defined by <math display=block>\langle \delta, \varphi \rangle = \varphi(0),</math> meaning that <math>\delta</math> evaluates a test function at {{math|0}}. Its physical interpretation is as the density of a point source. As described next, there are straightforward mappings from both [[locally integrable function]]s and [[Radon measure]]s to corresponding distributions, but not all distributions can be formed in this manner.

===Functions and measures as distributions===
Suppose <math>f : \R \to \R</math> is a [[locally integrable function]]. Then a corresponding distribution, denoted by <math>T_f,</math> may be defined by <math display=block>\langle T_f, \varphi \rangle = \int_\R f(x) \varphi(x) \,dx \qquad \text{for } \varphi \in \mathcal{D}(\R).</math> This integral is a [[real number]] which depends [[Linear operator|linearly]] and [[Continuous function|continuously]] on <math>\varphi.</math> Conversely, the values of the distribution <math>T_f</math> on test functions in <math>\mathcal{D}(\R)</math> determine the pointwise almost everywhere values of the function <math>f</math> on <math>\R.</math> In a conventional [[abuse of notation]], <math>f</math> is often used to represent both the original function <math>f</math> and the corresponding distribution <math>T_f.</math> This example suggests the definition of a distribution as a linear and, in an appropriate sense, continuous [[Functional (mathematics)|functional]] on the space of test functions <math>\mathcal{D}(\R).</math>

Similarly, if <math>\mu</math> is a [[Radon measure]] on <math>\R,</math> then a corresponding distribution, denoted by
<math>R_{\mu},</math> may be defined by <math display=block>\left\langle R_\mu, \varphi \right\rangle = \int_\R \varphi\, d\mu \qquad \text{ for } \varphi \in \mathcal{D}(\R).</math> This integral also depends linearly and continuously on <math>\varphi,</math> so that <math>R_{\mu}</math> is a distribution. If <math>\mu</math> is [[absolute continuity|absolutely continuous]] with respect to Lebesgue measure with density <math>f</math> (that is, <math>d \mu = f\,dx</math>), then this definition for <math>R_{\mu}</math> is the same as the previous one for <math>T_f,</math> but if <math>\mu</math> is not absolutely continuous, then <math>R_{\mu}</math> is a distribution that is not associated with a function. For example, if <math>P</math> is the point-mass measure on <math>\R</math> that assigns measure one to the singleton set <math>\{0\}</math> and measure zero to sets that do not contain zero, then <math display=block>\int_\R \varphi\, dP = \varphi(0),</math> so that <math>R_P = \delta</math> is the Dirac delta.

===Adding and multiplying distributions===
Distributions may be multiplied by real numbers and added together, so they form a real [[vector space]]. A distribution may also be multiplied by an infinitely differentiable function to give another distribution, but [[Distribution (mathematics)#Problem of multiplication|it is not possible to define a product of general distributions]] that extends the usual pointwise product of functions and has the same algebraic properties. This result was shown by {{harvtxt|Schwartz|1954}}, and is usually referred to as the {{em|Schwartz impossibility theorem}}.

===Derivatives of distributions===
It is desirable to choose a definition for the derivative of a distribution which, at least for distributions derived from smooth functions, has the property that <math>T'_f = T_{f'}</math> (i.e.
<math>(T_f)' = T_{(f')},</math> where <math>f'</math> is the usual derivative of <math>f</math> and <math>(T_f)'</math> denotes the derivative of the distribution <math>T_f,</math> which we wish to define). If <math>\phi</math> is a test function, we can use [[integration by parts]] to see that <math display=block>\langle f', \phi \rangle = \int_\R f'\phi \,dx = \Big[ f(x) \phi(x) \Big]_{-\infty}^\infty -\int_\R f \phi' \,dx = -\langle f, \phi' \rangle,</math> where the last equality follows from the fact that <math>\phi</math> has compact support, so it is zero outside of a bounded set. This suggests that if {{mvar|T}} is a {{em|distribution}}, we should define its derivative <math>T'</math> by <math display=block>\langle T', \phi \rangle = - \langle T, \phi' \rangle.</math> It turns out that this is the proper definition: it extends the ordinary definition of the derivative, every distribution becomes infinitely differentiable, and the usual properties of derivatives hold.

'''Example''': Recall that the [[Dirac delta]] (the so-called Dirac delta "function") is the distribution defined by the equation <math display=block>\langle \delta, \phi \rangle = \phi(0).</math> It is the derivative of the distribution corresponding to the [[Heaviside step function]] <math>H</math>: for any test function <math>\phi,</math> <math display=block>\langle H', \phi \rangle = -\int_{-\infty}^\infty H(x) \phi'(x) \, dx = -\phi(\infty) + \phi(0) = \langle \delta, \phi \rangle,</math> so <math>H' = \delta.</math> Note that <math>\phi(\infty)=0</math> because <math>\phi</math> has compact support by our definition of a test function. Similarly, the derivative of the Dirac delta is the distribution defined by the equation <math display=block>\langle \delta', \phi \rangle = - \phi'(0).</math> This latter distribution is an example of a distribution that is not derived from a function or a measure. Its physical interpretation is the density of a dipole source.
Just as the Dirac impulse can be realized in the weak limit as a sequence of various kinds of constant-norm bump functions of ever-increasing amplitude and narrowing support, its derivative can, by definition, be realized as the weak limit of the derivatives of those functions, which are antisymmetric about the eventual distribution's point of singular support.-->
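As a quick illustration (not part of the article itself), the multi-index conventions above, the length <math>|\alpha|,</math> the monomial <math>x^\alpha,</math> and the multi-index binomial coefficient <math>\tbinom{\beta}{\alpha},</math> can be checked in a few lines of Python; the helper names `length`, `monomial`, and `multi_binom` are invented for this sketch.

```python
from math import comb, prod

def length(alpha):
    """The length |alpha| = alpha_1 + ... + alpha_n of a multi-index."""
    return sum(alpha)

def monomial(x, alpha):
    """The monomial x^alpha = x_1^{alpha_1} * ... * x_n^{alpha_n}."""
    return prod(xi ** ai for xi, ai in zip(x, alpha))

def multi_binom(beta, alpha):
    """Multi-index binomial coefficient, defined when beta >= alpha componentwise."""
    assert all(b >= a for b, a in zip(beta, alpha)), "requires beta >= alpha"
    return prod(comb(b, a) for b, a in zip(beta, alpha))

alpha = (1, 2, 0)  # a multi-index of size n = 3
beta = (2, 3, 1)   # beta >= alpha componentwise
print(length(alpha))                     # |alpha| = 3
print(monomial((2.0, 3.0, 5.0), alpha))  # 2^1 * 3^2 * 5^0 = 18.0
print(multi_binom(beta, alpha))          # C(2,1)*C(3,2)*C(1,0) = 6
```

Note that the componentwise order is only partial: for example <math>(1, 0)</math> and <math>(0, 1)</math> are incomparable, so <math>\tbinom{\beta}{\alpha}</math> is not defined for every pair.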
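The identity <math>\langle H', \phi \rangle = -\langle H, \phi' \rangle = \phi(0)</math> derived in the example can also be verified numerically. The sketch below is an illustration under stated assumptions, not the article's method: it takes the standard bump function as the test function <math>\phi</math> and approximates <math>-\int H \phi' \,dx</math> with a midpoint Riemann sum.

```python
import math

def bump(x):
    """A standard test function: smooth, with compact support in [-1, 1]."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def bump_prime(x):
    """Exact derivative of the bump function (also supported in [-1, 1])."""
    if abs(x) >= 1:
        return 0.0
    return bump(x) * (-2.0 * x / (1.0 - x * x) ** 2)

def heaviside(x):
    return 1.0 if x >= 0 else 0.0

# <H', phi> := -<H, phi'> = -integral of H(x) * phi'(x), via a midpoint sum.
N = 100_000
a, b = -2.0, 2.0
h = (b - a) / N
integral = sum(heaviside(a + (i + 0.5) * h) * bump_prime(a + (i + 0.5) * h)
               for i in range(N)) * h
print(-integral)    # ≈ bump(0) = e^{-1} ≈ 0.3679, i.e. <delta, phi>
print(bump(0.0))
```

The negated sum matches <math>\phi(0)</math> to high accuracy, which is exactly the statement <math>H' = \delta</math> tested against this one <math>\phi.</math>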