Moment problem


[[File:Standard deviation diagram.svg|thumb|Example: given the mean and the variance <math>\sigma^2</math> (with all higher cumulants equal to zero), the normal distribution is the distribution solving the moment problem.]]

In mathematics, a moment problem arises as the result of trying to invert the mapping that takes a measure <math>\mu</math> to the sequence of moments

<math>m_n = \int_{-\infty}^\infty x^n \,d\mu(x)\,.</math>
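
For instance, if <math>\mu</math> is the Lebesgue measure on the unit interval <math>[0,1]</math>, then

<math>m_n = \int_0^1 x^n \,dx = \frac{1}{n+1}\,;</math>

the moment problem asks, conversely, which sequences of numbers arise in this way, and from which measures.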

More generally, one may consider

<math>m_n = \int_{-\infty}^\infty M_n(x) \,d\mu(x)</math>

for an arbitrary sequence of functions <math>M_n</math>.

Introduction

In the classical setting, <math>\mu</math> is a measure on the real line, and <math>M</math> is the sequence <math>\{x^n : n=1,2,\dotsc\}</math>. In this form the question appears in probability theory, asking whether there is a probability measure having specified mean, variance and so on, and whether it is unique.

There are three named classical moment problems: the Hamburger moment problem in which the support of <math>\mu</math> is allowed to be the whole real line; the Stieltjes moment problem, for <math>[0,\infty)</math>; and the Hausdorff moment problem for a bounded interval, which without loss of generality may be taken as <math>[0,1]</math>.

The moment problem also extends to complex analysis as the trigonometric moment problem, in which the Hankel matrices are replaced by Toeplitz matrices and the support of <math>\mu</math> is the complex unit circle instead of the real line.
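
In one common normalization, the data of the trigonometric moment problem are the numbers

<math>c_k = \int_0^{2\pi} e^{-ik\theta} \,d\mu(\theta), \qquad k = 0, 1, 2, \dotsc,</math>

and the existence of a positive measure <math>\mu</math> on the unit circle with these moments is equivalent to the positive semi-definiteness of the Toeplitz matrices <math>(T_n)_{jk} = c_{j-k}</math>, where <math>c_{-k} = \overline{c_k}</math>.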

Existence

A sequence of numbers <math>m_n</math> is the sequence of moments of a measure <math>\mu</math> if and only if a certain positivity condition is fulfilled; namely, the Hankel matrices <math>H_n</math>,

<math>(H_n)_{ij} = m_{i+j}\,,</math>

should be positive semi-definite. Indeed, positive semi-definiteness of the Hankel matrices is equivalent to the existence of a linear functional <math>\Lambda</math> on the polynomial ring <math>\mathbb{R}[x]</math> such that <math>\Lambda(x^n) = m_n</math> and <math>\Lambda(f^2) \geq 0</math> for every polynomial <math>f</math> (that is, <math>\Lambda</math> is non-negative on sums of squares). In the univariate case a non-negative polynomial can always be written as a sum of squares, so <math>\Lambda</math> is non-negative on every polynomial that is non-negative on the real line. By Haviland's theorem, such a functional comes from a measure, that is, <math>\Lambda(x^n) = \int_{-\infty}^{\infty} x^n \,d\mu</math>. A condition of similar form is necessary and sufficient for the existence of a measure <math>\mu</math> supported on a given interval <math>[a,b]</math>.
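
Indeed, writing <math>P(x) = \sum_{k=0}^n a_k x^k</math>, the value of <math>\Lambda</math> on the square of <math>P</math> is a quadratic form in the coefficients,

<math>\Lambda(P^2) = \sum_{i,j=0}^n a_i a_j m_{i+j} = a^{\mathsf T} H_n a\,,</math>

so <math>\Lambda</math> is non-negative on squares of polynomials of degree at most <math>n</math> exactly when <math>H_n</math> is positive semi-definite.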

One way to prove these results is to consider the linear functional <math>\varphi</math> that sends a polynomial

<math>P(x) = \sum_k a_k x^k </math>

to

<math>\sum_k a_k m_k.</math>

If <math>m_k</math> are the moments of some measure <math>\mu</math> supported on <math>[a,b]</math>, then evidently

<math>\varphi(P) \geq 0</math> for every polynomial <math>P</math> that is non-negative on <math>[a,b]</math>. (1)

Conversely, if (1) holds, one can apply the M. Riesz extension theorem and extend <math>\varphi</math> to a functional on the space of continuous functions with compact support <math>C_c([a,b])</math>, so that

<math>\varphi(f) \geq 0</math> for every <math>f \in C_c([a,b])</math> with <math>f \geq 0</math>. (2)

By the Riesz representation theorem, (2) holds if and only if there exists a measure <math>\mu</math> supported on <math>[a,b]</math> such that

<math> \varphi(f) = \int f \, d\mu</math>

for every <math>f \in C_c([a,b])</math>.

Thus the existence of the measure <math>\mu</math> is equivalent to (1). Using a representation theorem for positive polynomials on <math>[a,b]</math>, one can reformulate (1) as a condition on Hankel matrices.
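
In the Hamburger case, where the condition is positive semi-definiteness of the matrices <math>H_n</math> above, the criterion can be tested numerically from a finite list of moments by computing eigenvalues of the Hankel matrices. The following is a minimal sketch in Python (the function names are illustrative, and the moments of the standard normal distribution are used only as an example):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import hankel

def hankel_matrix(moments, n):
    """The (n+1) x (n+1) Hankel matrix (H_n)_{ij} = m_{i+j}; needs m_0, ..., m_{2n}."""
    return hankel(moments[:n + 1], moments[n:2 * n + 1])

def hankel_condition_holds(moments, tol=1e-10):
    """Check positive semi-definiteness of every Hankel matrix the data determines."""
    max_n = (len(moments) - 1) // 2
    return all(np.linalg.eigvalsh(hankel_matrix(moments, n)).min() >= -tol
               for n in range(max_n + 1))

# Moments of the standard normal distribution: m_{2k} = (2k - 1)!!, odd moments vanish.
print(hankel_condition_holds([1, 0, 1, 0, 3, 0, 15, 0, 105]))  # True
# No measure has these moments: m_0 * m_4 >= m_2^2 fails, so H_2 is not PSD.
print(hankel_condition_holds([1, 0, 1, 0, 0.5]))               # False
</syntaxhighlight>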

Uniqueness (or determinacy)

The uniqueness of <math>\mu</math> in the Hausdorff moment problem follows from the Weierstrass approximation theorem, which states that polynomials are dense under the uniform norm in the space of continuous functions on <math>[0,1]</math>. For the problem on an infinite interval, uniqueness is a more delicate question. There are distributions, such as the log-normal distribution, that have finite moments of every order yet share all of these moments with other, distinct distributions.
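
One classical construction perturbs the log-normal density by an oscillating factor without changing a single moment: for every <math>|\varepsilon| \le 1</math> the density <math>f(x)\bigl(1 + \varepsilon \sin(2\pi \ln x)\bigr)</math>, where <math>f</math> is the standard log-normal density, has the same moments <math>e^{n^2/2}</math>. A short numerical check of this fact (a sketch; the function name is illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

def perturbed_lognormal_moment(n, eps):
    """n-th moment of f(x) * (1 + eps * sin(2*pi*ln x)) for the standard
    log-normal density f, computed after the substitution y = ln x."""
    integrand = lambda y: (np.exp(n * y - y**2 / 2) / np.sqrt(2 * np.pi)
                           * (1 + eps * np.sin(2 * np.pi * y)))
    # The integrand is negligible outside this range for small n.
    value, _ = quad(integrand, -12.0, 16.0, limit=200)
    return value

for n in range(5):
    print(n,
          round(perturbed_lognormal_moment(n, 0.0), 4),
          round(perturbed_lognormal_moment(n, 0.7), 4),
          round(float(np.exp(n**2 / 2)), 4))   # all three columns agree
</syntaxhighlight>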

Formal solution

When the solution exists, it can be formally written using derivatives of the Dirac delta function as

<math>d\mu(x) = \rho(x)\,dx, \qquad \rho(x) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\, m_n\, \delta^{(n)}(x)\,.</math>

The expression can be derived by taking the inverse Fourier transform of its characteristic function.
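
In more detail, the characteristic function of <math>\mu</math> has the formal expansion

<math>\hat\mu(t) = \int_{-\infty}^\infty e^{itx} \,d\mu(x) = \sum_{n=0}^\infty \frac{(it)^n}{n!}\, m_n\,,</math>

and term-by-term inversion, using <math>\frac{1}{2\pi}\int_{-\infty}^\infty (it)^n e^{-itx} \,dt = (-1)^n \delta^{(n)}(x)</math> in the sense of distributions, gives the series for <math>\rho</math> above.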

Variations

An important variation is the truncated moment problem, which studies the properties of measures with fixed first <math>N</math> moments (for a finite <math>N</math>). Results on the truncated moment problem have numerous applications to extremal problems, optimisation and limit theorems in probability theory.

Probability

The moment problem has applications to probability theory. The following is commonly used:

'''Theorem''' (method of moments). Let <math>\mu_1, \mu_2, \dotsc</math> be a sequence of probability measures whose moments of every order are finite, and suppose that for each <math>k</math> the <math>k</math>-th moment of <math>\mu_n</math> converges, as <math>n \to \infty</math>, to the <math>k</math>-th moment <math>m_k</math> of a probability measure <math>\mu</math>. If <math>\mu</math> is determined by its moments, then <math>\mu_n</math> converges to <math>\mu</math> in distribution.

By checking Carleman's condition, we know that the standard normal distribution is a determinate measure; thus we have the following form of the central limit theorem:

'''Theorem''' (central limit theorem, moment form). Let <math>X_1, X_2, \dotsc</math> be independent, identically distributed random variables with mean 0, variance 1, and finite moments of every order. Then <math>\tfrac{1}{\sqrt{n}}(X_1 + \dotsb + X_n)</math> converges in distribution to the standard normal distribution.
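
The determinacy of the standard normal distribution can be verified directly. Its even moments are

<math>m_{2n} = (2n-1)!! = \frac{(2n)!}{2^n n!}\,,</math>

so by Stirling's formula <math>m_{2n}^{1/(2n)} \sim \sqrt{2n/e}</math>, and Carleman's condition

<math>\sum_{n=1}^\infty m_{2n}^{-1/(2n)} = \infty</math>

holds because the terms decay only like <math>n^{-1/2}</math>.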
