In probability theory and statistics, the cumulants <math display=inline>\kappa_n</math> of a probability distribution are a set of quantities that provide an alternative to the moments of the distribution. Any two probability distributions whose moments are identical will have identical cumulants as well, and vice versa.

The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment. But fourth and higher-order cumulants are not equal to central moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments. In particular, when two or more random variables are statistically independent, the <math display=inline>n</math>th-order cumulant of their sum is equal to the sum of their <math display=inline>n</math>th-order cumulants. In addition, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property.

Just as for moments, where joint moments are used for collections of random variables, it is possible to define joint cumulants.

Definition

The cumulants of a random variable <math display=inline>X</math> are defined using the cumulant-generating function <math display=inline>K(t)</math>, which is the natural logarithm of the moment-generating function: <math display=block>K(t)=\log\operatorname{E}\left[e^{tX}\right].</math>

The cumulants <math display=inline>\kappa_n</math> are obtained from a power series expansion of the cumulant generating function: <math display=block>K(t)=\sum_{n=1}^\infty \kappa_{n} \frac{t^{n}}{n!} =\kappa_1 \frac{t}{1!} + \kappa_2 \frac{t^2}{2!}+ \kappa_3 \frac{t^3}{3!}+ \cdots = \mu t + \sigma^2 \frac{t^2}{2} + \cdots.</math>

This expansion is a Maclaurin series, so the <math display=inline>n</math>th cumulant can be obtained by differentiating the above expansion <math display=inline>n</math> times and evaluating the result at zero:<ref>Weisstein, Eric W. "Cumulant". From MathWorld – A Wolfram Web Resource. http://mathworld.wolfram.com/Cumulant.html</ref> <math display=block>\kappa_{n} = K^{(n)}(0).</math>

If the moment-generating function does not exist, the cumulants can be defined in terms of the relationship between cumulants and moments discussed later.
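
For a distribution with a closed-form moment-generating function, the cumulants can be computed by symbolic differentiation of <math display=inline>\log M(t)</math>. A minimal sketch in Python with sympy, using the normal distribution as an arbitrary example:

<syntaxhighlight lang="python">
import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', positive=True)

# moment-generating function of a normal(mu, sigma^2) random variable
M = sp.exp(mu * t + sigma**2 * t**2 / 2)
K = sp.log(M)  # cumulant-generating function K(t)

# kappa_n = K^(n)(0); for the normal, all cumulants beyond the second vanish
kappas = [sp.diff(K, t, n).subs(t, 0) for n in range(1, 5)]
print(kappas)  # [mu, sigma**2, 0, 0]
</syntaxhighlight>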

Alternative definition of the cumulant generating function

Some writers<ref>Kendall, M. G., Stuart, A. (1969) The Advanced Theory of Statistics, Volume 1 (3rd Edition). Griffin, London. (Section 3.12)</ref><ref>Lukacs, E. (1970) Characteristic Functions (2nd Edition). Griffin, London. (Page 27)</ref> prefer to define the cumulant-generating function as the natural logarithm of the characteristic function, which is sometimes also called the second characteristic function,<ref>Lukacs, E. (1970) Characteristic Functions (2nd Edition). Griffin, London. (Section 2.4)</ref><ref>Aapo Hyvarinen, Juha Karhunen, and Erkki Oja (2001) Independent Component Analysis, John Wiley & Sons. (Section 2.7.2)</ref> <math display=block>H(t)=\log\operatorname{E} \left[e^{i t X}\right]=\sum_{n=1}^\infty \kappa_n \frac{(it)^n}{n!}=\mu it - \sigma^2 \frac{ t^2}{2} + \cdots</math>

An advantage of <math display=inline>H(t)</math> — in some sense the function <math display=inline>K(t)</math> evaluated for purely imaginary arguments — is that <math display=inline>\operatorname{E}\left[e^{itX}\right]</math> is well defined for all real values of <math display=inline>t</math> even when <math display=inline>\operatorname{E}\left[e^{tX}\right]</math> is not well defined for all real values of <math display=inline>t</math>, such as can occur when there is "too much" probability that <math display=inline>X</math> has a large magnitude. Although the function <math display=inline>H(t)</math> will be well defined, it will nonetheless mimic <math display=inline>K(t)</math> in terms of the length of its Maclaurin series, which may not extend beyond (or, rarely, even to) linear order in the argument <math display=inline>t</math>, and in particular the number of cumulants that are well defined will not change. Nevertheless, even when <math display=inline>H(t)</math> does not have a long Maclaurin series, it can be used directly in analyzing and, particularly, adding random variables. Both the Cauchy distribution (also called the Lorentzian) and, more generally, stable distributions (related to the Lévy distribution) are examples of distributions for which the power-series expansions of the generating functions have only finitely many well-defined terms.

Some basic properties

The <math display=inline>n</math>th cumulant <math display=inline>\kappa_n(X)</math> of (the distribution of) a random variable <math display=inline>X</math> enjoys the following properties:

  • If <math display=inline>n>1</math> and <math display=inline>c</math> is constant (i.e. not random) then <math display=inline> \kappa_n(X+c) = \kappa_n(X),</math> i.e. the cumulant is translation invariant. (If <math display=inline> n=1</math> then we have <math display=inline> \kappa_1(X+c) = \kappa_1(X)+c</math>.)
  • If <math display=inline>c</math> is constant (i.e. not random) then <math display=inline> \kappa_n(cX) = c^n\kappa_n(X),</math> i.e. the <math display=inline>n</math>th cumulant is homogeneous of degree <math display=inline>n</math>.
  • If random variables <math display=inline>X_1,\ldots,X_m</math> are independent then <math display="block"> \kappa_n(X_1+\cdots+X_m) = \kappa_n(X_1) + \cdots + \kappa_n(X_m)\,. </math> That is, the cumulant is cumulative — hence the name.

The cumulative property follows quickly by considering the cumulant-generating function: <math display=block>\begin{align} K_{X_1+\cdots+X_m}(t) & =\log\operatorname{E} \left[e^{t(X_1+\cdots+X_m)}\right] \\[5pt] & = \log \left(\operatorname{E} \left[e^{tX_1}\right] \cdots \operatorname{E} \left[ e^{tX_m} \right] \right) \\[5pt] & = \log\operatorname{E}\left[e^{tX_1}\right] + \cdots + \log \operatorname{E} \left[ e^{tX_m} \right] \\[5pt] &= K_{X_1}(t) + \cdots + K_{X_m}(t), \end{align}</math> so that each cumulant of a sum of independent random variables is the sum of the corresponding cumulants of the addends. That is, when the addends are statistically independent, the mean of the sum is the sum of the means, the variance of the sum is the sum of the variances, the third cumulant (which happens to be the third central moment) of the sum is the sum of the third cumulants, and so on for each order of cumulant.
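
A minimal numerical spot-check of this additivity, assuming numpy and scipy are available (scipy.stats.kstat returns unbiased estimates of the first four cumulants; the gamma shape parameters are arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import kstat  # k-statistics: unbiased cumulant estimators

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, size=500_000)  # gamma(k, 1) has kappa_3 = 2k, so 4 here
y = rng.gamma(shape=3.0, size=500_000)  # kappa_3 = 6 here

# third cumulant of a sum of independent variables = sum of third cumulants
print(kstat(x + y, 3))            # ~ 10
print(kstat(x, 3) + kstat(y, 3))  # ~ 10
</syntaxhighlight>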

A distribution with given cumulants <math display=inline>\kappa_n</math> can be approximated through an Edgeworth series.

First several cumulants as functions of the moments

All of the higher cumulants are polynomial functions of the central moments, with integer coefficients, but only in degrees 2 and 3 are the cumulants actually central moments.

  • <math display="inline"> \kappa_1(X) = \operatorname E[X] = {} </math>the mean.
  • <math display="inline"> \kappa_2(X) = \operatorname{var}(X) = \operatorname E\left[{\left(X-\operatorname E[X]\right)}^2\right] ={}</math>the variance, or second central moment.
  • <math display="inline"> \kappa_3(X) = \operatorname E\left[{\left(X - \operatorname E(X)\right)}^3\right] = {} </math>the third central moment.
  • <math display="inline"> \kappa_4(X) = \operatorname E\left[{\left(X - \operatorname E[X]\right)}^4\right] - 3\left( \operatorname E\left[{\left(X-\operatorname E[X]\right)}^2\right] \right)^2={} </math>the fourth central moment minus three times the square of the second central moment. Thus this is the first case in which cumulants are not simply moments or central moments. The central moments of degree more than 3 lack the cumulative property.
  • <math display="inline"> \kappa_5(X) = \operatorname E\left[{\left(X - \operatorname E[X]\right)}^5\right] - 10 \operatorname E\left[ {\left(X - \operatorname E[X]\right)}^3\right] \operatorname E\left[{\left(X - \operatorname E[X]\right)}^2\right].</math>
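
These identities can be verified by simulation. A minimal sketch with numpy, using the exponential distribution (for which <math display=inline>\kappa_n=(n-1)!</math>) as an arbitrary test case:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(size=2_000_000)  # Exp(1): kappa_n = (n-1)!

xc = x - x.mean()
m = {n: np.mean(xc**n) for n in (2, 3, 4, 5)}  # sample central moments

print(m[2])                     # kappa_2 ~ 1
print(m[3])                     # kappa_3 ~ 2
print(m[4] - 3 * m[2]**2)       # kappa_4 ~ 6
print(m[5] - 10 * m[3] * m[2])  # kappa_5 ~ 24
</syntaxhighlight>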

Cumulants of some discrete probability distributions

Introducing the variance-to-mean ratio <math display=block>\varepsilon=\mu^{-1}\sigma^2=\kappa_1^{-1}\kappa_2,</math> the discrete probability distributions listed below admit a unified formula for the derivative of the cumulant generating function: <math display=block>K'(t)=(1+(e^{-t}-1)\varepsilon)^{-1}\mu</math>

The second derivative is <math display=block>K''(t)=(\varepsilon-(\varepsilon-1)e^t)^{-2}\mu\varepsilon e^t,</math> confirming that the first cumulant is <math display=inline>\kappa_1=K'(0)=\mu</math> and the second cumulant is <math display=inline>\kappa_2=K''(0)=\mu\varepsilon=\sigma^2</math>.

The constant random variables <math display=inline>X=\mu</math> have <math display=inline>\varepsilon=0</math>.

The binomial distributions have <math display=inline>\varepsilon=1-p</math> so that <math display=inline>0<\varepsilon<1</math>.

The Poisson distributions have <math display=inline>\varepsilon=1</math>.

The negative binomial distributions have <math display=inline>\varepsilon=p^{-1}</math> so that <math display=inline>\varepsilon>1</math>.

Note the analogy to the classification of conic sections by eccentricity: circles <math display=inline>\varepsilon=0</math>, ellipses <math display=inline>0<\varepsilon<1</math>, parabolas <math display=inline>\varepsilon=1</math>, hyperbolas <math display=inline>\varepsilon>1</math>.
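
A symbolic spot-check of the unified formula, sketched here for the binomial case (sympy assumed; the Poisson and negative binomial cases can be checked the same way):

<syntaxhighlight lang="python">
import sympy as sp

t, n, p = sp.symbols('t n p', positive=True)

# binomial(n, p): K(t) = n log(1 - p + p e^t), mu = n p, epsilon = 1 - p
K = n * sp.log(1 - p + p * sp.exp(t))
mu, eps = n * p, 1 - p

claimed = mu / (1 + (sp.exp(-t) - 1) * eps)  # the unified K'(t) above
print(sp.simplify(sp.diff(K, t) - claimed))  # 0
</syntaxhighlight>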

Cumulants of some continuous probability distributions

Some properties of the cumulant generating function

The cumulant generating function <math display=inline>K(t)</math>, if it exists, is infinitely differentiable and convex, and passes through the origin. Its first derivative ranges monotonically in the open interval from the infimum to the supremum of the support of the probability distribution, and its second derivative is strictly positive everywhere it is defined, except for the degenerate distribution of a single point mass. The cumulant-generating function exists if and only if the tails of the distribution are majorized by an exponential decay, that is, (see Big O notation) <math display=block> \begin{align} & \exists c>0,\,\, F(x)=O(e^{cx}), x\to-\infty; \text{ and} \\[4pt] & \exists d>0,\,\, 1-F(x)=O(e^{-dx}),x\to+\infty; \end{align} </math> where <math display=inline>F</math> is the cumulative distribution function. The cumulant-generating function will have vertical asymptote(s) at the negative supremum of such <math display=inline>c</math>, if such a supremum exists, and at the supremum of such <math display=inline>d</math>, if such a supremum exists; otherwise it will be defined for all real numbers.

If the support of a random variable <math display=inline>X</math> has finite upper or lower bounds, then its cumulant-generating function <math display=inline>y=K(t)</math>, if it exists, approaches asymptote(s) whose slope is equal to the supremum or infimum of the support, <math display=block> \begin{align} y & =(t+1)\inf \operatorname{supp} X-\mu(X), \text{ and} \\[5pt] y & =(t-1)\sup \operatorname{supp}X+\mu(X), \end{align} </math> respectively, lying above both these lines everywhere. (The integrals <math display=block>\int_{-\infty}^0 \left[t\inf \operatorname{supp}X-K'(t)\right]\,dt, \qquad \int_{+\infty}^0 \left[t\sup \operatorname{supp}X-K'(t) \right]\,dt</math> yield the <math display=inline>y</math>-intercepts of these asymptotes, since <math display=inline>K(0)=0</math>.)

For a shift of the distribution by <math display=inline>c</math>, <math display=inline>K_{X+c}(t)=K_X(t)+ct.</math> For a degenerate point mass at <math display=inline>c</math>, the cumulant generating function is the straight line <math display=inline>K_c(t)=ct</math>, and more generally, <math display=inline>K_{X+Y}=K_X+K_Y</math> if and only if <math display=inline>X</math> and <math display=inline>Y</math> are independent and their cumulant generating functions exist (subindependence and the existence of second moments sufficing to imply independence<ref>Template:Cite journal</ref>).

The natural exponential family of a distribution may be realized by shifting or translating <math display=inline>K(t)</math>, and adjusting it vertically so that it always passes through the origin: if <math display=inline>f</math> is the pdf with cumulant generating function <math display=inline>K(t)=\log M(t),</math> and <math display=inline>f\mid\theta</math> is its natural exponential family, then <math display=inline>f(x\mid\theta)=\frac1{M(\theta)}e^{\theta x} f(x),</math> and <math display=inline>K(t\mid\theta)=K(t+\theta)-K(\theta).</math>
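
For example (a worked special case, assuming <math display=inline>X</math> is Poisson with mean <math display=inline>\mu</math>, so that <math display=inline>K(t)=\mu(e^t-1)</math>): <math display=block>K(t\mid\theta)=K(t+\theta)-K(\theta)=\mu\left(e^{t+\theta}-e^{\theta}\right)=\mu e^{\theta}\left(e^{t}-1\right),</math> which is again a Poisson cumulant generating function, now with mean <math display=inline>\mu e^{\theta}</math>; tilting moves the Poisson family along itself.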

If <math display=inline>K(t)</math> is finite for a range <math display=inline>t_1<\operatorname{Re}(t)<t_2</math> then if <math display=inline>t_1<0<t_2</math> then <math display=inline>K(t)</math> is analytic and infinitely differentiable for <math display=inline>t_1<\operatorname{Re}(t)<t_2</math>. Moreover for <math display=inline>t</math> real and <math display=inline>t_1<t<t_2</math>, <math display=inline>K(t)</math> is strictly convex, and <math display=inline>K'(t)</math> is strictly increasing.

Further properties of cumulants

A negative result

Given the results for the cumulants of the normal distribution, it might be hoped to find families of distributions for which <math display=inline>\kappa_m=\kappa_{m+1}=\cdots=0</math> for some <math display=inline>m>3</math>, with the lower-order cumulants (orders 3 to <math display=inline>m-1</math>) being non-zero. There are no such distributions.<ref>Lukacs, E. (1970) Characteristic Functions (2nd Edition), Griffin, London. (Theorem 7.3.5)</ref> The underlying result here is that the cumulant generating function cannot be a finite-order polynomial of degree greater than 2.

Cumulants and moments

The moment generating function is given by: <math display=block>M(t) = 1+\sum_{n=1}^\infty \frac{\mu'_n t^n}{n!} = \exp \left(\sum_{n=1}^\infty \frac{\kappa_n t^n}{n!}\right) = \exp(K(t)).</math>

So the cumulant generating function is the logarithm of the moment generating function <math display=block>K(t) = \log M(t).</math>

The first cumulant is the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.

The moments can be recovered in terms of cumulants by evaluating the <math display=inline>n</math>th derivative of <math display=inline>\exp(K(t))</math> at <math display=inline>t=0</math>, <math display=block>\mu'_n = M^{(n)}(0) = \left. \frac{\mathrm{d}^n \exp (K(t))}{\mathrm{d}t^n}\right|_{t=0}. </math>

Likewise, the cumulants can be recovered in terms of moments by evaluating the <math display=inline>n</math>th derivative of <math display=inline>\log M(t)</math> at <math display=inline>t=0</math>, <math display=block>\kappa_n = K^{(n)}(0) = \left. \frac{\mathrm{d}^n \log M(t)}{\mathrm{d}t^n} \right|_{t=0}.</math>

The explicit expression for the <math display=inline>n</math>th moment in terms of the first <math display=inline>n</math> cumulants, and vice versa, can be obtained by using Faà di Bruno's formula for higher derivatives of composite functions. In general, we have <math display=block>\mu'_n = \sum_{k=1}^n B_{n,k}(\kappa_1,\ldots,\kappa_{n-k+1}) </math> <math display=block>\kappa_n = \sum_{k=1}^n (-1)^{k-1} (k-1)! B_{n,k}(\mu'_1, \ldots, \mu'_{n-k+1}),</math> where <math display=inline>B_{n,k}</math> are incomplete (or partial) Bell polynomials.
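
These Bell-polynomial formulas can be evaluated directly; sympy's bell implements the incomplete Bell polynomials <math display=inline>B_{n,k}</math>. A minimal sketch recovering the expression for <math display=inline>\mu'_4</math> listed further below:

<syntaxhighlight lang="python">
import sympy as sp

n = 4
kappa = sp.symbols('kappa1:5')  # kappa1, ..., kappa4

# mu'_n = sum_{k=1}^{n} B_{n,k}(kappa_1, ..., kappa_{n-k+1})
mu4 = sum(sp.bell(n, k, kappa[:n - k + 1]) for k in range(1, n + 1))
print(sp.expand(mu4))
# kappa1**4 + 6*kappa1**2*kappa2 + 4*kappa1*kappa3 + 3*kappa2**2 + kappa4
</syntaxhighlight>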

In like manner, if the mean is given by <math display=inline>\mu</math>, the central moment generating function is given by <math display=block>C(t) = \operatorname{E}[e^{t(X-\mu)}] = e^{-\mu t} M(t) = \exp(K(t) - \mu t), </math> and the <math display=inline>n</math>th central moment is obtained in terms of cumulants as <math display=block>\mu_n = C^{(n)}(0) = \left. \frac{\mathrm{d}^n}{\mathrm{d}t^n} \exp (K(t) - \mu t) \right|_{t=0} = \sum_{k=1}^n B_{n,k}(0,\kappa_2,\ldots,\kappa_{n-k+1}).</math>

Also, for <math display=inline>n>1</math>, the <math display=inline>n</math>th cumulant in terms of the central moments is <math display=block> \begin{align} \kappa_n & = K^{(n)}(0) = \left. \frac{\mathrm{d}^n}{\mathrm{d}t^n} (\log C(t) + \mu t) \right|_{t=0} \\[4pt] & = \sum_{k=1}^n (-1)^{k-1} (k-1)! B_{n,k}(0,\mu_2,\ldots,\mu_{n-k+1}). \end{align} </math>

The <math display=inline>n</math>th moment <math display=inline>\mu'_n</math> is an <math display=inline>n</math>th-degree polynomial in the first <math display=inline>n</math> cumulants. The first few expressions are:

<math display=block> \begin{align} \mu'_1 = {} & \kappa_1 \\[5pt] \mu'_2 = {} & \kappa_2+\kappa_1^2 \\[5pt] \mu'_3 = {} & \kappa_3+3\kappa_2\kappa_1+\kappa_1^3 \\[5pt] \mu'_4 = {} & \kappa_4 + 4\kappa_3\kappa_1 + 3\kappa_2^2 + 6\kappa_2\kappa_1^2 + \kappa_1^4 \\[5pt] \mu'_5 = {} & \kappa_5+5\kappa_4\kappa_1+10\kappa_3\kappa_2 + 10\kappa_3\kappa_1^2 + 15\kappa_2^2\kappa_1 + 10\kappa_2\kappa_1^3 + \kappa_1^5 \\[5pt] \mu'_6 = {} & \kappa_6 + 6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 15\kappa_4\kappa_1^2 + 10\kappa_3^2 + 60\kappa_3\kappa_2\kappa_1 + 20\kappa_3\kappa_1^3 \\ & {} + 15\kappa_2^3 + 45\kappa_2^2\kappa_1^2 + 15\kappa_2\kappa_1^4 + \kappa_1^6. \end{align} </math>

The "prime" distinguishes the moments <math display=inline>\mu'_n</math> from the central moments <math display=inline>\mu_n</math>. To express the central moments as functions of the cumulants, just drop from these polynomials all terms in which <math display=inline>\kappa_1</math> appears as a factor: <math display=block> \begin{align} \mu_1 & =0 \\[4pt] \mu_2 & =\kappa_2 \\[4pt] \mu_3 & =\kappa_3 \\[4pt] \mu_4 & =\kappa_4+3\kappa_2^2 \\[4pt] \mu_5 & =\kappa_5+10\kappa_3\kappa_2 \\[4pt] \mu_6 & =\kappa_6+15\kappa_4\kappa_2+10\kappa_3^2+15\kappa_2^3. \end{align} </math>

Similarly, the <math display=inline>n</math>th cumulant <math display=inline>\kappa_n</math> is an <math display=inline>n</math>th-degree polynomial in the first <math display=inline>n</math> non-central moments. The first few expressions are: <math display=block> \begin{align} \kappa_1 = {} & \mu'_1 \\[4pt] \kappa_2 = {} & \mu'_2-{\mu'_1}^2 \\[4pt] \kappa_3 = {} & \mu'_3-3\mu'_2\mu'_1+2{\mu'_1}^3 \\[4pt] \kappa_4 = {} & \mu'_4-4\mu'_3\mu'_1-3{\mu'_2}^2+12\mu'_2{\mu'_1}^2-6{\mu'_1}^4 \\[4pt] \kappa_5 = {} & \mu'_5-5\mu'_4\mu'_1-10\mu'_3\mu'_2 + 20\mu'_3{\mu'_1}^2 + 30{\mu'_2}^2\mu'_1-60\mu'_2{\mu'_1}^3 + 24{\mu'_1}^5 \\[4pt] \kappa_6 = {} & \mu'_6-6\mu'_5\mu'_1-15\mu'_4\mu'_2+30\mu'_4{\mu'_1}^2-10{\mu'_3}^2 + 120\mu'_3\mu'_2\mu'_1 \\ & {} - 120\mu'_3{\mu'_1}^3 + 30{\mu'_2}^3 - 270{\mu'_2}^2 {\mu'_1}^2+360\mu'_2{\mu'_1}^4-120{\mu'_1}^6\,. \end{align} </math>

In general, the <math display=inline>l</math>th cumulant is given by the determinant of a matrix: <math display="block">\kappa_l = (-1)^{l+1} \left|\begin{array}{cccccccc} \mu'_1 & 1 & 0 & 0 & 0 & 0 & \ldots & 0 \\ \mu'_2 & \mu'_1 & 1 & 0 & 0 & 0 & \ldots & 0 \\ \mu'_3 & \mu'_2 & \left(\begin{array}{l} 2 \\ 1 \end{array}\right) \mu'_1 & 1 & 0 & 0 & \ldots & 0 \\ \mu'_4 & \mu'_3 & \left(\begin{array}{l} 3 \\ 1 \end{array}\right) \mu'_2 & \left(\begin{array}{l} 3 \\ 2 \end{array}\right) \mu'_1 & 1 & 0 & \ldots & 0 \\ \mu'_5 & \mu'_4 & \left(\begin{array}{l} 4 \\ 1 \end{array}\right) \mu'_3 & \left(\begin{array}{l} 4 \\ 2 \end{array}\right) \mu'_2 & \left(\begin{array}{c} 4 \\ 3 \end{array}\right) \mu'_1 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ \mu'_{l-1} & \mu'_{l-2} & \ldots & \ldots & \ldots & \ldots & \ddots & 1 \\ \mu'_l & \mu'_{l-1} & \ldots & \ldots & \ldots & \ldots & \ldots & \left(\begin{array}{l} l-1 \\ l-2 \end{array}\right) \mu'_1 \end{array}\right|</math>

To express the cumulants <math display=inline>\kappa_n</math> for <math display=inline>n>1</math> as functions of the central moments, drop from these polynomials all terms in which <math display=inline>\mu'_1</math> appears as a factor: <math display=block>\kappa_2=\mu_2\,</math> <math display=block>\kappa_3=\mu_3\,</math> <math display=block>\kappa_4=\mu_4-3{\mu_2}^2\,</math> <math display=block>\kappa_5=\mu_5-10\mu_3\mu_2\,</math> <math display=block>\kappa_6=\mu_6-15\mu_4\mu_2-10{\mu_3}^2+30{\mu_2}^3\,.</math>

The cumulants can be related to the moments by differentiating the relationship <math display=inline>\log M(t)=K(t)</math> with respect to <math display=inline>t</math>, giving <math display=inline>M'(t)=K'(t)M(t)</math>, which conveniently contains no exponentials or logarithms. Equating the coefficient of <math display=inline>t^{n-1}/(n-1)!</math> on the left and right sides and using <math display=inline>\mu'_0=1</math> gives the following formulas for <math display=inline>n\ge1</math>:<ref>Template:Cite journal</ref> <math display=block> \begin{align} \mu'_1 = {} & \kappa_1 \\[1pt] \mu'_2 = {} & \kappa_1\mu'_1+\kappa_2 \\[1pt] \mu'_3 = {} & \kappa_1\mu'_2+2\kappa_2\mu'_1+\kappa_3 \\[1pt] \mu'_4 = {} & \kappa_1\mu'_3+3\kappa_2\mu'_2+3\kappa_3\mu'_1+\kappa_4 \\[1pt] \mu'_5 = {} & \kappa_1\mu'_4+4\kappa_2\mu'_3+6\kappa_3\mu'_2+4\kappa_4\mu'_1+\kappa_5 \\[1pt] \mu'_6 = {} & \kappa_1\mu'_5+5\kappa_2\mu'_4+10\kappa_3\mu'_3+10\kappa_4\mu'_2+5\kappa_5\mu'_1+\kappa_6 \\[1pt] \mu'_n = {} & \sum_{m=1}^{n-1}{n-1 \choose m-1}\kappa_m \mu'_{n-m} + \kappa_n\,. \end{align} </math> These allow either <math display=inline>\kappa_n</math> or <math display=inline>\mu'_n</math> to be computed from the other using knowledge of the lower-order cumulants and moments. The corresponding formulas for the central moments <math display=inline>\mu_n</math> for <math display=inline>n \ge 2</math> are formed from these formulas by setting <math display=inline>\mu'_1 = \kappa_1 = 0</math> and replacing each <math display=inline>\mu'_n</math> with <math display=inline>\mu_n</math> for <math display=inline>n \ge 2</math>: <math display=block> \begin{align} \mu_2 = {} & \kappa_2 \\[1pt] \mu_3 = {} & \kappa_3 \\[1pt] \mu_n = {} & \sum_{m=2}^{n-2}{n-1 \choose m-1}\kappa_m \mu_{n-m} + \kappa_n\,. \end{align} </math>
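
A minimal sketch of this recursion in Python; the check uses the fact that every cumulant of a Poisson distribution equals its mean (here 2):

<syntaxhighlight lang="python">
from math import comb

def raw_moments(kappa):
    """Raw moments mu'_1..mu'_N from cumulants kappa[1..N] using
    mu'_n = sum_{m=1}^{n-1} C(n-1, m-1) kappa_m mu'_{n-m} + kappa_n."""
    N = len(kappa) - 1      # kappa[0] is an unused placeholder
    mu = [1.0] + [0.0] * N  # mu[0] = mu'_0 = 1
    for n in range(1, N + 1):
        mu[n] = sum(comb(n - 1, m - 1) * kappa[m] * mu[n - m]
                    for m in range(1, n)) + kappa[n]
    return mu

# Poisson(2): kappa_n = 2 for all n; first raw moments are 2, 6, 22, 94
print(raw_moments([0, 2, 2, 2, 2]))  # [1.0, 2.0, 6.0, 22.0, 94.0]
</syntaxhighlight>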

Cumulants and set-partitions

These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is <math display=block>\mu'_n=\sum_{\pi \, \in \, \Pi} \prod_{B \, \in \, \pi} \kappa_{|B|}</math> where <math display=inline>\pi</math> runs through the list of all partitions of a set of size <math display=inline>n</math> (with <math display=inline>\Pi</math> denoting the set of all such partitions), and <math display=inline>B</math> runs through the list of all blocks of the partition <math display=inline>\pi</math>.

Thus each monomial is a constant times a product of cumulants in which the sum of the indices is <math display=inline>n</math> (e.g., in the term <math display=inline>\kappa_3 \kappa_2^2 \kappa_1</math>, the sum of the indices is 3 + 2 + 2 + 1 = 8; this appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). A partition of the integer <math display=inline>n</math> corresponds to each term. The coefficient in each term is the number of partitions of a set of <math display=inline>n</math> members that collapse to that partition of the integer <math display=inline>n</math> when the members of the set become indistinguishable.

Cumulants and combinatorics

Further connection between cumulants and combinatorics can be found in the work of Gian-Carlo Rota, where links to invariant theory, symmetric functions, and binomial sequences are studied via umbral calculus.<ref>Template:Cite journal</ref>

Joint cumulants

The joint cumulant <math display=inline>\kappa</math> of several random variables <math display=inline>X_1,\ldots,X_n</math> is defined as the coefficient <math display=inline>\kappa_{1,\ldots,1}(X_1,\ldots,X_n)</math> in the Maclaurin series of the multivariate cumulant generating function (see Section 3.1 in<ref name="link.springer.com">Template:Cite journal</ref>), <math display="block">G(t_1,\dots,t_n)=\log \mathrm{E}(\mathrm{e}^{\sum_{j=1}^n t_j X_j}) =\sum_{k_1,\ldots,k_n} \kappa_{k_1,\ldots,k_n} \frac{t_1^{k_1} \cdots t_n^{k_n}}{k_1! \cdots k_n!} \,.</math> Note that <math display="block">\kappa_{k_1,\dots,k_n} = \left.\left(\frac{\mathrm{d}}{\mathrm{d} t_1}\right)^{k_1} \cdots \left(\frac{\mathrm{d}}{\mathrm{d} t_n}\right)^{k_n} G(t_1,\dots,t_n) \right|_{t_1 = \dots = t_n = 0}\,,</math> and, in particular <math display="block">\kappa(X_1,\ldots,X_n) = \left. \frac{\mathrm{d}^n}{\mathrm{d} t_1 \cdots \mathrm{d} t_n} G(t_1,\dots,t_n) \right|_{t_1 = \dots = t_n = 0}\,.</math> As with a single variable, the generating function and cumulant can instead be defined via <math display="block">H(t_1,\dots,t_n) =\log \mathrm{E}(\mathrm{e}^{\sum_{j=1}^n i t_j X_j}) =\sum_{k_1,\ldots,k_n} \kappa_{k_1,\ldots,k_n} i^{k_1+\cdots+k_n} \frac{t_1^{k_1} \cdots t_n^{k_n}}{k_1! \cdots k_n!}\,,</math> in which case <math display="block">\kappa_{k_1,\dots,k_n} = (-i)^{k_1+\cdots+k_n} \left.\left(\frac{\mathrm{d}}{\mathrm{d} t_1}\right)^{k_1} \cdots \left(\frac{\mathrm{d}}{\mathrm{d} t_n}\right)^{k_n} H(t_1,\dots,t_n) \right|_{t_1 = \dots = t_n = 0}\,,</math> and <math display="block">\kappa(X_1,\ldots,X_n) = \left. (-i)^{n} \frac{\mathrm{d}^n}{\mathrm{d} t_1 \cdots \mathrm{d} t_n} H(t_1,\dots,t_n) \right|_{t_1 = \dots = t_n = 0}\,.</math>

Repeated random variables and relation between the coefficients <math display=inline>\kappa_{k_1,\ldots,k_n}</math>

Observe that <math display=inline>\kappa_{k_1,\dots,k_n} (X_1,\ldots,X_n)</math> can also be written as <math display="block">\kappa_{k_1,\dots,k_n} = \left. \frac{\mathrm{d}^{k_1}}{\mathrm{d} t_{1,1} \cdots \mathrm{d} t_{1,k_1}} \cdots \frac{\mathrm{d}^{k_n}}{\mathrm{d} t_{n,1} \cdots \mathrm{d} t_{n,k_n}} G\left(\sum_{j=1}^{k_1}t_{1,j},\dots,\sum_{j=1}^{k_n}t_{n,j}\right) \right|_{t_{i,j}=0},</math> from which we conclude that <math display="block">\kappa_{k_1,\dots,k_n} (X_1,\ldots,X_n) = \kappa_{1,\ldots,1} ( \underbrace{X_1,\dots,X_1}_{k_1}, \ldots , \underbrace{X_n,\dots,X_n}_{k_n} ) .</math> For example <math display=block>\kappa_{2,0,1}(X,Y,Z) = \kappa(X,X,Z),\,</math> and <math display=block>\kappa_{0,0,n,0}(X,Y,Z,T) = \kappa_{n}(Z) = \kappa(\underbrace{Z,\dots,Z}_{n}) .\,</math> In particular, the last equality shows that the cumulants of a single random variable are the joint cumulants of multiple copies of that random variable.

Relation with mixed moments

The joint cumulant of random variables can be expressed as an alternating sum of products of their mixed moments (see Equation (3.2.7) in<ref name="link.springer.com"/>): <math display="block">\kappa(X_1,\dots,X_n)=\sum_\pi (|\pi|-1)!(-1)^{|\pi|-1}\prod_{B\in\pi}E\left(\prod_{i\in B}X_i\right)</math> where <math display=inline>\pi</math> runs through the list of all partitions of <math display=inline>\{1,\ldots,n\}</math>; <math display=inline>B</math> runs through the list of all blocks of the partition <math display=inline>\pi</math>; and <math display=inline>|\pi|</math> is the number of parts in the partition.

For example, <math display=block>\kappa(X)=\operatorname E(X),</math> is the expected value of <math display="inline">X</math>, <math display=block>\kappa(X,Y)=\operatorname E(XY) - \operatorname E(X) \operatorname E(Y),</math> is the covariance of <math display="inline">X</math> and <math display="inline">Y</math>, and <math display=block>\kappa(X,Y,Z)=\operatorname E(XYZ) - \operatorname E(XY) \operatorname E(Z) - \operatorname E(XZ) \operatorname E(Y) - \operatorname E(YZ) \operatorname E(X) + 2\operatorname E(X)\operatorname E(Y)\operatorname E(Z).\,</math>
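
The partition formula can be implemented directly. A minimal Monte Carlo sketch, assuming numpy and sympy are available (multiset_partitions over distinct indices enumerates set partitions):

<syntaxhighlight lang="python">
from math import factorial
import numpy as np
from sympy.utilities.iterables import multiset_partitions

def joint_cumulant(samples):
    """Estimate kappa(X_1,...,X_n) from equal-length sample arrays via
    sum over partitions pi of (|pi|-1)! (-1)^(|pi|-1) prod_B E[prod_{i in B} X_i]."""
    total = 0.0
    for pi in multiset_partitions(list(range(len(samples)))):
        term = (-1.0) ** (len(pi) - 1) * factorial(len(pi) - 1)
        for block in pi:
            term *= np.mean(np.prod([samples[i] for i in block], axis=0))
        total += term
    return total

rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
y = 0.5 * x + rng.normal(size=200_000)
print(joint_cumulant([x, y]))     # ~ cov(X, Y) = 0.5
print(joint_cumulant([x, x, y]))  # ~ 0: third cumulants of jointly normal data vanish
</syntaxhighlight>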

For zero-mean random variables <math display="inline">X_1,\ldots,X_n</math>, any mixed moment of the form <math display="inline">\prod_{B\in\pi} E\left(\prod_{i\in B} X_i\right)</math> vanishes if <math display="inline">\pi</math> is a partition of <math display="inline">\{ 1,\ldots,n \}</math> which contains a singleton <math display="inline">B=\{k\}</math>. Hence, the expression of their joint cumulant in terms of mixed moments simplifies. For example, if <math display="inline">X,Y,Z,W</math> are zero-mean random variables, we have <math display=block>\kappa(X,Y,Z) = \operatorname E(XYZ).\,</math> <math display=block>\kappa(X,Y,Z,W) = \operatorname E(XYZW) - \operatorname E(XY) \operatorname E(ZW) - \operatorname E(XZ) \operatorname E(YW) - \operatorname E(XW) \operatorname E(YZ).\,</math>

More generally, any coefficient of the Maclaurin series can also be expressed in terms of mixed moments, although there are no concise formulae. Indeed, as noted above, one can write it as a joint cumulant by repeating random variables appropriately, and then apply the above formula to express it in terms of mixed moments. For example, <math display=block>\kappa_{2,0,1}(X,Y,Z) = \kappa(X,X,Z)=\operatorname E(X^2Z) -2\operatorname E(XZ)\operatorname E(X) - \operatorname E(X^2)\operatorname E(Z) + 2\operatorname E(X)^2\operatorname E(Z).\,</math>

If some of the random variables are independent of all of the others, then any cumulant involving two (or more) independent random variables is zero.

The combinatorial meaning of the expression of mixed moments in terms of cumulants is easier to understand than that of cumulants in terms of mixed moments (see Equation (3.2.6) in<ref name="link.springer.com"/>): <math display=block>\operatorname E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i \in B). </math>

For example: <math display=block>\operatorname E(XYZ) = \kappa(X,Y,Z) + \kappa(X,Y)\kappa(Z) + \kappa(X,Z)\kappa(Y) + \kappa(Y,Z)\kappa(X) + \kappa(X)\kappa(Y)\kappa(Z).\,</math>

Further properties

Another important property of joint cumulants is multilinearity: <math display=block>\kappa(X+Y,Z_1,Z_2,\dots) = \kappa(X,Z_1,Z_2,\ldots) + \kappa(Y,Z_1,Z_2,\ldots).\,</math>

Just as the second cumulant is the variance, the joint cumulant of just two random variables is the covariance. The familiar identity <math display=block>\operatorname{var}(X+Y) = \operatorname{var}(X) + 2\operatorname{cov}(X,Y) + \operatorname{var}(Y)\,</math> generalizes to cumulants: <math display=block>\kappa_n(X+Y)=\sum_{j=0}^n {n \choose j} \kappa( \, \underbrace{X,\dots,X}_j, \underbrace{Y,\dots,Y}_{n-j}\,).\,</math>

Conditional cumulants and the law of total cumulance

Main article: Law of total cumulance

The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case <math display=inline>n=3</math>, expressed in the language of (central) moments rather than that of cumulants, says <math display=block>\mu_3(X) = \operatorname E(\mu_3(X\mid Y)) + \mu_3(\operatorname E(X\mid Y)) + 3 \operatorname{cov}(\operatorname E(X\mid Y), \operatorname{var} (X\mid Y)).</math>

In general,<ref>Template:Cite journal</ref> <math display=block>\kappa(X_1,\dots,X_n)=\sum_\pi \kappa(\kappa(X_{\pi_1}\mid Y), \dots, \kappa(X_{\pi_b}\mid Y))</math> where <math display=inline>\pi</math> runs through the list of all partitions of <math display=inline>\{1,\ldots,n\}</math>, and <math display=inline>\pi_1,\ldots,\pi_b</math> are all of the blocks of the partition <math display=inline>\pi</math>.

Conditional cumulants and the conditional expectation

For certain settings, a derivative identity can be established between the conditional cumulant and the conditional expectation. For example, suppose that <math display=inline>Y=X+Z</math> where <math display=inline>Z</math> is standard normal independent of <math display=inline>X</math>; then for any <math display=inline>X</math> it holds that<ref>Template:Cite journal</ref> <math display=block>\kappa_{n+1}(X\mid Y=y) = \frac{ \mathrm{d}^n}{ \mathrm{d} y^n}\operatorname E(X\mid Y = y), \, n \in \mathbb{N}, \, y \in \mathbb{R}.</math> The results can also be extended to the exponential family.<ref>Template:Cite journal</ref>

Relation to statistical physics

In statistical physics many extensive quantities – that is, quantities that are proportional to the volume or size of a given system – are related to cumulants of random variables. The deep connection is that in a large system an extensive quantity like the energy or number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions. The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be expected to be related to cumulants.

A system in equilibrium with a thermal bath at temperature <math display=inline>T</math> has a fluctuating internal energy <math display=inline>E</math>, which can be considered a random variable drawn from a distribution <math display=inline> E\sim p(E)</math>. The partition function of the system is <math display="block">Z(\beta) = \sum_i e^{-\beta E_i} ,</math> where <math display=inline>\beta = 1/(kT)</math>, <math display=inline>k</math> is the Boltzmann constant, and the notation <math display=inline>\langle A \rangle</math> has been used rather than <math display=inline>\operatorname{E}[A]</math> for the expectation value to avoid confusion with the energy <math display=inline>E</math>. Hence the first and second cumulant for the energy <math display=inline>E</math> give the average energy and heat capacity. <math display=block> \begin{align} \langle E \rangle_c & = \frac{\partial \log Z}{\partial (-\beta)} = \langle E \rangle \\[6pt] \langle E^2 \rangle_c & = \frac{\partial\langle E\rangle_c}{\partial (-\beta)} = k T^2 \frac{\partial \langle E\rangle}{\partial T} = kT^2C \end{align} </math>
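
As an illustration, the energy cumulants of a toy two-level system (energies 0 and <math display=inline>\epsilon</math>; a hypothetical example, not from the text) can be read off from <math display=inline>\log Z</math> symbolically; a sketch with sympy:

<syntaxhighlight lang="python">
import sympy as sp

beta, eps = sp.symbols('beta epsilon', positive=True)

Z = 1 + sp.exp(-beta * eps)  # two-level system with energies 0 and eps
logZ = sp.log(Z)             # cumulant generating function in (-beta)

E_mean = -sp.diff(logZ, beta)   # first cumulant: <E>
E_var = sp.diff(logZ, beta, 2)  # second cumulant: <E^2> - <E>^2 = k T^2 C
print(sp.simplify(E_mean))      # epsilon/(exp(beta*epsilon) + 1)
print(sp.simplify(E_var))
</syntaxhighlight>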

The Helmholtz free energy, expressed in terms of the partition function as <math display=block>F(\beta) = -\beta^{-1}\log Z(\beta), \, </math> further connects thermodynamic quantities with the cumulant generating function for the energy. Thermodynamic properties that are derivatives of the free energy, such as its internal energy, entropy, and specific heat capacity, all can be readily expressed in terms of these cumulants. Other free energies can be functions of other variables, such as the magnetic field or chemical potential <math display=inline>\mu</math>, e.g. <math display=block>\Omega=-\beta^{-1}\log(\langle \exp(-\beta E -\beta\mu N) \rangle),\,</math> where <math display=inline>N</math> is the number of particles and <math display=inline>\Omega</math> is the grand potential. Again the close relationship between the definition of the free energy and the cumulant generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of <math display=inline>E</math> and <math display=inline>N</math>.

History

The history of cumulants is discussed by Anders Hald.<ref> Hald, A. (2000) "The early history of the cumulants and the Gram–Charlier series" International Statistical Review, 68 (2): 137–153. (Reprinted in Template:Cite book)</ref><ref> Template:Cite book</ref>

Cumulants were first introduced by Thorvald N. Thiele in 1889, who called them semi-invariants.<ref>H. Cramér (1946) Mathematical Methods of Statistics, Princeton University Press, Section 15.10, p. 186.</ref> They were first called cumulants in a 1932 paper by Ronald Fisher and John Wishart.<ref>Fisher, R.A., Wishart, J. (1932) The derivation of the pattern formulae of two-way partitions from those of simpler patterns, Proceedings of the London Mathematical Society, Series 2, v. 33, pp. 195–208.</ref> Fisher was publicly reminded of Thiele's work by Neyman, who also notes previous published citations of Thiele brought to Fisher's attention.<ref>Neyman, J. (1956): 'Note on an Article by Sir Ronald Fisher,' Journal of the Royal Statistical Society, Series B (Methodological), 18, pp. 288–94.</ref> Stephen Stigler has said that the name cumulant was suggested to Fisher in a letter from Harold Hotelling. In a paper published in 1929, Fisher had called them cumulative moment functions.<ref>Template:Cite journal</ref>

The partition function in statistical physics was introduced by Josiah Willard Gibbs in 1901. The free energy is often called Gibbs free energy. In statistical mechanics, cumulants are also known as Ursell functions relating to a publication in 1927.

Cumulants in generalized settings

Formal cumulants

More generally, the cumulants of a sequence <math display=inline>\{m_n : n=1,2,3,\ldots\}</math>, not necessarily the moments of any probability distribution, are, by definition, <math display=block>1+\sum_{n=1}^\infty \frac{m_n t^n}{n!} = \exp \left( \sum_{n=1}^\infty \frac{\kappa_n t^n}{n!} \right) ,</math> where the values of <math display=inline>\kappa_n</math> for <math display=inline>n=1,2,3,\ldots</math> are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.

Bell numbers

In combinatorics, the <math display=inline>n</math>th Bell number is the number of partitions of a set of size <math display=inline>n</math>. All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1.
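
A quick check using the moment-cumulant recursion given earlier: with every formal cumulant set to 1, the resulting formal moments are the Bell numbers (a minimal sketch; sympy's bell is used only for comparison):

<syntaxhighlight lang="python">
from math import comb
from sympy import bell

N = 8
m = [1] + [0] * N  # m[0] = 1 is the formal zeroth moment
for n in range(1, N + 1):
    # moment recursion with kappa_m = 1 for all m
    m[n] = sum(comb(n - 1, k - 1) * m[n - k] for k in range(1, n + 1))

print(m[1:])                               # [1, 2, 5, 15, 52, 203, 877, 4140]
print([bell(n) for n in range(1, N + 1)])  # the Bell numbers: identical
</syntaxhighlight>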

Cumulants of a polynomial sequence of binomial type

For any sequence <math display=inline>\{\kappa_n : n=1,2,3,\ldots\}</math> of scalars in a field of characteristic zero, being considered formal cumulants, there is a corresponding sequence <math display=inline>\{\mu'_n : n=1,2,3,\ldots\}</math> of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial <math display=block> \begin{align} \mu'_6 = {} & \kappa_6 + 6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 15\kappa_4\kappa_1^2 + 10\kappa_3^2+60\kappa_3\kappa_2\kappa_1 + 20\kappa_3\kappa_1^3 \\ & {} + 15\kappa_2^3 + 45\kappa_2^2\kappa_1^2 + 15\kappa_2\kappa_1^4 + \kappa_1^6 \end{align} </math> make a new polynomial in these plus one additional variable <math display=inline>x</math>: <math display=block> \begin{align} p_6(x) = {} & \kappa_6 \,x + (6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 10\kappa_3^2)\,x^2 + (15\kappa_4\kappa_1^2 + 60\kappa_3\kappa_2\kappa_1 + 15\kappa_2^3)\,x^3 \\ & {} + (45\kappa_2^2\kappa_1^2)\,x^4+(15\kappa_2\kappa_1^4)\,x^5 +(\kappa_1^6)\,x^6, \end{align} </math> and then generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on <math display=inline>x</math>. Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell.

This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.

Free cumulants

In the above moment-cumulant formula <math display=block>\operatorname E(X_1\cdots X_n)=\sum_\pi\prod_{B\,\in\,\pi}\kappa(X_i : i\in B)</math> for joint cumulants, one sums over all partitions of the set <math display=inline>\{1,\ldots,n\}</math>. If instead, one sums only over the noncrossing partitions, then, by solving these formulae for the <math display=inline>\kappa</math> in terms of the moments, one gets free cumulants rather than conventional cumulants treated above. These free cumulants were introduced by Roland Speicher and play a central role in free probability theory.<ref>Template:Cite journal</ref><ref name="Novak-Śniady">Template:Cite journal</ref> In that theory, rather than considering independence of random variables, defined in terms of tensor products of algebras of random variables, one considers instead free independence of random variables, defined in terms of free products of algebras.<ref name="Novak-Śniady"/>

The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The free cumulants of degree higher than 2 of the Wigner semicircle distribution are zero.<ref name="Novak-Śniady"/> This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.

See also

References

Template:Reflist

External links

  • Weisstein, Eric W. "Cumulant". MathWorld. https://mathworld.wolfram.com/Cumulant.html
