Stable distribution


Template:Short description Template:Distinguish

Template:Probability distribution

In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stable if its distribution is stable. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it.<ref name="BM 1960">Template:Cite journal</ref><ref>Template:Cite book</ref>

Of the four parameters defining the family, most attention has been focused on the stability parameter, <math>\alpha</math> (see panel). Stable distributions have <math>0 < \alpha \leq 2</math>, with the upper bound corresponding to the normal distribution, and <math>\alpha=1</math> to the Cauchy distribution. The distributions have undefined variance for <math>\alpha < 2</math>, and undefined mean for <math>\alpha \leq 1</math>. The importance of stable probability distributions is that they are "attractors" for properly normed sums of independent and identically distributed (iid) random variables. The normal distribution defines a family of stable distributions. By the classical central limit theorem the properly normed sum of a set of random variables, each with finite variance, will tend toward a normal distribution as the number of variables increases. Without the finite variance assumption, the limit may be a stable distribution that is not normal. Mandelbrot referred to such distributions as "stable Paretian distributions",<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref> after Vilfredo Pareto. In particular, he referred to those maximally skewed in the positive direction with <math>1 < \alpha < 2</math> as "Pareto–Lévy distributions",<ref name="BM 1960"/> which he regarded as better descriptions of stock and commodity prices than normal distributions.<ref name="BM 1963">Template:Cite journal</ref>

Definition

A non-degenerate distribution is a stable distribution if it satisfies the following property:

Let <math>X_1</math> and <math>X_2</math> be independent realizations of a random variable <math>X</math>. Then <math>X</math> is said to be stable if for any constants <math>a > 0</math> and <math>b > 0</math> the random variable <math>aX_1 + bX_2</math> has the same distribution as <math>cX + d</math> for some constants <math>c > 0</math> and <math>d</math>. The distribution is said to be strictly stable if this holds with <math>d = 0</math>.

Since the normal distribution, the Cauchy distribution, and the Lévy distribution all have the above property, it follows that they are special cases of stable distributions.

Such distributions form a four-parameter family of continuous probability distributions parametrized by location and scale parameters μ and c, respectively, and two shape parameters <math>\beta</math> and <math>\alpha</math>, roughly corresponding to measures of asymmetry and concentration, respectively (see the figures).

The characteristic function <math>\varphi(t) </math> of any probability distribution is the Fourier transform of its probability density function <math>f(x) </math>: <math display="block"> \varphi(t) = \int_{- \infty}^\infty f(x)e^{ ixt}\,dx. </math> The density function is therefore the inverse Fourier transform of the characteristic function:<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> <math display="block"> f(x) = \frac{1}{2\pi} \int_{- \infty}^\infty \varphi(t)e^{- ixt}\,dt. </math>

Although the probability density function for a general stable distribution cannot be written analytically, the general characteristic function can be expressed analytically. A random variable X is called stable if its characteristic function can be written as<ref name=":0" /><ref name=":1">Template:Cite book</ref> <math display="block"> \varphi(t; \alpha, \beta, c, \mu) = \exp \left ( i t \mu - |c t|^\alpha \left ( 1 - i \beta \sgn(t) \Phi \right ) \right ) </math> where <math>\sgn(t)</math> is the sign of <math>t</math> and <math display="block"> \Phi = \begin{cases} \tan \left (\frac{\pi \alpha}{2} \right) & \alpha \neq 1 \\ - \frac{2}{\pi}\log|t| & \alpha = 1 \end{cases} </math> Here <math>\mu \in \mathbb{R}</math> is a shift parameter and <math>\beta \in [-1,1]</math>, called the skewness parameter, is a measure of asymmetry. Notice that in this context the usual skewness is not well defined: for <math>\alpha < 2</math> the distribution does not admit second or higher moments, and the usual definition of skewness relies on the third central moment.
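
For illustration, the characteristic function is straightforward to evaluate numerically. The following minimal Python/NumPy sketch (with arbitrary parameter values, not taken from the article's sources) also shows that the skewness parameter drops out at <math>\alpha = 2</math>:

<syntaxhighlight lang="python">
import numpy as np

def stable_cf(t, alpha, beta, c, mu):
    """Characteristic function of a stable law in the parametrization above ("1" form)."""
    t = np.asarray(t, dtype=float)
    if alpha != 1:
        Phi = np.tan(np.pi * alpha / 2)
    else:
        safe_t = np.where(t == 0, 1.0, np.abs(t))   # the |ct|^alpha factor vanishes at t = 0 anyway
        Phi = -2 / np.pi * np.log(safe_t)
    return np.exp(1j * t * mu
                  - np.abs(c * t) ** alpha * (1 - 1j * beta * np.sign(t) * Phi))

t = np.linspace(-5, 5, 11)
print(stable_cf(t, alpha=1.5, beta=0.5, c=1.0, mu=0.0))
# At alpha = 2 the beta term drops out (tan(pi) = 0) and phi(t) = exp(-(c t)^2):
print(np.allclose(stable_cf(t, 2.0, 0.7, 1.0, 0.0), np.exp(-t ** 2)))
</syntaxhighlight>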

The reason this gives a stable distribution is that the characteristic function for the sum of two independent random variables equals the product of the two corresponding characteristic functions. Adding two random variables from a stable distribution gives something with the same values of <math>\alpha</math> and <math>\beta</math>, but possibly different values of μ and c.

Not every function is the characteristic function of a legitimate probability distribution (that is, one whose cumulative distribution function is real and goes from 0 to 1 without decreasing), but the characteristic functions given above are legitimate so long as the parameters are in their ranges. The value of the characteristic function at some value t is the complex conjugate of its value at −t, as it must be for the probability density function to be real.

In the simplest case <math>\beta = 0</math>, the characteristic function is just a stretched exponential function; the distribution is symmetric about μ and is referred to as a (Lévy) symmetric alpha-stable distribution, often abbreviated SαS.

When <math>\alpha < 1</math> and <math>\beta = 1</math>, the distribution is supported on [μ, ∞).

The parameter c > 0 is a scale factor which is a measure of the width of the distribution while <math>\alpha</math> is the exponent or index of the distribution and specifies the asymptotic behavior of the distribution.

Parametrizations

The parametrization of stable distributions is not unique. Nolan <ref name="Nolan2020" /> tabulates 11 parametrizations seen in the literature and gives conversion formulas. The two most commonly used parametrizations are the one above (Nolan's "1") and the one immediately below (Nolan's "0").

The parametrization above is easiest to use for theoretical work, but its probability density is not continuous in the parameters at <math>\alpha =1</math>.<ref name="Nolan 1997"/> A continuous parametrization, better for numerical work, is<ref name=":0" /> <math display="block"> \varphi(t; \alpha, \beta, \gamma, \delta) = \exp \left (i t \delta - |\gamma t|^\alpha \left (1 - i \beta \sgn(t) \Phi \right ) \right ) </math> where: <math display="block"> \Phi = \begin{cases} \left (1 - |\gamma t|^{1 - \alpha} \right ) \tan \left (\tfrac{\pi \alpha}{2} \right ) & \alpha \neq 1 \\ - \frac{2}{\pi} \log|\gamma t| & \alpha = 1 \end{cases} </math> With this choice of <math>\Phi</math> the characteristic function is continuous in <math>\alpha</math> at <math>\alpha = 1</math>.

The ranges of <math>\alpha</math> and <math>\beta</math> are the same as before, γ (like c) should be positive, and δ (like μ) should be real.

In either parametrization one can make a linear transformation of the random variable to get a random variable whose density is <math> f(y; \alpha, \beta, 1, 0) </math>. In the first parametrization, this is done by defining the new variable: <math display="block"> y = \begin{cases} \frac{x - \mu}{c} & \alpha \neq 1 \\ \frac{x - \mu}{c} - \beta\frac 2\pi\ln c & \alpha = 1 \end{cases} </math>

For the second parametrization, simply use <math display="block"> y = \frac{x-\delta}\gamma </math> independent of <math>\alpha</math>. In the first parametrization, if the mean exists (that is, <math>\alpha > 1</math>) then it is equal to μ, whereas in the second parametrization when the mean exists it is equal to <math> \delta - \beta \gamma \tan \left (\tfrac{\pi\alpha}{2} \right).</math>
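
The location conversion between the two parametrizations follows from the mean relation just stated. The following Python helper is a minimal sketch under the assumption, as in Nolan's conventions, that the scale is shared between the two forms (γ = c); the function name is illustrative, and the α = 1 branch in particular should be checked against the cited references:

<syntaxhighlight lang="python">
import numpy as np

def s0_to_s1_location(alpha, beta, gamma, delta):
    """Location mu of the first ("1") parametrization for a law given in the
    continuous ("0") parametrization; alpha, beta and the scale (gamma = c)
    are shared between the two forms.  Sketch based on the mean relation
    quoted above; the alpha = 1 branch follows Nolan's conventions."""
    if alpha != 1:
        return delta - beta * gamma * np.tan(np.pi * alpha / 2)
    return delta - beta * (2 / np.pi) * gamma * np.log(gamma)

# For alpha > 1 the mean exists; it equals mu in the "1" form and
# delta - beta*gamma*tan(pi*alpha/2) in the "0" form:
print(s0_to_s1_location(alpha=1.5, beta=0.7, gamma=2.0, delta=1.0))
</syntaxhighlight>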

The distribution

A stable distribution is therefore specified by the above four parameters. It can be shown that any non-degenerate stable distribution has a smooth (infinitely differentiable) density function.<ref name=":0" /> If <math> f(x; \alpha, \beta, c, \mu) </math> denotes the density of X and Y is a weighted sum of centered independent copies of X: <math display="block"> Y = \sum_{i = 1}^N k_i (X_i - \mu)</math> then Y has the density <math> \tfrac{1}{s} f(y / s; \alpha, \beta, c, 0) </math> with <math display="block"> s = \left(\sum_{i = 1}^N |k_i|^\alpha \right )^{\frac{1}{\alpha}} </math>
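
For α ≠ 1 and positive weights this scaling property can be checked directly on the characteristic function: the product of the individual characteristic functions collapses to a single stable characteristic function with scale c·s. A minimal NumPy sketch (negative weights would additionally flip the sign of β for the corresponding term, so they are omitted here):

<syntaxhighlight lang="python">
import numpy as np

def stable_cf(t, alpha, beta, c):
    # centered (mu = 0) stable characteristic function, "1" parametrization, alpha != 1
    Phi = np.tan(np.pi * alpha / 2)
    return np.exp(-np.abs(c * t) ** alpha * (1 - 1j * beta * np.sign(t) * Phi))

alpha, beta, c = 1.7, 0.4, 2.0
k = np.array([0.5, 1.0, 3.0])                      # positive weights for simplicity
s = (np.abs(k) ** alpha).sum() ** (1 / alpha)

t = np.linspace(-4, 4, 201)
cf_sum = np.prod([stable_cf(ki * t, alpha, beta, c) for ki in k], axis=0)
cf_scaled = stable_cf(t, alpha, beta, c * s)       # one stable law with scale c*s
print(np.allclose(cf_sum, cf_scaled))              # True
</syntaxhighlight>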

The asymptotic behavior is described, for <math>\alpha < 2</math>, by:<ref name=":0" /> <math display="block"> f(x) \sim \frac{1}{|x|^{1 + \alpha}} \left (c^\alpha (1 + \sgn(x) \beta) \sin \left (\frac{\pi \alpha}{2} \right ) \frac{\Gamma(\alpha + 1) }{\pi} \right ) </math> where Γ is the Gamma function (except that when <math>\alpha \geq 1</math> and <math>\beta = \pm 1</math>, the tail does not vanish to the left or right, resp., of μ, although the above expression is 0). This "heavy tail" behavior causes the variance of stable distributions to be infinite for all <math>\alpha <2</math>. This property is illustrated in the log–log plots below.

When <math>\alpha = 2</math>, the distribution is Gaussian (see below), with tails asymptotic to <math>\exp(-x^2/4c^2)/(2c\sqrt{\pi})</math>.
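
The power-law tail can be checked numerically against the asymptotic expression above, for example with SciPy's levy_stable. This is a sketch for the symmetric case with c = 1 and μ = 0, where SciPy's default "S1" parametrization coincides with the characteristic function given earlier:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gamma
from scipy.stats import levy_stable

alpha, beta = 1.5, 0.0          # symmetric case, scale c = 1, shift mu = 0
x = np.array([20.0, 50.0, 100.0])

asymptotic = (gamma(alpha + 1) / np.pi * np.sin(np.pi * alpha / 2)
              * (1 + np.sign(x) * beta) / np.abs(x) ** (1 + alpha))
exact = levy_stable.pdf(x, alpha, beta)

print(np.column_stack([x, exact, asymptotic, exact / asymptotic]))
# the ratio in the last column approaches 1 as x grows
</syntaxhighlight>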

One-sided stable distribution and stable count distribution

When <math>\alpha < 1</math> and <math>\beta = 1</math>, the distribution is supported on [μ, ∞). This family is called one-sided stable distribution.<ref name="PhysRevLett 1007">Template:Cite journal</ref> Its standard distribution (μ = 0) is defined as

<math>L_\alpha(x) = f\left(x;\alpha,1,\cos\left(\frac{\alpha\pi}{2}\right)^{1/\alpha},0\right)</math>, where <math>\alpha < 1.</math>

Let <math>q = \exp(-i\alpha\pi/2)</math>; then its characteristic function is <math> \varphi(t;\alpha) = \exp\left (- q|t|^\alpha \right ) </math>. Thus the integral form of its PDF is (note: <math>\operatorname{Im}(q)<0</math>) <math display="block"> \begin{align} L_\alpha(x) & = \frac{1}{\pi}\Re\left[ \int_{-\infty}^\infty e^{itx}e^{-q|t|^\alpha}\,dt\right] \\ & = \frac{2}{\pi} \int_0^\infty e^{-\operatorname{Re}(q)\,t^\alpha} \sin(tx)\sin(-\operatorname{Im}(q)\,t^\alpha) \,dt, \text{ or } \\ & = \frac{2}{\pi} \int_0^\infty e^{-\operatorname{Re}(q)\,t^\alpha} \cos(tx)\cos(\operatorname{Im}(q)\,t^\alpha) \,dt . \end{align}</math>

The double-sine integral is more effective for very small <math> x</math>.
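
As a numerical illustration (a sketch assuming SciPy), the cosine integral above can be evaluated with an oscillatory quadrature rule and, for α = 1/2, compared against the Lévy distribution with scale cos(π/4)² = 1/2 to which the one-sided stable law then reduces:

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad
from scipy.stats import levy

def L_onesided(x, alpha):
    """One-sided stable density L_alpha(x) from the cosine integral above (x > 0)."""
    q_re, q_im = np.cos(np.pi * alpha / 2), -np.sin(np.pi * alpha / 2)
    # the oscillatory factor cos(t*x) is handled by QUADPACK's weighted rule;
    # the upper limit 2000 is a truncation where the exponential factor is negligible
    val, _ = quad(lambda t: np.exp(-q_re * t ** alpha) * np.cos(q_im * t ** alpha),
                  0, 2000, weight='cos', wvar=x, limit=500)
    return 2 / np.pi * val

# For alpha = 1/2 the one-sided stable law is a Levy distribution with scale 1/2:
for x in [0.5, 1.0, 3.0]:
    print(x, L_onesided(x, 0.5), levy.pdf(x, scale=0.5))
</syntaxhighlight>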

Consider the Lévy sum <math display="inline">Y = \sum_{i=1}^N X_i</math> where <math display="inline">X_i \sim L_\alpha(x)</math>, then Y has the density <math display="inline">\frac{1}{\nu} L_\alpha \left(\frac{x}{\nu}\right)</math> where <math display="inline">\nu = N^{1/\alpha}</math>. Set <math display="inline">x = 1</math> to arrive at the stable count distribution.<ref name=":4" /> Its standard distribution is defined as

<math>\mathfrak{N}_\alpha(\nu)=\frac \alpha {\Gamma\left(\frac{1}{\alpha}\right)} \frac1\nu L_\alpha \left(\frac{1}{\nu} \right), \text{ where } \nu > 0 \text{ and } \alpha < 1.</math>

The stable count distribution is the conjugate prior of the one-sided stable distribution. Its location-scale family is defined as

<math>\mathfrak{N}_\alpha(\nu;\nu_0,\theta) = \frac \alpha {\Gamma(\frac{1}{\alpha})} \frac{1}{\nu-\nu_0} L_\alpha \left(\frac{\theta}{\nu-\nu_0}\right), \text{ where } \nu > \nu_0</math>, <math>\theta > 0, \text{ and } \alpha < 1.</math>

It is also a one-sided distribution supported on <math>[\nu_0,\infty)</math>. The location parameter <math>\nu_0</math> is the cut-off location, while <math>\theta</math> defines its scale.

When <math display="inline">\alpha = \frac{1}{2}</math>, <math display="inline">L_{\frac{1}{2}}(x)</math> is the Lévy distribution which is an inverse gamma distribution. Thus <math>\mathfrak{N}_{\frac{1}{2}}(\nu; \nu_0, \theta)</math> is a shifted gamma distribution of shape 3/2 and scale <math>4\theta</math>,

<math>\mathfrak{N}_{\frac{1}{2}}(\nu;\nu_0,\theta) = \frac{1}{4\sqrt{\pi}\,\theta^{3/2}} (\nu-\nu_0)^{1/2} e^{-\frac{\nu-\nu_0}{4\theta}}, \text{ where } \nu > \nu_0, \qquad \theta > 0.</math>

Its mean is <math>\nu_0 + 6\theta</math> and its standard deviation is <math>\sqrt{24}\theta</math>. It is hypothesized that VIX is distributed like <math display="inline">\mathfrak{N}_{\frac{1}{2}}(\nu;\nu_0,\theta)</math> with <math>\nu_0 = 10.4</math> and <math>\theta = 1.6</math> (See Section 7 of <ref name=":4" />). Thus the stable count distribution is the first-order marginal distribution of a volatility process. In this context, <math>\nu_0</math> is called the "floor volatility".
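
The α = 1/2 case is easy to verify numerically: the closed-form density above coincides with a shifted gamma distribution of shape 3/2 and scale 4θ, and the stated mean and standard deviation follow. A minimal SciPy sketch using the illustrative values quoted above:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import gamma

def stable_count_half(nu, nu0=0.0, theta=1.0):
    """Closed-form stable count density for alpha = 1/2 (formula above)."""
    z = np.maximum(nu - nu0, 0.0)
    return np.sqrt(z) * np.exp(-z / (4 * theta)) / (4 * np.sqrt(np.pi) * theta ** 1.5)

nu0, theta = 10.4, 1.6                    # illustrative "floor volatility" values quoted above
nu = np.linspace(11, 40, 6)
print(stable_count_half(nu, nu0, theta))
print(gamma.pdf(nu, 1.5, loc=nu0, scale=4 * theta))   # shifted gamma(3/2, 4*theta): same values
print(gamma.mean(1.5, loc=nu0, scale=4 * theta),      # nu0 + 6*theta = 20.0
      gamma.std(1.5, loc=nu0, scale=4 * theta))       # sqrt(24)*theta, approximately 7.84
</syntaxhighlight>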

Another approach to derive the stable count distribution is to use the Laplace transform of the one-sided stable distribution (Section 2.4 of <ref name=":4" />),

<math>\int_0^\infty e^{-z x} L_\alpha(x) \, dx = e^{-z^\alpha}, \text{ where } \alpha<1. </math>

Let <math>x = 1 / \nu</math>, and one can decompose the integral on the left hand side as a product distribution of a standard Laplace distribution and a standard stable count distribution,

<math>\int_0^\infty \frac{1}{\nu} \left ( \frac{1}{2} e^{-\frac{|z|}{\nu} }\right ) \left (\frac{\alpha}{\Gamma(\frac{1}{\alpha})} \frac{1}{\nu} L_\alpha \left(\frac{1}{\nu}\right) \right ) \, d\nu = \frac{1}{2} \frac{\alpha}{\Gamma(\frac{1}{\alpha})} e^{-|z|^\alpha}, \text{ where } \alpha<1. </math>

This is called the "lambda decomposition" (see Section 4 of <ref name=":4" />), since the right-hand side was named the "symmetric lambda distribution" in Lihn's earlier works. However, it has several more popular names, such as the "exponential power distribution" or the "generalized error/normal distribution", often referring to the case <math>\alpha > 1</math>.

The n-th moment of <math>\mathfrak{N}_\alpha(\nu)</math> is the <math>-(n + 1)</math>-th moment of <math>L_\alpha(x)</math>, and all positive moments are finite.

Properties

Stable distributions are closed under convolution for a fixed value of <math>\alpha</math>. Since convolution is equivalent to multiplication of the Fourier-transformed function, it follows that the product of two stable characteristic functions with the same <math>\alpha</math> will yield another such characteristic function. The product of two stable characteristic functions is given by: <math display="block">\exp\left (it\mu_1+it\mu_2 - |c_1 t|^\alpha - |c_2 t|^\alpha +i\beta_1|c_1 t|^\alpha\sgn(t)\Phi + i\beta_2|c_2 t|^\alpha\sgn(t)\Phi \right )</math>

Since <math>\Phi</math> is not a function of the μ, c or <math>\beta</math> variables, it follows that these parameters for the convolved function are given by: <math display="block">\begin{align} \mu &=\mu_1+\mu_2 \\ c &= \left (c_1^\alpha+c_2^\alpha \right )^{\frac{1}{\alpha}} \\[6pt] \beta &= \frac{\beta_1 c_1^\alpha+\beta_2c_2^\alpha}{c_1^\alpha+c_2^\alpha} \end{align}</math>

In each case, it can be shown that the resulting parameters lie within the required intervals for a stable distribution.
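
The parameter combination rules translate directly into a small helper (a minimal Python sketch; the function name and example values are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def combine_stable(mu1, c1, beta1, mu2, c2, beta2, alpha):
    """Parameters of the sum of two independent stable variables sharing alpha
    (formulas above)."""
    mu = mu1 + mu2
    c = (c1 ** alpha + c2 ** alpha) ** (1 / alpha)
    beta = (beta1 * c1 ** alpha + beta2 * c2 ** alpha) / (c1 ** alpha + c2 ** alpha)
    return mu, c, beta

# Two maximally but oppositely skewed terms give a symmetric sum:
print(combine_stable(0.0, 1.0, 1.0, 0.0, 1.0, -1.0, alpha=1.5))   # (0.0, 2**(1/1.5), 0.0)
</syntaxhighlight>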

The Generalized Central Limit Theorem

The Generalized Central Limit Theorem (GCLT) was an effort of multiple mathematicians (Bernstein, Lindeberg, Lévy, Feller, Kolmogorov, and others) over the period from 1920 to 1937.<ref>Template:Cite journal</ref> The first published complete proof (in French) of the GCLT was in 1937 by Paul Lévy.<ref>Template:Cite book</ref> An English language version of the complete proof of the GCLT is available in the translation of Gnedenko and Kolmogorov's 1954 book.<ref>Template:Cite book</ref>

The statement of the GCLT is as follows:<ref name ="Nolan2020">Template:Cite book</ref>

A non-degenerate random variable <math>Z</math> is α-stable for some <math>0 < \alpha \le 2</math> if and only if there is an independent, identically distributed sequence of random variables <math>X_1, X_2, X_3, \ldots</math> and constants <math>a_n > 0</math>, <math>b_n \in \mathbb{R}</math> with
<math display="block">a_n (X_1 + \cdots + X_n) - b_n \to Z.</math>
Here → means the sequence of random variable sums converges in distribution; i.e., the corresponding distributions satisfy <math>F_n(y) \to F(y)</math> at all continuity points of <math>F</math>.

In other words, if sums of independent, identically distributed random variables converge in distribution to some Z, then Z must be a stable distribution.
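
A Monte Carlo sketch can make the statement concrete. The summands below are a hypothetical symmetric Pareto-type variable with tail index α = 1.5, so the variance is infinite and the classical central limit theorem does not apply; the quantiles of the normalized sums nevertheless stabilize as n grows, and by the GCLT the common limit is an α-stable law:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5    # tail index in (0, 2): infinite variance, so the classical CLT does not apply

def heavy_tailed(size):
    """Symmetric Pareto-type variable with P(|X| > x) = x**(-alpha) for x >= 1
    (a hypothetical choice of iid summands in the domain of attraction of a stable law)."""
    u = 1.0 - rng.random(size)                       # uniform in (0, 1]
    return rng.choice([-1.0, 1.0], size) * u ** (-1 / alpha)

def normalized_sum(n, reps=2000):
    x = heavy_tailed((reps, n))
    return x.sum(axis=1) / n ** (1 / alpha)          # a_n = n**(-1/alpha), b_n = 0 by symmetry

q = [0.1, 0.3, 0.5, 0.7, 0.9]
print(np.quantile(normalized_sum(500), q))
print(np.quantile(normalized_sum(5000), q))
# The two rows of quantiles are close to each other: the law of the normalized sums
# settles down to a fixed limit, which by the GCLT is a (symmetric) 1.5-stable
# distribution with some scale set by the tail constant of the summands.
</syntaxhighlight>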

Special cases

File:Levy LdistributionPDF.png
Log-log plot of symmetric centered stable distribution PDFs showing the power law behavior for large x. The power law behavior is evidenced by the straight-line appearance of the PDF for large x, with the slope equal to <math>-(\alpha+1)</math>. (The only exception is for <math>\alpha = 2</math>, in black, which is a normal distribution.)
File:Levyskew LdistributionPDF.png
Log-log plot of skewed centered stable distribution PDFs showing the power law behavior for large x. Again the slope of the linear portions is equal to <math>-(\alpha+1)</math>

There is no general analytic solution for the form of f(x). There are, however, three special cases which can be expressed in terms of elementary functions as can be seen by inspection of the characteristic function:<ref name=":0" /><ref name=":1" /><ref>Template:Cite book</ref>

  • For <math>\alpha = 2</math> the distribution reduces to a Gaussian distribution with variance σ2 = 2c2 and mean μ; the skewness parameter <math>\beta</math> has no effect.
  • For <math>\alpha = 1</math> and <math>\beta = 0</math> the distribution reduces to a Cauchy distribution with scale parameter c and shift parameter μ.
  • For <math>\alpha = 1/2</math> and <math>\beta = 1</math> the distribution reduces to a Lévy distribution with scale parameter c and shift parameter μ.

Note that the above three distributions are also connected, in the following way: A standard Cauchy random variable can be viewed as a mixture of Gaussian random variables (all with mean zero), with the variance being drawn from a standard Lévy distribution. And in fact this is a special case of a more general theorem (See p. 59 of <ref name=":2">Template:Cite book</ref>) which allows any symmetric alpha-stable distribution to be viewed in this way (with the alpha parameter of the mixture distribution equal to twice the alpha parameter of the mixing distribution—and the beta parameter of the mixing distribution always equal to one).
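
The mixture representation is easy to check by simulation (a minimal sketch using SciPy's standard Lévy and Cauchy distributions):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import levy, cauchy

rng = np.random.default_rng(1)
n = 200_000

v = levy.rvs(size=n, random_state=rng)       # variances drawn from a standard Levy distribution
x = rng.standard_normal(n) * np.sqrt(v)      # zero-mean Gaussians with Levy-mixed variance

q = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(x, q))                     # empirical quantiles of the mixture
print(cauchy.ppf(q))                         # quantiles of the standard Cauchy distribution
</syntaxhighlight>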

A general closed form expression for stable PDFs with rational values of <math>\alpha</math> is available in terms of Meijer G-functions.<ref>Template:Cite journal</ref> Fox H-Functions can also be used to express the stable probability density functions. For simple rational numbers, the closed form expression is often in terms of less complicated special functions. Several closed form expressions having rather simple expressions in terms of special functions are available. In the table below, PDFs expressible by elementary functions are indicated by an E and those that are expressible by special functions are indicated by an s.<ref name=":2" />

{| class="wikitable"
! <math>\alpha</math> !! 1/3 !! 1/2 !! 2/3 !! 1 !! 4/3 !! 3/2 !! 2
|-
! <math>\beta = 0</math>
| s || s || s || E || s || s || E
|-
! <math>\beta = 1</math>
| s || E || s || L || s || ||
|}

Some of the special cases are known by particular names:

  • For <math>\alpha = 1</math> and <math>\beta = 1</math>, the distribution is a Landau distribution (L) which has a specific usage in physics under this name.
  • For <math>\alpha = 3/2</math> and <math>\beta = 0</math> the distribution reduces to a Holtsmark distribution with scale parameter c and shift parameter μ.

Also, in the limit as c approaches zero or as α approaches zero the distribution will approach a Dirac delta function <math>\delta(x - \mu)</math>.

Series representation

The stable distribution can be restated as the real part of a simpler integral:<ref name=":3">Template:Cite journal</ref> <math display="block">f(x;\alpha,\beta,c,\mu)=\frac{1}{\pi}\Re\left[ \int_0^\infty e^{it(x-\mu)}e^{-(ct)^\alpha(1-i\beta\Phi)}\,dt\right].</math>

Expressing the second exponential as a Taylor series, this leads to: <math display="block">f(x;\alpha,\beta,c,\mu)=\frac{1}{\pi}\Re\left[ \int_0^\infty e^{it(x-\mu)}\sum_{n=0}^\infty\frac{(-qt^\alpha)^n}{n!}\,dt\right]</math> where <math>q=c^\alpha(1-i\beta\Phi)</math>. Reversing the order of integration and summation, and carrying out the integration yields: <math display="block">f(x;\alpha,\beta,c,\mu)=\frac{1}{\pi}\Re\left[ \sum_{n=1}^\infty\frac{(-q)^n}{n!}\left(\frac{i}{x-\mu}\right)^{\alpha n+1}\Gamma(\alpha n+1)\right]</math> which will be valid for x ≠ μ and will converge for appropriate values of the parameters. (Note that the n = 0 term which yields a delta function in x − μ has therefore been dropped.) Expressing the first exponential as a series will yield another series in positive powers of x − μ which is generally less useful.

For one-sided stable distribution, the above series expansion needs to be modified, since <math>q=\exp(-i\alpha\pi/2)</math> and <math>q i^{\alpha}=1</math>. There is no real part to sum. Instead, the integral of the characteristic function should be carried out on the negative axis, which yields:<ref>Template:Cite journal</ref><ref name="PhysRevLett 1007"/> <math display="block">\begin{align} L_\alpha(x) & = \frac{1}{\pi}\Re\left[ \sum_{n=1}^\infty\frac{(-q)^n}{n!}\left(\frac{-i}{x}\right)^{\alpha n+1}\Gamma(\alpha n+1)\right] \\ & = \frac{1}{\pi}\sum_{n=1}^\infty\frac{-\sin(n(\alpha+1)\pi)}{n!}\left(\frac{1}{x}\right)^{\alpha n+1}\Gamma(\alpha n+1) \end{align} </math>
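
As a concrete check of the one-sided series (a sketch assuming SciPy): for α = 1/2 the standard one-sided stable law is a Lévy distribution with scale cos(π/4)² = 1/2, and the truncated series reproduces its density for moderate x:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gamma
from scipy.stats import levy

def L_series(x, alpha, nmax=80):
    """Truncated version of the one-sided series above (x > 0)."""
    n = np.arange(1, nmax + 1)
    terms = (-np.sin(n * (alpha + 1) * np.pi) / gamma(n + 1)
             * x ** (-(alpha * n + 1)) * gamma(alpha * n + 1))
    return terms.sum() / np.pi

# For alpha = 1/2 the one-sided stable law is a Levy distribution with scale 1/2:
for x in [1.0, 2.0, 5.0]:
    print(x, L_series(x, 0.5), levy.pdf(x, scale=0.5))
</syntaxhighlight>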

Parameter estimation

In addition to the existing tests for normality and subsequent parameter estimation, a general method which relies on the quantiles was developed by McCulloch and works for both symmetric and skew stable distributions and stability parameter <math>0.5 < \alpha \leq 2</math>.<ref>Template:Cite journal</ref>

Simulation of stable variates

There are no analytic expressions for the inverse <math>F^{-1}(x)</math> nor the CDF <math>F(x)</math> itself, so the inversion method cannot be used to generate stable-distributed variates.<ref name ="Nolan 1997">Template:Cite journal</ref><ref name=":4">Template:Cite journal</ref> Other standard approaches like the rejection method would require tedious computations. An elegant and efficient solution was proposed by Chambers, Mallows and Stuck (CMS),<ref>Template:Cite journal</ref> who noticed that a certain integral formula<ref>Template:Cite book</ref> yielded the following algorithm:<ref>Template:Cite book</ref>

  • generate a random variable <math>U</math> uniformly distributed on <math>\left (-\tfrac{\pi}{2},\tfrac{\pi}{2} \right )</math> and an independent exponential random variable <math>W</math> with mean 1;
  • for <math>\alpha\ne 1</math> compute: <math display="block">X = \left (1+\zeta^2 \right )^\frac{1}{2\alpha} \frac{\sin ( \alpha(U+\xi)) }{ (\cos(U))^{\frac{1}{\alpha}}} \left (\frac{\cos (U - \alpha(U+\xi)) }{W} \right )^\frac{1-\alpha}{\alpha},</math>
  • for <math>\alpha=1</math> compute: <math display="block">X = \frac{1}{\xi}\left\{\left(\frac{\pi}{2}+\beta U \right)\tan U- \beta\log\left(\frac{\frac{\pi}{2} W\cos U}{\frac{\pi}{2}+\beta U}\right)\right\},</math> where <math display="block">\zeta = -\beta\tan\frac{\pi\alpha}{2}, \qquad \xi =\begin{cases} \frac{1}{\alpha} \arctan(-\zeta) & \alpha \ne 1 \\ \frac{\pi}{2} & \alpha=1 \end{cases}</math>

This algorithm yields a random variable <math>X\sim S_\alpha(\beta,1,0)</math>. For a detailed proof, see the cited reference.<ref>Template:Cite journal</ref>

To simulate a stable random variable for all admissible values of the parameters <math>\alpha</math>, <math>c</math>, <math>\beta</math> and <math>\mu</math> use the following property: If <math>X \sim S_\alpha(\beta,1,0)</math> then <math display="block">Y = \begin{cases} c X+\mu & \alpha \ne 1 \\ c X+\frac{2}{\pi}\beta c\log c + \mu & \alpha = 1 \end{cases}</math> is <math>S_\alpha(\beta,c,\mu)</math>. For <math>\alpha = 2</math> (and <math>\beta = 0</math>) the CMS method reduces to the well-known Box–Muller transform for generating Gaussian random variables.<ref>Template:Cite book</ref> While other approaches have been proposed in the literature, including application of Bergström<ref>Template:Cite journal</ref> and LePage<ref>Template:Cite journal</ref> series expansions, the CMS method is regarded as the fastest and the most accurate.
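
The CMS recipe above translates directly into a short vectorized routine. The following Python sketch follows the formulas as stated (it is not a reference implementation, and the function name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def rvs_stable(alpha, beta, c=1.0, mu=0.0, size=1, rng=None):
    """Chambers-Mallows-Stuck sampler for S_alpha(beta, c, mu), following the steps above."""
    rng = np.random.default_rng() if rng is None else rng
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)     # uniform angle
    W = rng.exponential(1.0, size)                   # independent exponential with mean 1
    if alpha != 1:
        zeta = -beta * np.tan(np.pi * alpha / 2)
        xi = np.arctan(-zeta) / alpha
        X = ((1 + zeta ** 2) ** (1 / (2 * alpha))
             * np.sin(alpha * (U + xi)) / np.cos(U) ** (1 / alpha)
             * (np.cos(U - alpha * (U + xi)) / W) ** ((1 - alpha) / alpha))
        return c * X + mu
    # alpha = 1: xi = pi/2, so 1/xi = 2/pi
    X = (2 / np.pi) * ((np.pi / 2 + beta * U) * np.tan(U)
                       - beta * np.log((np.pi / 2) * W * np.cos(U) / (np.pi / 2 + beta * U)))
    return c * X + (2 / np.pi) * beta * c * np.log(c) + mu

rng = np.random.default_rng(42)
samples = rvs_stable(alpha=1.5, beta=0.5, c=2.0, mu=1.0, size=100_000, rng=rng)
print(samples.mean())       # close to mu = 1, since the mean exists for alpha > 1
# Sanity check: alpha = 2, beta = 0 should reproduce a normal law with variance 2*c**2.
g = rvs_stable(2.0, 0.0, size=100_000, rng=rng)
print(g.mean(), g.var())    # approximately 0 and 2
</syntaxhighlight>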

Applications

Stable distributions owe their importance in both theory and practice to the generalization of the central limit theorem to random variables without second (and possibly first) order moments and the accompanying self-similarity of the stable family. It was the seeming departure from normality along with the demand for a self-similar model for financial data (i.e. the shape of the distribution for yearly asset price changes should resemble that of the constituent daily or monthly price changes) that led Benoît Mandelbrot to propose that cotton prices follow an alpha-stable distribution with <math>\alpha</math> equal to 1.7.<ref name="BM 1963"/> Lévy distributions are frequently found in analysis of critical behavior and financial data.<ref name=":1" /><ref>Template:Cite book</ref>

They are also found in spectroscopy as a general expression for a quasistatically pressure broadened spectral line.<ref name=":3" />

The Lévy distribution of solar flare waiting time events (the time between flare events) was demonstrated for CGRO BATSE hard X-ray solar flares in December 2001. Analysis of the Lévy statistical signature revealed that two different memory signatures were evident: one related to the solar cycle, and a second whose origin appears to be associated with localized, or combinations of localized, solar active-region effects.<ref>Leddon, D., A Statistical Study of Hard X-Ray Solar Flares</ref>

Other analytic cases

A number of cases of analytically expressible stable distributions are known. Let the stable distribution be expressed by <math>f(x;\alpha,\beta,c,\mu)</math>, then:

  • The Cauchy distribution is given by <math>f(x;1,0,1,0).</math>
  • The Lévy distribution is given by <math>f(x;\tfrac{1}{2},1,1,0).</math>
  • The normal distribution is given by <math>f(x;2,0,1,0).</math>
  • Let <math>S_{\mu,\nu}(z)</math> be a Lommel function, then:<ref name="Garoni2002">Template:Cite journal</ref> <math display="block"> f \left (x;\tfrac{1}{3},0,1,0\right ) = \Re\left ( \frac{2e^{- \frac{i \pi}{4}}}{3 \sqrt{3} \pi} \frac{1}{\sqrt{x^3}} S_{0,\frac{1}{3}} \left (\frac{2e^{\frac{i \pi}{4}}}{3 \sqrt{3}} \frac{1}{\sqrt{x}} \right) \right )</math>
  • Let <math>S(x)</math> and <math>C(x)</math> denote the Fresnel integrals, then:<ref name="Hopcraft1999">Template:Cite journal</ref> <math display="block">f\left (x;\tfrac{1}{2},0,1,0\right ) = \frac{1}{{\sqrt{2\pi|x|^3}}}\left (\sin\left(\tfrac{1}{4|x|}\right) \left [\frac{1}{2} - S\left (\tfrac{1}{\sqrt{2\pi|x|}}\right )\right ]+\cos\left(\tfrac{1}{4|x|} \right) \left [\frac{1}{2}-C\left (\tfrac{1}{\sqrt{2\pi|x|}}\right )\right ]\right )</math>
  • Let <math>K_v(x)</math> be the modified Bessel function of the second kind, then:<ref name="Hopcraft1999"/> <math display="block">f\left (x;\tfrac{1}{3},1,1,0\right ) = \frac{1}{\pi} \frac{2\sqrt{2}}{3^{\frac{7}{4}}} \frac{1}{\sqrt{x^3}} K_{\frac{1}{3}}\left (\frac{4\sqrt{2}}{3^{\frac{9}{4}}} \frac{1}{\sqrt{x}} \right )</math>
  • Let <math>{}_mF_n</math> denote the hypergeometric functions, then:<ref name="Garoni2002"/> <math display="block">\begin{align}
f\left (x;\tfrac{4}{3},0,1,0\right ) &= \frac{3^{\frac{5}{4}}}{4 \sqrt{2 \pi}} \frac{\Gamma \left (\tfrac{7}{12} \right ) \Gamma \left (\tfrac{11}{12} \right )}{\Gamma\left (\tfrac{6}{12} \right ) \Gamma \left (\tfrac{8}{12} \right )} {}_2F_2 \left ( \tfrac{7}{12}, \tfrac{11}{12}; \tfrac{6}{12}, \tfrac{8}{12}; \tfrac{3^3 x^4}{4^4} \right ) - \frac{3^{\frac{11}{4}}x^3}{4^3 \sqrt{2 \pi}} \frac{\Gamma \left (\tfrac{13}{12} \right ) \Gamma \left (\tfrac{17}{12} \right )}{\Gamma \left (\tfrac{18}{12} \right ) \Gamma \left (\tfrac{15}{12} \right )} {}_2F_2 \left ( \tfrac{13}{12}, \tfrac{17}{12}; \tfrac{18}{12}, \tfrac{15}{12}; \tfrac{3^3 x^4}{4^4} \right ) \\[6pt]

f\left (x;\tfrac{3}{2},0,1,0\right ) &= \frac{\Gamma \left(\tfrac{5}{3} \right)}{\pi} {}_2F_3 \left ( \tfrac{5}{12}, \tfrac{11}{12}; \tfrac{1}{3}, \tfrac{1}{2}, \tfrac{5}{6}; - \tfrac{2^2 x^6}{3^6} \right ) - \frac{x^2}{3 \pi} {}_3F_4 \left ( \tfrac{3}{4}, 1, \tfrac{5}{4}; \tfrac{2}{3}, \tfrac{5}{6}, \tfrac{7}{6}, \tfrac{4}{3}; - \tfrac{2^2 x^6}{3^6} \right ) + \frac{7 x^4\Gamma \left(\tfrac{4}{3} \right)}{3^4 \pi ^ 2} {}_2F_3 \left ( \tfrac{13}{12}, \tfrac{19}{12}; \tfrac{7}{6}, \tfrac{3}{2}, \tfrac{5}{3}; -\tfrac{2^2 x^6}{3^6} \right) \end{align}</math> with the latter being the Holtsmark distribution.

  • Let <math>W_{k,\mu}(z)</math> be a Whittaker function, then: <math display="block">\begin{align} f\left (x;\tfrac{2}{3},0,1,0\right ) &= \frac{\sqrt{3}}{6\sqrt{\pi}|x|} \exp\left (\tfrac{2}{27}x^{-2}\right ) W_{-\frac{1}{2},\frac{1}{6}}\left (\tfrac{4}{27}x^{-2}\right ) \\[8pt] f\left (x;\tfrac{2}{3},1,1,0\right ) &= \frac{\sqrt{3}}{\sqrt{\pi}|x|} \exp\left (-\tfrac{16}{27}x^{-2}\right ) W_{\frac{1}{2},\frac{1}{6}} \left (\tfrac{32}{27}x^{-2}\right ) \\[8pt] f\left (x;\tfrac{3}{2},1,1,0\right ) &= \begin{cases} \frac{\sqrt{3}}{\sqrt{\pi}|x|} \exp\left (\frac{1}{27}x^3\right ) W_{\frac{1}{2},\frac{1}{6}}\left (- \frac{2}{27}x^3\right ) & x<0\\ {} \\ \frac{\sqrt{3}}{6\sqrt{\pi}|x|} \exp\left (\frac{1}{27}x^3\right ) W_{-\frac{1}{2},\frac{1}{6}}\left (\frac{2}{27}x^3\right ) & x \geq 0 \end{cases} \end{align}</math>

See also

Software implementations

  • The STABLE program for Windows is available from John Nolan's stable webpage: http://www.robustanalysis.com/public/stable.html. It calculates the density (pdf), cumulative distribution function (cdf) and quantiles for a general stable distribution, performs maximum likelihood estimation of stable parameters, and provides some exploratory data analysis techniques for assessing the fit of a data set.
  • The GNU Scientific Library, written in C, has a package randist which, alongside the Gaussian and Cauchy distributions, includes an implementation of the Lévy alpha-stable distribution, both with and without a skew parameter.
  • libstable is a C implementation of the stable distribution pdf, cdf, random number, quantile and fitting functions (along with a benchmark replication package and an R package).
  • The R package 'stabledist' by Diethelm Wuertz, Martin Maechler and Rmetrics core team members computes stable density, probability, quantiles, and random numbers.
  • A Python implementation is available as scipy.stats.levy_stable in the SciPy package.
  • Julia provides package StableDistributions.jl which has methods of generation, fitting, probability density, cumulative distribution function, characteristic and moment generating functions, quantile and related functions, convolution and affine transformations of stable distributions. It uses modernised algorithms improved by John P. Nolan.<ref name="Nolan2020" />

References

Template:Reflist

Template:ProbDistributions