Mixture distribution
{{Short description|Probability distribution}}
{{See also|Mixture model|Compound probability distribution}}
In [[probability]] and [[statistics]], a '''mixture distribution''' is the [[probability distribution]] of a [[random variable]] that is derived from a collection of other random variables as follows: first, a random variable is selected by chance from the collection according to given probabilities of selection, and then the value of the selected random variable is realized. The underlying random variables may be random real numbers, or they may be [[random vector]]s (each having the same dimension), in which case the mixture distribution is a [[multivariate distribution]].

In cases where each of the underlying random variables is [[Continuous random variable|continuous]], the outcome variable will also be continuous and its [[probability density function]] is sometimes referred to as a '''mixture density'''. The [[cumulative distribution function]] (and the [[probability density function]] if it exists) can be expressed as a [[convex combination]] (i.e. a weighted sum, with non-negative weights that sum to 1) of other distribution functions and density functions. The individual distributions that are combined to form the mixture distribution are called the '''mixture components''', and the probabilities (or weights) associated with each component are called the '''mixture weights'''. The number of components in a mixture distribution is often restricted to being finite, although in some cases the components may be [[countable|countably infinite]] in number. More general cases (i.e. an [[uncountable]] set of component distributions), as well as the countable case, are treated under the title of '''[[compound probability distribution|compound distributions]]'''.

A distinction needs to be made between a [[random variable]] whose distribution function or density is the sum of a set of components (i.e. a mixture distribution) and a random variable whose value is the sum of the values of two or more underlying random variables, in which case the distribution is given by the [[convolution]] operator. As an example, the sum of two [[Multivariate normal distribution|jointly normally distributed]] random variables, each with different means, will still have a normal distribution. On the other hand, a mixture density created as a mixture of two normal distributions with different means will have two peaks provided that the two means are far enough apart, showing that this distribution is radically different from a normal distribution.

Mixture distributions arise in many contexts in the literature and arise naturally where a [[statistical population]] contains two or more [[subpopulation]]s. They are also sometimes used as a means of representing non-normal distributions. Data analysis concerning [[statistical model]]s involving mixture distributions is discussed under the title of [[mixture model]]s, while the present article concentrates on simple probabilistic and statistical properties of mixture distributions and how these relate to properties of the underlying distributions.

== Finite and countable mixtures ==
[[Image:Gaussian-mixture-example.svg|thumb|Density of a mixture of three normal distributions ({{math|1=''μ'' = 5, 10, 15}}, {{math|1=''σ'' = 2}}) with equal weights. Each component is shown as a weighted density (each integrating to 1/3)]]
Given a finite set of probability density functions {{math|''p''<sub>1</sub>(''x'')}}, ..., {{math|''p<sub>n</sub>''(''x'')}}, or corresponding cumulative distribution functions {{math|''P''<sub>1</sub>(''x'')}}, ..., {{math|''P<sub>n</sub>''(''x'')}} and '''weights''' {{math|''w''<sub>1</sub>}}, ..., {{math|''w<sub>n</sub>''}} such that {{math|''w<sub>i</sub>'' ≥ 0}} and {{math|1=∑''w<sub>i</sub>'' = 1}}, the mixture distribution can be represented by writing either the density, {{math|''f''}}, or the distribution function, {{math|''F''}}, as a sum (which in both cases is a convex combination):
<math display="block"> F(x) = \sum_{i=1}^n \, w_i \, P_i(x), </math>
<math display="block"> f(x) = \sum_{i=1}^n \, w_i \, p_i(x) .</math>
This type of mixture, being a finite sum, is called a '''finite mixture''', and in applications, an unqualified reference to a "mixture density" usually means a finite mixture. The case of a countably infinite set of components is covered formally by allowing <math> n = \infty\!</math>.

== Uncountable mixtures ==
{{Main article|Compound distribution}}
Where the set of component distributions is [[uncountable]], the result is often called a [[compound probability distribution]]. The construction of such distributions has a formal similarity to that of mixture distributions, with either infinite summations or integrals replacing the finite summations used for finite mixtures.

Consider a probability density function {{math|''p''(''x'';''a'')}} for a variable {{mvar|x}}, parameterized by {{mvar|a}}. That is, for each value of {{mvar|a}} in some set {{mvar|A}}, {{math|''p''(''x'';''a'')}} is a probability density function with respect to {{mvar|x}}.
Given a probability density function {{mvar|w}} (meaning that {{mvar|w}} is nonnegative and integrates to 1), the function
<math display="block"> f(x) = \int_A \, w(a) \, p(x;a) \, da </math>
is again a probability density function for {{mvar|x}}. A similar integral can be written for the cumulative distribution function. Note that the formulae here reduce to the case of a finite or infinite mixture if the density {{mvar|w}} is allowed to be a [[generalized function]] representing the "derivative" of the cumulative distribution function of a [[discrete distribution]].

== Mixtures within a parametric family ==
The mixture components are often not arbitrary probability distributions, but instead are members of a [[parametric family]] (such as normal distributions), with different values for a parameter or parameters. In such cases, assuming that it exists, the density can be written in the form of a sum as:
<math display="block"> f(x; a_1, \ldots , a_n) = \sum_{i=1}^n \, w_i \, p(x;a_i) </math>
for one parameter, or
<math display="block"> f(x; a_1, \ldots , a_n, b_1, \ldots , b_n) = \sum_{i=1}^n \, w_i \, p(x;a_i,b_i) </math>
for two parameters, and so forth.

== Properties ==
=== Convexity ===
A general [[linear combination]] of probability density functions is not necessarily a probability density, since it may be negative or it may integrate to something other than 1. However, a [[convex combination]] of probability density functions preserves both of these properties (non-negativity and integrating to 1), and thus mixture densities are themselves probability density functions.

=== Moments ===
Let {{math|''X''<sub>1</sub>}}, ..., {{math|''X''<sub>''n''</sub>}} denote random variables from the {{mvar|n}} component distributions, and let {{mvar|X}} denote a random variable from the mixture distribution.
Then, for any function {{math|''H''(·)}} for which <math>\operatorname{E}[H(X_i)]</math> exists, and assuming that the component densities {{math|''p<sub>i</sub>''(''x'')}} exist, <math display="block">\begin{align} \operatorname{E}[H(X)] & = \int_{-\infty}^\infty H(x) \sum_{i = 1}^n w_i p_i(x) \, dx \\ & = \sum_{i = 1}^n w_i \int_{-\infty}^\infty p_i(x) H(x) \, dx = \sum_{i = 1}^n w_i \operatorname{E}[H(X_i)]. \end{align}</math> The {{mvar|j}}th moment about zero (i.e. choosing {{math|1=''H''(''x'') = ''x''{{i sup|''j''}}}}) is simply a weighted average of the {{mvar|j}}-th moments of the components. Moments about the mean {{math|1=''H''(''x'') = (''x − μ''){{i sup|''j''}}}} involve a binomial expansion:<ref>{{harvtxt|Frühwirth-Schnatter|2006|at=Ch.1.2.4}}</ref> <math display="block">\begin{align} \operatorname{E}\left[{\left(X - \mu\right)}^j\right] & = \sum_{i=1}^n w_i \operatorname{E}\left[{\left(X_i - \mu_i + \mu_i - \mu\right)}^j\right] \\ & = \sum_{i=1}^n w_i \sum_{k=0}^j \binom{j}{k} {\left(\mu_i - \mu\right)}^{j-k} \operatorname{E}\left[{\left(X_i - \mu_i\right)}^k\right], \end{align}</math> where {{math|''μ<sub>i</sub>''}} denotes the mean of the {{mvar|i}}-th component. 
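The weighted-moment identities above can be checked numerically. The sketch below (with made-up weights and component parameters, chosen purely for illustration) computes the mean and variance of a two-component normal mixture in two ways: once from the raw component moments, and once via the binomial expansion of the central moment with {{math|1=''j'' = 2}}, using the fact that a normal component has central moments 1, 0, and ''σ''<sub>''i''</sub><sup>2</sup> for {{math|1=''k'' = 0, 1, 2}}.

```python
from math import comb

# Illustrative two-component normal mixture (made-up parameters).
w = [0.3, 0.7]        # mixture weights: non-negative, summing to 1
mu = [0.0, 4.0]       # component means
sigma = [1.0, 2.0]    # component standard deviations

# Mixture mean: E[X] = sum_i w_i * mu_i
mix_mean = sum(wi * mi for wi, mi in zip(w, mu))

# Mixture variance via raw moments: E[X^2] - mu^2,
# using E[X_i^2] = sigma_i^2 + mu_i^2 for each component.
var_raw = sum(wi * (si**2 + mi**2) for wi, mi, si in zip(w, mu, sigma)) - mix_mean**2

# Same variance via the binomial expansion of E[(X - mu)^2]; a normal
# component has central moments 1, 0, sigma_i^2 for k = 0, 1, 2.
def central(si, k):
    return [1.0, 0.0, si**2][k]

var_binom = sum(
    wi * sum(comb(2, k) * (mi - mix_mean) ** (2 - k) * central(si, k) for k in range(3))
    for wi, mi, si in zip(w, mu, sigma)
)

print(mix_mean)            # ≈ 2.8
print(var_raw, var_binom)  # both ≈ 6.46
```

Both routes agree, as the expansion predicts; the general formula works the same way for higher {{mvar|j}} once the component central moments are known.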
In the case of a mixture of one-dimensional distributions with weights {{math|''w<sub>i</sub>''}}, means {{math|''μ<sub>i</sub>''}} and variances {{math|''σ''<sub>''i''</sub><sup>2</sup>}}, the total mean and variance will be:
<math display="block"> \operatorname{E}[X] = \mu = \sum_{i = 1}^n w_i \mu_i ,</math>
<math display="block"> \begin{align} \operatorname{E}\left[(X - \mu)^2\right] & = \sigma^2 \\ & = \operatorname{E}[X^2] - \mu^2 & (\text{standard variance reformulation})\\ & = \left(\sum_{i=1}^n w_i \operatorname{E}\left[X_i^2\right]\right) - \mu^{2} \\ & = \sum_{i=1}^n w_i(\sigma_i^2 + \mu_i^2)- \mu^2 & ( \sigma_i^2 = \operatorname{E}[X_i^2] - \mu_i^2 \implies \operatorname{E}[X_i^2] = \sigma_i^2 + \mu_i^2) \end{align}</math>
These relations highlight the potential of mixture distributions to display non-trivial higher-order moments such as [[skewness]] and [[kurtosis]] ([[fat tail]]s) and multi-modality, even in the absence of such features within the components themselves. Marron and Wand (1992) give an illustrative account of the flexibility of this framework.<ref name="Marron92">{{Cite journal|title=Exact Mean Integrated Squared Error |first1=J. S. |last1=Marron |first2=M. P. |last2=Wand |journal=[[The Annals of Statistics]] |volume=20 |year=1992 |pages=712–736 |issue=2 |doi=10.1214/aos/1176348653 |doi-access=free }}</ref>

=== Modes ===
The question of [[Multimodal distribution|multimodality]] is simple for some cases, such as mixtures of [[exponential distribution]]s: all such mixtures are [[Unimodality|unimodal]].<ref>Frühwirth-Schnatter (2006, Ch.1)</ref> However, for mixtures of [[normal distribution]]s the question is more complex. Conditions for the number of modes in a multivariate normal mixture are explored by Ray & Lindsay,<ref name="RayLindsay">{{citation | title = The topography of multivariate normal mixtures | last1 = Ray | first1 = R. | last2 = Lindsay | first2 = B. | year = 2005 | journal = The Annals of Statistics | volume = 33 | number = 5 | pages = 2042–2065 | doi = 10.1214/009053605000000417 | arxiv = math/0602238}}</ref> extending earlier work on univariate<ref name=Robertson1969>Robertson CA, Fryer JG (1969) Some descriptive properties of normal mixtures. Skand Aktuarietidskr 137–146</ref><ref name=Behboodian1970>{{cite journal | last1 = Behboodian | first1 = J | year = 1970 | title = On the modes of a mixture of two normal distributions | journal = Technometrics | volume = 12 | pages = 131–139 | doi=10.2307/1267357 | jstor = 1267357 }}</ref> and multivariate<ref>{{cite book | last1 = Carreira-Perpiñán | first1 = M Á | last2 = Williams | first2 = C | year = 2003 | title = On the modes of a Gaussian mixture | series = Published as: Lecture Notes in Computer Science 2695 | publisher = [[Springer-Verlag]] | pages = 625–640 | doi=10.1007/3-540-44935-3_44 | issn = 0302-9743 | url = http://faculty2.ucmerced.edu/mcarreira-perpinan/papers/EDI-INF-RR-0159.pdf}}</ref> distributions.

Here the problem of evaluating the modes of an {{mvar|n}}-component mixture in a {{mvar|D}}-dimensional space is reduced to the identification of critical points (local minima, maxima and [[saddle point]]s) on a [[manifold]] referred to as the [[Ridge (differential geometry)|ridgeline surface]], which is the image of the ridgeline function
<math display="block"> x^{*}(\alpha) = \left[ \sum_{i=1}^{n} \alpha_i \Sigma_i^{-1} \right]^{-1} \times \left[ \sum_{i=1}^{n} \alpha_i \Sigma_i^{-1} \mu_i \right], </math>
where <math>\alpha</math> belongs to the <math>(n-1)</math>-dimensional standard [[simplex]]:
<math display="block"> \mathcal{S}_n = \left\{ \alpha \in \mathbb{R}^n: \alpha_i \in [0,1], \sum_{i=1}^n \alpha_i = 1 \right\} </math>
and <math>\Sigma_i \in \Reals^{D\times D},\, \mu_i \in \Reals^D</math> correspond to the covariance and mean of the {{mvar|i}}-th component.
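In one dimension each <math>\Sigma_i</math> reduces to a scalar variance, and the ridgeline function becomes a precision-weighted average of the component means. The sketch below (the parameter values are invented for illustration, not taken from the sources cited) evaluates it directly:

```python
def ridgeline(alpha, mu, var):
    """1-D ridgeline x*(alpha), with each Sigma_i reduced to the scalar variance var[i]:

    x*(alpha) = (sum_i alpha_i / var_i)^(-1) * (sum_i alpha_i * mu_i / var_i)
    """
    num = sum(a * m / v for a, m, v in zip(alpha, mu, var))
    den = sum(a / v for a, v in zip(alpha, var))
    return num / den

# With equal variances the ridgeline is just the alpha-weighted average of the
# means, so every candidate mode lies between the component means.
print(ridgeline([0.5, 0.5], [0.0, 10.0], [1.0, 1.0]))    # 5.0
print(ridgeline([0.25, 0.75], [0.0, 10.0], [1.0, 1.0]))  # 7.5
```

Sweeping <math>\alpha</math> over the simplex traces out the curve on which all critical points of the mixture density lie.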
Ray & Lindsay<ref name="RayLindsay" /> consider the case in which <math>n-1 < D</math>, showing a one-to-one correspondence between modes of the mixture and modes of the '''ridge elevation function''' <math>h(\alpha) = q(x^*(\alpha))</math>, where {{mvar|q}} denotes the mixture density; thus one may identify the modes by solving <math> \frac{d h(\alpha)}{d \alpha} = 0 </math> with respect to <math>\alpha</math> and determining the value <math>x^*(\alpha)</math>. Using graphical tools, the potential multi-modality of mixtures with <math>n \in \{2,3\}</math> components is demonstrated; in particular, it is shown that the number of modes may exceed <math>n</math> and that the modes need not coincide with the component means. For two components they develop a graphical tool for analysis by instead solving the aforementioned derivative with respect to the first mixing weight <math>w_1</math> (which also determines the second mixing weight through <math>w_2 = 1-w_1</math>) and expressing the solutions as a function <math>\Pi(\alpha), \,\alpha \in [0,1]</math>, so that the number and location of modes for a given value of <math>w_1</math> correspond to the number of intersections of the graph with the line <math>\Pi(\alpha) = w_1</math>. This in turn can be related to the number of oscillations of the graph and therefore to solutions of <math> \frac{d \Pi(\alpha)}{d \alpha} = 0 </math>, leading to an explicit solution for the case of a two-component mixture with <math>\Sigma_1 = \Sigma_2 = \Sigma </math> (sometimes called a [[homoscedastic]] mixture), given by
<math display="block"> 1 - \alpha(1-\alpha) d_M(\mu_1, \mu_2, \Sigma)^2 </math>
where <math display="inline"> d_M(\mu_1,\mu_2,\Sigma) = \sqrt{(\mu_2-\mu_1)^\mathsf{T} \Sigma^{-1} (\mu_2-\mu_1)} </math> is the [[Mahalanobis distance]] between <math>\mu_1</math> and <math>\mu_2</math>. Since the above expression is quadratic in <math>\alpha</math>, it follows that in this instance there are at most two modes, irrespective of the dimension or the weights.
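For the homoscedastic two-component case, the quadratic above can be evaluated directly. In the following sketch (parameter values invented for illustration), the Mahalanobis distance between the means exceeds 2, so the quadratic dips below zero over part of <math>[0,1]</math> and a second mode becomes possible:

```python
import math

def mahalanobis(mu1, mu2, sigma_inv):
    """d_M = sqrt((mu2 - mu1)^T Sigma^{-1} (mu2 - mu1)), for small explicit matrices."""
    d = [b - a for a, b in zip(mu1, mu2)]
    n = len(d)
    q = sum(d[i] * sigma_inv[i][j] * d[j] for i in range(n) for j in range(n))
    return math.sqrt(q)

def ridge_quadratic(alpha, d_m):
    """Evaluate 1 - alpha(1 - alpha) d_M^2; sign changes over alpha in [0, 1]
    signal where a second mode can appear."""
    return 1.0 - alpha * (1.0 - alpha) * d_m**2

# Identity covariance, means 3 apart: d_M = 3 > 2, so the quadratic goes
# negative near alpha = 1/2 and the mixture can be bimodal.
d = mahalanobis([0.0, 0.0], [3.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print(d)                        # 3.0
print(ridge_quadratic(0.5, d))  # 1 - 0.25 * 9 = -1.25
```

Since <math>\alpha(1-\alpha)</math> peaks at 1/4, the quadratic can only change sign when <math>d_M > 2</math>, consistent with the at-most-two-modes conclusion above.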
For normal mixtures with general <math>n>2</math> and <math>D>1</math>, a lower bound for the maximum number of possible modes, and{{snd}}conditionally on the assumption that the maximum number is finite{{snd}}an upper bound are known. For those combinations of <math>n</math> and <math>D</math> for which the maximum number is known, it matches the lower bound.<ref>{{citation | title = Maximum number of modes of Gaussian mixtures | last1 = Améndola | first1 = C. | last2 = Engström | first2 = A. | last3 = Haase | first3 = C. | year = 2020 | journal = Information and Inference: A Journal of the IMA | volume = 9 | number = 3 | pages = 587–600 | doi = 10.1093/imaiai/iaz013 | arxiv = 1702.05066}}</ref>

== Examples ==
=== Two normal distributions ===
Simple examples can be given by a mixture of two normal distributions. (See [[Multimodal distribution#Mixture of two normal distributions]] for more details.)

Given an equal (50/50) mixture of two normal distributions with the same standard deviation and different means ([[homoscedastic]]), the overall distribution will exhibit low [[kurtosis]] relative to a single normal distribution – the means of the subpopulations fall on the shoulders of the overall distribution. If the means are sufficiently separated, namely by more than twice the (common) standard deviation, so that <math>\left|\mu_1 - \mu_2\right| > 2\sigma,</math> these form a [[bimodal distribution]]; otherwise the mixture simply has a wide peak.<ref name="Schilling2002">{{Cite journal|title=Is human height bimodal?|first1=Mark F. |last1=Schilling |first2=Ann E. |last2=Watkins |author2-link=Ann E. Watkins |first3=William |last3=Watkins |journal=[[The American Statistician]] |doi=10.1198/00031300265 |volume=56 |year=2002 |pages=223–229 |issue=3}}</ref> The variation of the overall population will also be greater than the variation of the two subpopulations (due to spread from different means), and thus exhibits [[overdispersion]] relative to a normal distribution with fixed variation {{mvar|σ}}, though it will not be overdispersed relative to a normal distribution with variation equal to the variation of the overall population.

Alternatively, given two subpopulations with the same mean and different standard deviations, the overall population will exhibit high kurtosis, with a sharper peak and heavier tails (and correspondingly shallower shoulders) than a single distribution.

<gallery>
File:Bimodal.png|Univariate mixture distribution, showing bimodal distribution
File:Bimodal-bivariate-small.png|Multivariate mixture distribution, showing four modes
</gallery>

=== A normal and a Cauchy distribution ===
The following example is adapted from Hampel,<ref>{{citation| last= Hampel | first= Frank | title= Is statistics too difficult? | journal= Canadian Journal of Statistics | year= 1998 | volume= 26 | pages= 497–513 | doi= 10.2307/3315772 | hdl= 20.500.11850/145503 | hdl-access= free }}</ref> who credits [[John Tukey]]. Consider the mixture distribution defined by
{{block indent | em = 1.6 | text = {{math|1=''F''(''x'') = (1 − 10<sup>−10</sup>) ([[Normal distribution|standard normal]]) + 10<sup>−10</sup> ([[Cauchy distribution|standard Cauchy]])}}.}}
The mean of [[i.i.d.]] observations from {{math|''F''(''x'')}} behaves "normally" except for exorbitantly large samples, although the mean of {{math|''F''(''x'')}} does not even exist.
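The practical point of the Hampel–Tukey example can be seen by simulation. The sketch below (an illustration with a fixed seed; the Cauchy component is sampled via the inverse-CDF tangent transform) draws from {{math|''F''}} and shows that at ordinary sample sizes the contaminating component is essentially never selected, so the sample mean behaves like that of a standard normal even though the mean of {{math|''F''}} is undefined:

```python
import math
import random

def sample_f(rng):
    """Draw once from F = (1 - 1e-10) * N(0, 1) + 1e-10 * Cauchy(0, 1)."""
    if rng.random() < 1e-10:
        # Standard Cauchy via the inverse-CDF (tangent) transform.
        return math.tan(math.pi * (rng.random() - 0.5))
    return rng.gauss(0.0, 1.0)

rng = random.Random(12345)  # fixed seed for reproducibility
xs = [sample_f(rng) for _ in range(100_000)]
sample_mean = sum(xs) / len(xs)
print(abs(sample_mean) < 0.05)  # True: the sample mean "behaves normally" here
```

At this sample size the probability of ever hitting the Cauchy branch is about 10<sup>−5</sup>, which is why the nonexistence of the mean of {{math|''F''}} only shows up in exorbitantly large samples.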
== Applications ==
{{Further|Mixture model}}
Mixture densities are complicated densities expressible in terms of simpler densities (the mixture components), and are used both because they provide a good model for certain data sets (where different subsets of the data exhibit different characteristics and can best be modeled separately), and because they can be more mathematically tractable, since the individual mixture components are more easily studied than the overall mixture density.

Mixture densities can be used to model a [[statistical population]] with [[subpopulation]]s, where the mixture components are the densities on the subpopulations, and the weights are the proportions of each subpopulation in the overall population. Mixture densities can also be used to model [[experimental error]] or contamination – one assumes that most of the samples measure the desired phenomenon, with some samples from a different, erroneous distribution. Parametric statistics that assume no error often fail on such mixture densities – for example, statistics that assume normality often fail disastrously in the presence of even a few [[outliers]] – and instead one uses [[robust statistics]].

In [[meta-analysis]] of separate studies, [[study heterogeneity]] causes the distribution of results to be a mixture distribution, and leads to [[overdispersion]] of results relative to predicted error. For example, in a [[statistical survey]], the [[margin of error]] (determined by sample size) predicts the [[sampling error]] and hence the dispersion of results on repeated surveys. The presence of study heterogeneity (studies having different [[sampling bias]]es) increases the dispersion relative to the margin of error.
== See also ==
* [[Compound distribution]]
* [[Contaminated normal distribution]]
* [[Convex combination]]
* [[Giry monad]]
* [[Expectation-maximization algorithm|Expectation-maximization (EM) algorithm]]
* Not to be confused with: [[list of convolutions of probability distributions]]
* [[Product distribution]]

=== Mixture ===
* [[Mixture (probability)]]
* [[Mixture model]]

=== Hierarchical models ===
* [[Graphical model]]
* [[Hierarchical Bayes model]]

== Notes ==
{{Reflist}}

== References ==
{{refbegin}}
* {{citation |title=Finite Mixture and Markov Switching Models |last1=Frühwirth-Schnatter |first1=Sylvia |date=2006 |publisher=Springer |isbn=978-1-4419-2194-9 }}
* {{citation | title=Mixture models: theory, geometry and applications | last1=Lindsay | first1=Bruce G. | date=1995 | series=NSF-CBMS Regional Conference Series in Probability and Statistics | volume=5 | publisher=Institute of Mathematical Statistics | location=Hayward, CA, USA | isbn=0-940600-32-3 | jstor=4153184}}
* {{citation | title=Mixture models | last1=Seidel | first1=Wilfried | date=2010 | editor-last=Lovric | editor-first=M. | encyclopedia=International Encyclopedia of Statistical Science | publisher=Springer | location=Heidelberg | pages=827–829 | doi=10.1007/978-3-642-04898-2 | isbn=978-3-642-04898-2 | arxiv=0909.0389 }}
* {{citation | last1 = Yao | first1 = Weixin | last2 = Xiang | first2 = Sijia | isbn = 978-0367481827 | publisher = Chapman & Hall/CRC Press | location = Boca Raton, FL | title = Mixture Models: Parametric, Semiparametric, and New Directions | url=https://www.routledge.com/Mixture-Models-Parametric-Semiparametric-and-New-Directions/Yao-Xiang/p/book/9780367481827 | year = 2024}}
{{refend}}

{{Authority control}}
{{DEFAULTSORT:Mixture Density}}
[[Category:Systems of probability distributions]]