====Multivariate Gaussian mixture model====
The Bayesian Gaussian mixture model is commonly extended to multivariate normal distributions in order to fit a vector of unknown parameters (denoted in bold). In a multivariate distribution (i.e. one modelling a vector <math>\boldsymbol{x}</math> with ''N'' random variables), one may model a vector of parameters (such as several observations of a signal or patches within an image) using a Gaussian mixture model prior distribution on the vector of estimates given by
<math display="block"> p(\boldsymbol{\theta}) = \sum_{i=1}^K \phi_i \mathcal{N}(\boldsymbol{\mu}_i,\boldsymbol{\Sigma}_i) </math>
where the ''i''<sup>th</sup> vector component is characterized by a normal distribution with weight <math>\phi_i</math>, mean <math>\boldsymbol{\mu}_i</math> and covariance matrix <math>\boldsymbol{\Sigma}_i</math>. To incorporate this prior into a Bayesian estimation, the prior is multiplied with the known distribution <math>p(\boldsymbol{x} \mid \boldsymbol{\theta})</math> of the data <math>\boldsymbol{x}</math> conditioned on the parameters <math>\boldsymbol{\theta}</math> to be estimated. With this formulation, the [[Posterior probability|posterior distribution]] <math>p(\boldsymbol{\theta} \mid \boldsymbol{x})</math> is ''also'' a Gaussian mixture model of the form
<math display="block"> p(\boldsymbol{\theta} \mid \boldsymbol{x}) = \sum_{i=1}^K \tilde{\phi}_i \mathcal{N}(\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i) </math>
with new parameters <math>\tilde{\phi}_i, \tilde{\boldsymbol{\mu}}_i</math> and <math>\tilde{\boldsymbol{\Sigma}}_i</math> that are updated using the [[Expectation-maximization algorithm|EM algorithm]].<ref>{{cite journal |last=Yu |first=Guoshen |title=Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity |journal=IEEE Transactions on Image Processing |volume=21 |date=2012 |pages=2481–2499 |issue=5 |doi=10.1109/tip.2011.2176743 |pmid=22180506 |bibcode=2012ITIP...21.2481G |arxiv=1006.3056 |s2cid=479845}}</ref> Although EM-based parameter updates are well established, providing the initial estimates for these parameters is currently an area of active research. Note that this formulation yields a closed-form solution to the complete posterior distribution. Estimates of the random variable <math>\boldsymbol{\theta}</math> may be obtained via one of several estimators, such as the mean or maximum of the posterior distribution.

Such distributions are useful, for example, for modelling the patch-wise structure of images and clusters of data. In the case of image representation, each Gaussian may be tilted, expanded, and warped according to the covariance matrices <math>\boldsymbol{\Sigma}_i</math>. One Gaussian distribution of the set is fit to each patch (usually of size 8×8 pixels) in the image. Notably, any distribution of points around a cluster (see [[K-means clustering|''k''-means]]) may be accurately modelled given enough Gaussian components, but rarely more than ''K''=20 components are needed to accurately model a given image distribution or cluster of data.
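For illustration, the following minimal sketch (not part of the cited work) fits such a multivariate Gaussian mixture to 8×8 image patches with the EM algorithm; it assumes scikit-learn's <code>GaussianMixture</code> implementation and uses a random placeholder image in place of real data:

<syntaxhighlight lang="python">
# Sketch: fit a multivariate Gaussian mixture (K = 20 components) to 8x8
# image patches via EM. The image below is random placeholder data.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a real grayscale image

# Each 8x8 patch is flattened into a 64-dimensional vector x.
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)

# EM estimates the weights phi_i, means mu_i and covariance matrices Sigma_i.
gmm = GaussianMixture(n_components=20, covariance_type="full", random_state=0)
gmm.fit(X)

print(gmm.weights_.shape, gmm.means_.shape, gmm.covariances_.shape)
# -> (20,) (20, 64) (20, 64, 64)
</syntaxhighlight>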