Matrix normal distribution
In statistics, the matrix normal distribution or matrix Gaussian distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables.
Definition
The probability density function for the random matrix X (n × p) that follows the matrix normal distribution <math>\mathcal{MN}_{n,p}(\mathbf{M}, \mathbf{U}, \mathbf{V})</math> has the form:
- <math>
p(\mathbf{X}\mid\mathbf{M}, \mathbf{U}, \mathbf{V}) = \frac{\exp\left( -\frac{1}{2} \, \mathrm{tr}\left[ \mathbf{V}^{-1} (\mathbf{X} - \mathbf{M})^{T} \mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \right] \right)}{(2\pi)^{np/2} |\mathbf{V}|^{n/2} |\mathbf{U}|^{p/2}} </math>
where <math>\mathrm{tr}</math> denotes trace and M is n × p, U is n × n (the among-row covariance) and V is p × p (the among-column covariance), and the density is understood as the probability density function with respect to the standard Lebesgue measure in <math>\mathbb{R}^{n\times p}</math>, i.e. the measure corresponding to integration with respect to <math>dx_{11} dx_{21}\dots dx_{n1} dx_{12}\dots dx_{n2}\dots dx_{np}</math>.
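For concreteness, the density can be evaluated numerically. Below is a minimal NumPy sketch of the log-density (the helper name matnorm_logpdf is ours, not standard); SciPy's scipy.stats.matrix_normal implements the same distribution and is used only as a cross-check:
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import matrix_normal

def matnorm_logpdf(X, M, U, V):
    """Log-density of MN_{n,p}(M, U, V) at X, following the formula above."""
    n, p = X.shape
    R = X - M
    # tr[V^{-1} R^T U^{-1} R], computed via linear solves instead of explicit inverses
    quad = np.trace(np.linalg.solve(V, R.T) @ np.linalg.solve(U, R))
    _, logdet_U = np.linalg.slogdet(U)
    _, logdet_V = np.linalg.slogdet(V)
    return -0.5 * (quad + n * p * np.log(2 * np.pi)
                   + n * logdet_V + p * logdet_U)

rng = np.random.default_rng(0)
n, p = 3, 2
M = rng.standard_normal((n, p))
U = np.cov(rng.standard_normal((2 * n, n)), rowvar=False) + np.eye(n)  # PD row covariance
V = np.cov(rng.standard_normal((2 * p, p)), rowvar=False) + np.eye(p)  # PD column covariance
X = rng.standard_normal((n, p))
print(np.isclose(matnorm_logpdf(X, M, U, V),
                 matrix_normal.logpdf(X, mean=M, rowcov=U, colcov=V)))  # True
</syntaxhighlight>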
The matrix normal is related to the multivariate normal distribution in the following way:
- <math>\mathbf{X} \sim \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V}),</math>
if and only if
- <math>\mathrm{vec}(\mathbf{X}) \sim \mathcal{N}_{np}(\mathrm{vec}(\mathbf{M}), \mathbf{V} \otimes \mathbf{U})</math>
where <math>\otimes</math> denotes the Kronecker product and <math>\mathrm{vec}(\mathbf{M})</math> denotes the vectorization of <math>\mathbf{M}</math>.
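This equivalence is easy to check numerically; a short sketch, assuming SciPy's matrix_normal and multivariate_normal, and noting that vectorization stacks columns (NumPy's order="F"):
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

def vec(A):
    return A.flatten(order="F")  # stack the columns of A

rng = np.random.default_rng(1)
n, p = 4, 3
M = rng.standard_normal((n, p))
U = np.cov(rng.standard_normal((2 * n, n)), rowvar=False) + np.eye(n)
V = np.cov(rng.standard_normal((2 * p, p)), rowvar=False) + np.eye(p)
X = rng.standard_normal((n, p))

lhs = matrix_normal.logpdf(X, mean=M, rowcov=U, colcov=V)
rhs = multivariate_normal.logpdf(vec(X), mean=vec(M), cov=np.kron(V, U))
print(np.isclose(lhs, rhs))  # True
</syntaxhighlight>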
Proof
The equivalence between the above matrix normal and multivariate normal density functions can be shown using several properties of the trace and Kronecker product, as follows. We start with the argument of the exponent of the matrix normal PDF:
- <math>\begin{align}
&\;\;\;\;-\frac12\text{tr}\left[ \mathbf{V}^{-1} (\mathbf{X} - \mathbf{M})^{T} \mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \right]\\
&= -\frac12\text{vec}\left(\mathbf{X} - \mathbf{M}\right)^T \text{vec}\left(\mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \mathbf{V}^{-1}\right) \\
&= -\frac12\text{vec}\left(\mathbf{X} - \mathbf{M}\right)^T \left(\mathbf{V}^{-1}\otimes\mathbf{U}^{-1}\right)\text{vec}\left(\mathbf{X} - \mathbf{M}\right) \\
&= -\frac12\left[\text{vec}(\mathbf{X}) - \text{vec}(\mathbf{M})\right]^T \left(\mathbf{V}\otimes\mathbf{U}\right)^{-1}\left[\text{vec}(\mathbf{X}) - \text{vec}(\mathbf{M})\right] \end{align}</math> which is the argument of the exponent of the multivariate normal PDF with respect to Lebesgue measure in <math>\mathbb{R}^{n p}</math>. The proof is completed by using the determinant property: <math> |\mathbf{V}\otimes \mathbf{U}| = |\mathbf{V}|^n |\mathbf{U}|^p.</math>
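The determinant identity used in the last step can be spot-checked numerically; a minimal sketch:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, p = 4, 3
U = np.cov(rng.standard_normal((2 * n, n)), rowvar=False) + np.eye(n)
V = np.cov(rng.standard_normal((2 * p, p)), rowvar=False) + np.eye(p)

# |V (x) U| = |V|^n |U|^p
lhs = np.linalg.det(np.kron(V, U))
rhs = np.linalg.det(V) ** n * np.linalg.det(U) ** p
print(np.isclose(lhs, rhs))  # True
</syntaxhighlight>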
Properties
If <math>\mathbf{X} \sim \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V})</math>, then we have the following properties (Gupta & Nagar, 1999):
Expected values
The mean, or expected value, is:
- <math>E[\mathbf{X}] = \mathbf{M}</math>
and we have the following second-order expectations:
- <math>E[(\mathbf{X} - \mathbf{M})(\mathbf{X} - \mathbf{M})^{T}]
= \mathbf{U}\operatorname{tr}(\mathbf{V}) </math>
- <math>E[(\mathbf{X} - \mathbf{M})^{T} (\mathbf{X} - \mathbf{M})]
= \mathbf{V}\operatorname{tr}(\mathbf{U}) </math> where <math>\operatorname{tr}</math> denotes trace.
More generally, for appropriately dimensioned matrices A (p × p), B (n × n), and C (p × n):
- <math>\begin{align}
E[\mathbf{X}\mathbf{A}\mathbf{X}^{T}] &= \mathbf{U}\operatorname{tr}(\mathbf{A}^T\mathbf{V}) + \mathbf{MAM}^T\\
E[\mathbf{X}^T\mathbf{B}\mathbf{X}] &= \mathbf{V}\operatorname{tr}(\mathbf{U}\mathbf{B}^T) + \mathbf{M}^T\mathbf{BM}\\
E[\mathbf{X}\mathbf{C}\mathbf{X}] &= \mathbf{U}\mathbf{C}^{T}\mathbf{V} + \mathbf{MCM} \end{align}</math>
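These second-order identities can be verified by Monte Carlo simulation; a sketch assuming SciPy's matrix_normal for sampling (the discrepancy shrinks as the number of draws grows):
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import matrix_normal

rng = np.random.default_rng(3)
n, p = 3, 2
M = rng.standard_normal((n, p))
U = np.cov(rng.standard_normal((2 * n, n)), rowvar=False) + np.eye(n)
V = np.cov(rng.standard_normal((2 * p, p)), rowvar=False) + np.eye(p)
A = rng.standard_normal((p, p))

Xs = matrix_normal.rvs(mean=M, rowcov=U, colcov=V, size=100_000, random_state=3)
mc = np.mean([X @ A @ X.T for X in Xs], axis=0)       # Monte Carlo E[X A X^T]
exact = U * np.trace(A.T @ V) + M @ A @ M.T           # identity above
print(np.max(np.abs(mc - exact)))  # small; decreases with more draws
</syntaxhighlight>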
Transformation
Transpose transform:
- <math>\mathbf{X}^T \sim \mathcal{MN}_{p\times n}(\mathbf{M}^T, \mathbf{V}, \mathbf{U})
</math>
Linear transform: let D (r × n) be of full rank r ≤ n and C (p × s) be of full rank s ≤ p; then:
- <math>\mathbf{DXC}\sim \mathcal{MN}_{r\times s}(\mathbf{DMC}, \mathbf{DUD}^T, \mathbf{C}^T\mathbf{VC})
</math>
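The covariance parameters of this property follow from <math>\operatorname{vec}(\mathbf{DXC}) = (\mathbf{C}^T \otimes \mathbf{D})\operatorname{vec}(\mathbf{X})</math> together with the Kronecker mixed-product rule; a minimal exact check in NumPy:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n, p, r, s = 4, 3, 2, 2
U = np.cov(rng.standard_normal((2 * n, n)), rowvar=False) + np.eye(n)
V = np.cov(rng.standard_normal((2 * p, p)), rowvar=False) + np.eye(p)
D = rng.standard_normal((r, n))  # full rank r <= n (almost surely)
C = rng.standard_normal((p, s))  # full rank s <= p (almost surely)

# Cov(vec(DXC)) = (C^T (x) D)(V (x) U)(C^T (x) D)^T = (C^T V C) (x) (D U D^T)
K = np.kron(C.T, D)
print(np.allclose(K @ np.kron(V, U) @ K.T,
                  np.kron(C.T @ V @ C, D @ U @ D.T)))  # True
</syntaxhighlight>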
Composition
The product of two matrix normal densities
- <math> \mathcal{MN}(\mathbf{M_1}, \mathbf{U_1}, \mathbf{V_1})\cdot \mathcal{MN}(\mathbf{M_2}, \mathbf{U_2}, \mathbf{V_2}) \propto \mathcal{N}(\mu_c, \Sigma_c)
</math> is proportional to a multivariate normal distribution in <math>\operatorname{vec}(\mathbf{X})</math> with parameters:
- <math> \Sigma_c = (V_1^{-1} \otimes U_1^{-1} + V_2^{-1} \otimes U_2^{-1})^{-1},
</math>
- <math> \mu_c = \Sigma_c \big((V_1^{-1} \otimes U_1^{-1}) \operatorname{vec}(M_1) + (V_2^{-1} \otimes U_2^{-1})\operatorname{vec}(M_2)\big).
</math>
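Note that the product is in general a multivariate normal in <math>\operatorname{vec}(\mathbf{X})</math> rather than a matrix normal, since a sum of Kronecker products need not itself be a Kronecker product. A minimal sketch computing these parameters (the function name mn_product_params is ours):
<syntaxhighlight lang="python">
import numpy as np

def mn_product_params(M1, U1, V1, M2, U2, V2):
    """Mean and covariance of the Gaussian (in vec(X)) proportional to the
    product of two matrix normal densities."""
    def vec(A):
        return A.flatten(order="F")            # stack columns
    P1 = np.kron(np.linalg.inv(V1), np.linalg.inv(U1))  # precision of factor 1
    P2 = np.kron(np.linalg.inv(V2), np.linalg.inv(U2))  # precision of factor 2
    Sigma_c = np.linalg.inv(P1 + P2)
    mu_c = Sigma_c @ (P1 @ vec(M1) + P2 @ vec(M2))
    return mu_c, Sigma_c
</syntaxhighlight>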
Example
Consider a sample of n independent p-dimensional random vectors, identically distributed according to a multivariate normal distribution:
- <math>\mathbf{Y}_i \sim \mathcal{N}_p({\boldsymbol \mu}, {\boldsymbol \Sigma}) \text{ with } i \in \{1,\ldots,n\}</math>.
Defining the n × p matrix <math>\mathbf{X}</math> whose ith row is <math>\mathbf{Y}_i</math>, we obtain:
- <math>\mathbf{X} \sim \mathcal{MN}_{n \times p}(\mathbf{M}, \mathbf{U}, \mathbf{V})</math>
where each row of <math>\mathbf{M}</math> is equal to <math>{\boldsymbol \mu}</math>, that is, <math>\mathbf{M}=\mathbf{1}_n {\boldsymbol \mu}^T</math>; <math>\mathbf{U}</math> is the n × n identity matrix, reflecting the independence of the rows; and <math>\mathbf{V} = {\boldsymbol \Sigma}</math>.
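Equivalently, the joint density of the n i.i.d. rows equals the corresponding matrix normal density; a short numerical check assuming SciPy's matrix_normal and multivariate_normal:
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(5)
n, p = 5, 3
mu = rng.standard_normal(p)
Sigma = np.cov(rng.standard_normal((2 * p, p)), rowvar=False) + np.eye(p)

X = rng.multivariate_normal(mu, Sigma, size=n)   # rows are i.i.d. N(mu, Sigma)
sum_of_rows = multivariate_normal.logpdf(X, mean=mu, cov=Sigma).sum()
mn = matrix_normal.logpdf(X, mean=np.outer(np.ones(n), mu),
                          rowcov=np.eye(n), colcov=Sigma)
print(np.isclose(sum_of_rows, mn))  # True
</syntaxhighlight>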
Maximum likelihood parameter estimation
Given k matrices, each of size n × p, denoted <math>\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_k</math>, which we assume have been sampled i.i.d. from a matrix normal distribution, the maximum likelihood estimate of the parameters can be obtained by maximizing:
- <math>
\prod_{i=1}^k \mathcal{MN}_{n\times p}(\mathbf{X}_i\mid\mathbf{M},\mathbf{U},\mathbf{V}). </math> The solution for the mean has a closed form, namely
- <math>
\mathbf{M} = \frac{1}{k} \sum_{i=1}^k\mathbf{X}_i </math> but the covariance parameters do not. They can, however, be maximized iteratively, by alternating between the zero-gradient updates:
- <math>
\mathbf{U} = \frac{1}{kp} \sum_{i=1}^k(\mathbf{X}_i-\mathbf{M})\mathbf{V}^{-1}(\mathbf{X}_i-\mathbf{M})^T </math> and
- <math>
\mathbf{V} = \frac{1}{kn} \sum_{i=1}^k(\mathbf{X}_i-\mathbf{M})^T\mathbf{U}^{-1}(\mathbf{X}_i-\mathbf{M}). </math> Iterating these two updates to convergence yields the maximum likelihood estimates. The covariance parameters are non-identifiable in the sense that for any scale factor, s>0, we have:
- <math>
\mathcal{MN}_{n\times p}(\mathbf{X}\mid\mathbf{M},\mathbf{U},\mathbf{V}) = \mathcal{MN}_{n\times p}(\mathbf{X}\mid\mathbf{M},s\mathbf{U},\tfrac{1}{s}\mathbf{V}) . </math>
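A minimal NumPy sketch of this alternating scheme (sometimes called the flip-flop algorithm), with the scale indeterminacy fixed by normalizing <math>\operatorname{tr}(\mathbf{U}) = n</math>; the function name and the normalization choice are ours:
<syntaxhighlight lang="python">
import numpy as np

def matnorm_mle(Xs, n_iter=100, tol=1e-10):
    """Flip-flop MLE for MN(M, U, V) from a stack Xs of shape (k, n, p)."""
    k, n, p = Xs.shape
    M = Xs.mean(axis=0)                      # closed-form mean estimate
    R = Xs - M                               # residuals, shape (k, n, p)
    U, V = np.eye(n), np.eye(p)
    for _ in range(n_iter):
        U_new = sum(r @ np.linalg.solve(V, r.T) for r in R) / (k * p)
        V_new = sum(r.T @ np.linalg.solve(U_new, r) for r in R) / (k * n)
        c = np.trace(U_new) / n              # fix the scale: (sU, V/s) is the same model
        U_new, V_new = U_new / c, V_new * c
        if np.allclose(U_new, U, atol=tol) and np.allclose(V_new, V, atol=tol):
            U, V = U_new, V_new
            break
        U, V = U_new, V_new
    return M, U, V
</syntaxhighlight>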
Drawing values from the distribution
Sampling from the matrix normal distribution is a special case of the sampling procedure for the multivariate normal distribution. Let <math>\mathbf{X}</math> be an n by p matrix of np independent samples from the standard normal distribution, so that
- <math>
\mathbf{X}\sim\mathcal{MN}_{n\times p}(\mathbf{0},\mathbf{I},\mathbf{I}). </math> Then let
- <math>
\mathbf{Y}=\mathbf{M}+\mathbf{A}\mathbf{X}\mathbf{B}, </math> so that
- <math>
\mathbf{Y}\sim\mathcal{MN}_{n\times p}(\mathbf{M},\mathbf{AA}^T,\mathbf{B}^T\mathbf{B}), </math> where A and B can be chosen by Cholesky decomposition or a similar matrix square root operation.
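A minimal sketch of this procedure using NumPy's Cholesky factorization (the function name matnorm_rvs is ours):
<syntaxhighlight lang="python">
import numpy as np

def matnorm_rvs(M, U, V, size=1, seed=None):
    """Draw samples from MN(M, U, V) as Y = M + A Z B with U = A A^T, V = B^T B."""
    rng = np.random.default_rng(seed)
    n, p = M.shape
    A = np.linalg.cholesky(U)        # lower triangular, A A^T = U
    B = np.linalg.cholesky(V).T      # upper triangular, B^T B = V
    Z = rng.standard_normal((size, n, p))  # i.i.d. standard normal entries
    return M + A @ Z @ B             # batched matrix products over `size`
</syntaxhighlight>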
Relation to other distributions
Dawid (1981) provides a discussion of the relation of the matrix-valued normal distribution to other distributions, including the Wishart distribution, inverse-Wishart distribution and matrix t-distribution, but uses different notation from that employed here.