===Inverse of the covariance matrix===
The [[invertible matrix|inverse of this matrix]], <math>\operatorname{K}_{\mathbf{X}\mathbf{X}}^{-1}</math>, if it exists, is the inverse covariance matrix, also known as the ''[[precision matrix]]'' (or ''concentration matrix'').<ref>{{cite book |title=All of Statistics: A Concise Course in Statistical Inference |url=https://archive.org/details/springer_10.1007-978-0-387-21736-9 |first=Larry |last=Wasserman |year=2004 |publisher=Springer |isbn=0-387-40272-1}}</ref>

Just as the covariance matrix can be written as the rescaling of a correlation matrix by the marginal standard deviations:
<math display="block">\operatorname{cov}(\mathbf{X}) = \begin{bmatrix} \sigma_{x_1} & & & 0\\ & \sigma_{x_2}\\ & & \ddots\\ 0 & & & \sigma_{x_n} \end{bmatrix} \begin{bmatrix} 1 & \rho_{x_1, x_2} & \cdots & \rho_{x_1, x_n}\\ \rho_{x_2, x_1} & 1 & \cdots & \rho_{x_2, x_n}\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{x_n, x_1} & \rho_{x_n, x_2} & \cdots & 1\\ \end{bmatrix} \begin{bmatrix} \sigma_{x_1} & & & 0\\ & \sigma_{x_2}\\ & & \ddots\\ 0 & & & \sigma_{x_n} \end{bmatrix}</math>

so, using the ideas of [[partial correlation]] and partial standard deviation, the inverse covariance matrix can be expressed analogously:
<math display="block">\operatorname{cov}(\mathbf{X})^{-1} = \begin{bmatrix} \frac{1}{\sigma_{x_1|x_2...}} & & & 0\\ & \frac{1}{\sigma_{x_2|x_1,x_3...}}\\ & & \ddots\\ 0 & & & \frac{1}{\sigma_{x_n|x_1...x_{n-1}}} \end{bmatrix} \begin{bmatrix} 1 & -\rho_{x_1, x_2\mid x_3...} & \cdots & -\rho_{x_1, x_n\mid x_2...x_{n-1}}\\ -\rho_{x_2, x_1\mid x_3...} & 1 & \cdots & -\rho_{x_2, x_n\mid x_1,x_3...x_{n-1}}\\ \vdots & \vdots & \ddots & \vdots\\ -\rho_{x_n, x_1\mid x_2...x_{n-1}} & -\rho_{x_n, x_2\mid x_1,x_3...x_{n-1}} & \cdots & 1\\ \end{bmatrix} \begin{bmatrix} \frac{1}{\sigma_{x_1|x_2...}} & & & 0\\ & \frac{1}{\sigma_{x_2|x_1,x_3...}}\\ & & \ddots\\ 0 & & & \frac{1}{\sigma_{x_n|x_1...x_{n-1}}} \end{bmatrix}</math>
Here <math>\sigma_{x_i\mid\ldots}</math> denotes the partial standard deviation of <math>x_i</math> given all the remaining variables, and <math>\rho_{x_i, x_j\mid\ldots}</math> the partial correlation of <math>x_i</math> and <math>x_j</math> given all the remaining variables. This duality motivates a number of other dualities between marginalizing and conditioning for Gaussian random variables.
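As a minimal numerical sketch (not part of the article; all variable names are illustrative), the factorization above can be checked with NumPy using the standard entrywise identities <math>\sigma_{x_i\mid\ldots} = 1/\sqrt{(\Sigma^{-1})_{ii}}</math> and <math>\rho_{x_i, x_j\mid\ldots} = -(\Sigma^{-1})_{ij}/\sqrt{(\Sigma^{-1})_{ii}(\Sigma^{-1})_{jj}}</math> for a randomly generated positive-definite covariance matrix:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)      # an arbitrary positive-definite covariance matrix
P = np.linalg.inv(Sigma)             # precision (inverse covariance) matrix

# Partial standard deviations given all remaining variables: sigma_{x_i | rest} = 1 / sqrt(P_ii)
partial_sd = 1.0 / np.sqrt(np.diag(P))

# Partial correlations given all remaining variables: rho_{x_i, x_j | rest} = -P_ij / sqrt(P_ii * P_jj)
partial_corr = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))
np.fill_diagonal(partial_corr, 1.0)

# Middle factor: ones on the diagonal, minus the partial correlations off the diagonal
middle = -partial_corr
np.fill_diagonal(middle, 1.0)

# Outer factors: diag(1 / sigma_{x_i | rest})
D = np.diag(1.0 / partial_sd)

# The triple product reproduces the precision matrix
assert np.allclose(D @ middle @ D, P)
</syntaxhighlight>
The check passes for any positive-definite <code>Sigma</code>, since the product's <math>(i,j)</math> entry reduces to <math>(\Sigma^{-1})_{ij}</math> after the square-root factors cancel.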