Covariance matrix
==Definition==
Throughout this article, boldfaced unsubscripted <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> are used to refer to random vectors, and Roman subscripted <math>X_i</math> and <math>Y_i</math> are used to refer to scalar random variables.

If the entries in the [[column vector]]
<math display="block">\mathbf{X} = (X_1, X_2, \dots, X_n)^\mathsf{T}</math>
are [[random variable]]s, each with finite [[variance]] and [[expected value]], then the covariance matrix <math>\operatorname{K}_{\mathbf{X}\mathbf{X}}</math> is the matrix whose <math>(i,j)</math> entry is the [[covariance]]<ref name=KunIlPark>{{cite book | last=Park | first=Kun Il | title=Fundamentals of Probability and Stochastic Processes with Applications to Communications | publisher=Springer | year=2018 | isbn=978-3-319-68074-3}}</ref>{{rp|p=177}}
<math display="block">\operatorname{K}_{X_i X_j} = \operatorname{cov}[X_i, X_j] = \operatorname{E}[(X_i - \operatorname{E}[X_i])(X_j - \operatorname{E}[X_j])]</math>
where the operator <math>\operatorname{E}</math> denotes the expected value (mean) of its argument.

===Conflicting nomenclatures and notations===
Nomenclatures differ. Some statisticians, following the probabilist [[William Feller]] in his two-volume book ''An Introduction to Probability Theory and Its Applications'',<ref name="Feller1971">{{cite book | author=William Feller | title=An Introduction to Probability Theory and Its Applications | url=https://books.google.com/books?id=K7kdAQAAMAAJ | access-date=10 August 2012 | year=1971 | publisher=Wiley | isbn=978-0-471-25709-7}}</ref> call the matrix <math>\operatorname{K}_{\mathbf{X}\mathbf{X}}</math> the '''variance''' of the random vector <math>\mathbf{X}</math>, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the '''covariance matrix''', because it is the matrix of covariances between the scalar components of the vector <math>\mathbf{X}</math>.
<math display="block">\operatorname{var}(\mathbf{X}) = \operatorname{cov}(\mathbf{X}, \mathbf{X}) = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^\mathsf{T}\right].</math>

Both forms are quite standard, and there is no ambiguity between them. The matrix <math>\operatorname{K}_{\mathbf{X}\mathbf{X}}</math> is also often called the '''variance-covariance matrix''', since the diagonal terms are in fact variances.

By comparison, the notation for the [[cross-covariance matrix]] ''between'' two vectors is
<math display="block">\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^\mathsf{T}\right].</math>
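For instance, writing the general <math>(i,j)</math> entry out for a two-dimensional random vector <math>\mathbf{X} = (X_1, X_2)^\mathsf{T}</math> gives the explicit form
<math display="block">\operatorname{K}_{\mathbf{X}\mathbf{X}} = \begin{pmatrix} \operatorname{var}[X_1] & \operatorname{cov}[X_1, X_2] \\ \operatorname{cov}[X_2, X_1] & \operatorname{var}[X_2] \end{pmatrix},</math>
in which the diagonal entries are the variances of the individual components and the off-diagonal entries are their pairwise covariances; since <math>\operatorname{cov}[X_1, X_2] = \operatorname{cov}[X_2, X_1]</math>, the matrix is symmetric.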