{{Short description|Random variable with multiple component dimensions}}
{{Probability fundamentals}}
{{broader|Multivariate statistics}}
In [[probability theory|probability]] and [[statistics]], a '''multivariate random variable''' or '''random vector''' is a list or [[vector (mathematics)|vector]] of mathematical [[Variable (mathematics)|variable]]s whose values are unknown, either because the values have not yet occurred or because there is imperfect knowledge of them. The individual variables in a random vector are grouped together because they are all part of a single mathematical system — often they represent different properties of an individual [[statistical unit]]. For example, while a given person has a specific age, height and weight, the representation of these features of ''an unspecified person'' from within a group would be a random vector. Normally each element of a random vector is a [[real number]].

Random vectors are often used as the underlying implementation of various types of aggregate [[random variable]]s, e.g. a [[random matrix]], [[random tree]], [[random sequence]], [[stochastic process]], etc.

Formally, a multivariate random variable is a [[column vector]] <math> \mathbf{X} = (X_1,\dots,X_n)^\mathsf{T} </math> (or its [[transpose]], which is a [[row vector]]) whose components are [[random variable]]s on the [[probability space]] <math>(\Omega, \mathcal{F}, P)</math>, where <math>\Omega</math> is the [[sample space]], <math>\mathcal{F}</math> is the [[sigma-algebra]] (the collection of all events), and <math>P</math> is the [[probability measure]] (a function returning each event's [[probability]]).

==Probability distribution==
{{main|Multivariate probability distribution}}
Every random vector gives rise to a probability measure on <math>\mathbb{R}^n</math> with the [[Borel algebra]] as the underlying sigma-algebra. This measure is also known as the [[joint probability distribution]], the joint distribution, or the multivariate distribution of the random vector.

The [[Probability distribution|distributions]] of each of the component random variables <math>X_i</math> are called [[marginal distribution]]s. The [[conditional probability distribution]] of <math>X_i</math> given <math>X_j</math> is the probability distribution of <math>X_i</math> when <math>X_j</math> is known to be a particular value.

The '''cumulative distribution function''' <math>F_{\mathbf{X}} : \R^n \to [0,1]</math> of a random vector <math>\mathbf{X}=(X_1,\dots,X_n)^\mathsf{T} </math> is defined as<ref name=Gallager>{{cite book |first=Robert G. |last=Gallager|year=2013 |title=Stochastic Processes Theory for Applications |publisher=Cambridge University Press |isbn=978-1-107-03975-9}}</ref>{{rp|p.15}}

{{Equation box 1 |indent = |title= |equation = {{NumBlk||<math>F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \leq x_1,\ldots,X_n \leq x_n)</math>|{{EquationRef|Eq.1}}}} |cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA}}

where <math>\mathbf{x} = (x_1, \dots, x_n)^\mathsf{T}</math>.

==Operations on random vectors==
Random vectors can be subjected to the same kinds of [[Euclidean vector#Basic properties|algebraic operations]] as can non-random vectors: addition, subtraction, multiplication by a [[Scalar (mathematics)|scalar]], and the taking of [[Dot product|inner products]].
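As an informal numerical sketch (not part of the formal development), the joint cumulative distribution function of Eq.1 can be estimated from simulated samples as the fraction whose components all lie below the given thresholds. The bivariate normal distribution and all parameter values below are assumed purely for illustration, and the final lines illustrate the inner product operation mentioned above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative random vector X = (X1, X2)^T: a bivariate normal with
# assumed (made-up) mean vector and covariance matrix.
mean = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])
samples = rng.multivariate_normal(mean, cov, size=200_000)  # shape (N, 2)

# Eq. 1: F_X(x) = P(X1 <= x1, ..., Xn <= xn), estimated as the fraction of
# samples whose components are all below the thresholds in x.
def empirical_cdf(samples, x):
    return np.mean(np.all(samples <= x, axis=1))

print(empirical_cdf(samples, np.array([0.0, 1.0])))

# Random vectors admit the usual vector operations; e.g. the inner product
# of X with a fixed vector a is a scalar random variable.
a = np.array([2.0, -1.0])
inner = samples @ a                 # one realization of a^T X per sample
print(inner.mean())                 # approx. a^T E[X] = 2*0 + (-1)*1 = -1
</syntaxhighlight>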
===Affine transformations===
Similarly, a new random vector <math>\mathbf{Y}</math> can be defined by applying an [[affine transformation]] <math>g\colon \mathbb{R}^n \to \mathbb{R}^n</math> to a random vector <math>\mathbf{X}</math>:

:<math>\mathbf{Y}=\mathbf{A}\mathbf{X}+b</math>,

where <math>\mathbf{A}</math> is an <math>n \times n</math> matrix and <math>b</math> is an <math>n \times 1</math> column vector.

If <math>\mathbf{A}</math> is an invertible matrix and <math>\textstyle\mathbf{X}</math> has a probability density function <math>f_{\mathbf{X}}</math>, then the probability density of <math>\mathbf{Y}</math> is

:<math>f_{\mathbf{Y}}(y)=\frac{f_{\mathbf{X}}(\mathbf{A}^{-1}(y-b))}{|\det\mathbf{A}|}</math>.

===Invertible mappings===
More generally we can study invertible mappings of random vectors.<ref name=Lapidoth>{{cite book |last=Taboga |first=Marco |title=Lectures on Probability Theory and Mathematical Statistics |publisher= CreateSpace Independent Publishing Platform |year=2017 |isbn=978-1981369195 }}</ref>{{rp|p.284–285}}

Let <math>g</math> be a one-to-one mapping from an open subset <math>\mathcal{D}</math> of <math>\mathbb{R}^n</math> onto a subset <math>\mathcal{R}</math> of <math>\mathbb{R}^n</math>, let <math>g</math> have continuous partial derivatives in <math>\mathcal{D}</math>, and let the [[Jacobian matrix and determinant|Jacobian determinant]] <math>\det\left (\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right )</math> of <math>g</math> be nonzero at every point of <math>\mathcal{D}</math>. Assume that the real random vector <math>\mathbf{X}</math> has a probability density function <math>f_{\mathbf{X}}(\mathbf{x})</math> and satisfies <math> P(\mathbf{X} \in \mathcal{D}) = 1</math>. Then the random vector <math>\mathbf{Y}=g(\mathbf{X})</math> has probability density

:<math>\left. f_{\mathbf{Y}}(\mathbf{y})=\frac{f_{\mathbf{X}}(\mathbf{x})}{\left |\det\left (\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right )\right |} \right |_{\mathbf{x}=g^{-1}(\mathbf{y})} \mathbf{1}(\mathbf{y} \in R_\mathbf{Y})</math>

where <math>\mathbf{1}</math> denotes the [[indicator function]] and the set <math>R_\mathbf{Y} = \{ \mathbf{y} = g(\mathbf{x}): f_{\mathbf{X}}(\mathbf{x}) > 0 \} \subseteq \mathcal{R} </math> denotes the support of <math>\mathbf{Y}</math>.

==Expected value==
The [[expected value]] or mean of a random vector <math>\mathbf{X}</math> is a fixed vector <math>\operatorname{E}[\mathbf{X}]</math> whose elements are the expected values of the respective random variables.<ref name=Gubner>{{cite book |first=John A. |last=Gubner |year=2006 |title=Probability and Random Processes for Electrical and Computer Engineers |publisher=Cambridge University Press |isbn=978-0-521-86470-1}}</ref>{{rp|p.333}}

{{Equation box 1 |indent = |title= |equation = {{NumBlk||<math>\operatorname{E}[\mathbf{X}] = (\operatorname{E}[X_1],...,\operatorname{E}[X_n])^{\mathrm T} </math>|{{EquationRef|Eq.2}}}} |cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA}}
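A brief numerical sketch of Eq.2 and of the affine transformation <math>\mathbf{Y}=\mathbf{A}\mathbf{X}+b</math> introduced above; the distribution, the matrix <math>\mathbf{A}</math> and the vector <math>b</math> are assumed values chosen only for illustration. The sample mean of <math>\mathbf{Y}</math> should be close to <math>\mathbf{A}\operatorname{E}[\mathbf{X}]+b</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Assumed (illustrative) distribution for the random vector X.
mean_x = np.array([1.0, -2.0, 0.5])
cov_x = np.diag([1.0, 4.0, 0.25])
X = rng.multivariate_normal(mean_x, cov_x, size=100_000)   # rows are samples

# Affine transformation Y = A X + b with assumed A and b.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, -1.0],
              [1.0, 1.0, 1.0]])
b = np.array([0.5, -0.5, 2.0])
Y = X @ A.T + b

# Eq. 2: the mean vector is the vector of componentwise expectations.
print(X.mean(axis=0))           # estimate of E[X], close to mean_x
print(Y.mean(axis=0))           # estimate of E[Y]
print(A @ mean_x + b)           # exact A E[X] + b, which E[Y] should match
</syntaxhighlight>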
==Covariance and cross-covariance==
===Definitions===
The '''[[covariance matrix]]''' (also called '''second central moment''' or variance-covariance matrix) of an <math>n \times 1</math> random vector is an <math>n \times n</math> [[Matrix (mathematics)|matrix]] whose (''i,j'')<sup>th</sup> element is the [[covariance]] between the ''i''<sup> th</sup> and the ''j''<sup> th</sup> random variables. The covariance matrix is the expected value, element by element, of the <math>n \times n</math> matrix [[matrix multiplication|computed as]] <math>[\mathbf{X}-\operatorname{E}[\mathbf{X}]] [\mathbf{X}-\operatorname{E}[\mathbf{X}]]^T</math>, where the superscript T refers to the transpose of the indicated vector:<ref name=Lapidoth/>{{rp|p. 464}}<ref name=Gubner/>{{rp|p.335}}

{{Equation box 1 |indent = |title= |equation = {{NumBlk||<math>\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{Var}[\mathbf{X}]=\operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{T}] = \operatorname{E}[\mathbf{X} \mathbf{X}^T] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^T</math>|{{EquationRef|Eq.3}}}} |cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA}}

By extension, the '''[[cross-covariance matrix]]''' between two random vectors <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> (<math>\mathbf{X}</math> having <math>n</math> elements and <math>\mathbf{Y}</math> having <math>p</math> elements) is the <math>n \times p</math> matrix<ref name=Gubner/>{{rp|p.336}}

{{Equation box 1 |indent = |title= |equation = {{NumBlk||<math>\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{Cov}[\mathbf{X},\mathbf{Y}]=\operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^{T}] = \operatorname{E}[\mathbf{X} \mathbf{Y}^T] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T</math>|{{EquationRef|Eq.4}}}} |cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA}}

where the expectation is again taken element by element. Here the (''i,j'')<sup>th</sup> element is the covariance between the ''i''<sup> th</sup> element of <math>\mathbf{X}</math> and the ''j''<sup> th</sup> element of <math>\mathbf{Y}</math>.

===Properties===
The covariance matrix is a [[symmetric matrix]], i.e.<ref name=Lapidoth/>{{rp|p. 466}}

:<math>\operatorname{K}_{\mathbf{X}\mathbf{X}}^T = \operatorname{K}_{\mathbf{X}\mathbf{X}}</math>.

The covariance matrix is a [[positive semidefinite matrix]], i.e.<ref name=Lapidoth/>{{rp|p. 465}}

:<math>\mathbf{a}^T \operatorname{K}_{\mathbf{X}\mathbf{X}} \mathbf{a} \ge 0 \quad \text{for all } \mathbf{a} \in \mathbb{R}^n</math>.

The cross-covariance matrix <math>\operatorname{Cov}[\mathbf{Y},\mathbf{X}]</math> is simply the transpose of the matrix <math>\operatorname{Cov}[\mathbf{X},\mathbf{Y}]</math>, i.e.

:<math>\operatorname{K}_{\mathbf{Y}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}}^T</math>.

===Uncorrelatedness===
Two random vectors <math>\mathbf{X}=(X_1,...,X_m)^T </math> and <math>\mathbf{Y}=(Y_1,...,Y_n)^T </math> are called '''uncorrelated''' if

:<math>\operatorname{E}[\mathbf{X} \mathbf{Y}^T] = \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T</math>.

They are uncorrelated if and only if their cross-covariance matrix <math>\operatorname{K}_{\mathbf{X}\mathbf{Y}}</math> is zero.<ref name=Gubner/>{{rp|p.337}}
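The following sketch checks the two equivalent forms of the covariance matrix in Eq.3 against each other and against the covariance matrix used to generate the samples; the trivariate normal distribution and its parameters are assumptions made only for this example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

mean = np.array([0.0, 1.0, -1.0])
cov = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, -0.4],
                [0.0, -0.4, 0.5]])
X = rng.multivariate_normal(mean, cov, size=200_000)   # rows are samples of X

# Eq. 3: K_XX = E[(X - E[X])(X - E[X])^T] = E[X X^T] - E[X] E[X]^T
m = X.mean(axis=0)
K_direct = (X - m).T @ (X - m) / X.shape[0]
K_moment = X.T @ X / X.shape[0] - np.outer(m, m)

print(np.allclose(K_direct, K_moment))        # the two forms of Eq. 3 agree
print(np.allclose(K_direct, cov, atol=0.02))  # close to the generating covariance
</syntaxhighlight>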
==Correlation and cross-correlation==
===Definitions===
The '''[[Autocorrelation matrix|correlation matrix]]''' (also called '''second moment''') of an <math>n \times 1</math> random vector is an <math>n \times n</math> matrix whose (''i,j'')<sup>th</sup> element is the correlation between the ''i''<sup> th</sup> and the ''j''<sup> th</sup> random variables. The correlation matrix is the expected value, element by element, of the <math>n \times n</math> matrix computed as <math>\mathbf{X} \mathbf{X}^T</math>, where the superscript T refers to the transpose of the indicated vector:<ref name=Papoulis>{{cite book |last=Papoulis |first=Athanasios |title=Probability, Random Variables and Stochastic Processes |publisher=McGraw-Hill |edition=Third |year=1991 |isbn=0-07-048477-5 }}</ref>{{rp|p.190}}<ref name=Gubner/>{{rp|p.334}}

{{Equation box 1 |indent = |title= |equation = {{NumBlk||<math>\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[\mathbf{X} \mathbf{X}^{\mathrm T}]</math>|{{EquationRef|Eq.5}}}} |cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA}}

By extension, the '''cross-correlation matrix''' between two random vectors <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> (<math>\mathbf{X}</math> having <math>n</math> elements and <math>\mathbf{Y}</math> having <math>p</math> elements) is the <math>n \times p</math> matrix

{{Equation box 1 |indent = |title= |equation = {{NumBlk||<math>\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}[\mathbf{X} \mathbf{Y}^T]</math>|{{EquationRef|Eq.6}}}} |cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA}}

===Properties===
The correlation matrix is related to the covariance matrix by

:<math>\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^T</math>.

Similarly for the cross-correlation matrix and the cross-covariance matrix:

:<math>\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T</math>

==Orthogonality==
Two random vectors of the same size <math>\mathbf{X}=(X_1,...,X_n)^T </math> and <math>\mathbf{Y}=(Y_1,...,Y_n)^T </math> are called '''orthogonal''' if

:<math>\operatorname{E}[\mathbf{X}^T \mathbf{Y}] = 0</math>.

==Independence==
{{main|Independence (probability theory)}}
Two random vectors <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> are called '''independent''' if for all <math>\mathbf{x}</math> and <math>\mathbf{y}</math>

:<math>F_{\mathbf{X,Y}}(\mathbf{x,y}) = F_{\mathbf{X}}(\mathbf{x}) \cdot F_{\mathbf{Y}}(\mathbf{y})</math>

where <math>F_{\mathbf{X}}(\mathbf{x})</math> and <math>F_{\mathbf{Y}}(\mathbf{y})</math> denote the cumulative distribution functions of <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> and <math>F_{\mathbf{X,Y}}(\mathbf{x,y})</math> denotes their joint cumulative distribution function. Independence of <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> is often denoted by <math>\mathbf{X} \perp\!\!\!\perp \mathbf{Y}</math>. Written component-wise, <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> are called independent if for all <math>x_1,\ldots,x_m,y_1,\ldots,y_n</math>

:<math>F_{X_1,\ldots,X_m,Y_1,\ldots,Y_n}(x_1,\ldots,x_m,y_1,\ldots,y_n) = F_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \cdot F_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n)</math>.
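A small numerical sketch of the relation <math>\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^T</math> stated in the properties above, using an assumed bivariate normal distribution chosen only for illustration:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

mean = np.array([2.0, -1.0])
cov = np.array([[1.0, 0.6],
                [0.6, 3.0]])
X = rng.multivariate_normal(mean, cov, size=200_000)

m = X.mean(axis=0)
R = X.T @ X / X.shape[0]                       # second moment E[X X^T] (Eq. 5)
K = (X - m).T @ (X - m) / X.shape[0]           # covariance matrix (Eq. 3)

# Property above: R_XX = K_XX + E[X] E[X]^T
print(np.allclose(R, K + np.outer(m, m)))
</syntaxhighlight>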
==Characteristic function==
The [[Characteristic function (probability theory)|characteristic function]] of a random vector <math> \mathbf{X} </math> with <math> n </math> components is a function <math>\mathbb{R}^n \to \mathbb{C}</math> that maps every vector <math>\mathbf{\omega} = (\omega_1,\ldots,\omega_n)^T</math> to a complex number. It is defined by<ref name=Lapidoth/>{{rp|p. 468}}

:<math> \varphi_{\mathbf{X}}(\mathbf{\omega}) = \operatorname{E} \left [ e^{i(\mathbf{\omega}^T \mathbf{X})} \right ] = \operatorname{E} \left [ e^{i( \omega_1 X_1 + \ldots + \omega_n X_n)} \right ]</math>.

==Further properties==
===Expectation of a quadratic form===
One can take the expectation of a [[Quadratic form (statistics)|quadratic form]] in the random vector <math>\mathbf{X}</math> as follows:<ref name=Kendrick>{{cite book |last=Kendrick |first=David |title=Stochastic Control for Economic Models |publisher=McGraw-Hill |year=1981 |isbn=0-07-033962-7 }}</ref>{{rp|p.170–171}}

:<math>\operatorname{E}[\mathbf{X}^{T}A\mathbf{X}] = \operatorname{E}[\mathbf{X}]^{T}A\operatorname{E}[\mathbf{X}] + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}}),</math>

where <math>K_{\mathbf{X}\mathbf{X}}</math> is the covariance matrix of <math>\mathbf{X}</math> and <math>\operatorname{tr}</math> refers to the [[Trace (linear algebra)|trace]] of a matrix — that is, to the sum of the elements on its main diagonal (from upper left to lower right). Since the quadratic form is a scalar, so is its expectation.

'''Proof''': Let <math>\mathbf{z}</math> be an <math>m \times 1</math> random vector with <math>\operatorname{E}[\mathbf{z}] = \mu</math> and <math>\operatorname{Cov}[\mathbf{z}]= V</math> and let <math>A</math> be an <math>m \times m</math> non-stochastic matrix. Then based on the formula for the covariance, if we denote <math>\mathbf{z}^T = \mathbf{X}</math> and <math>\mathbf{z}^T A^T = \mathbf{Y}</math>, we see that:

:<math>\operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}[\mathbf{X}\mathbf{Y}^T]-\operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T </math>

Hence

:<math>\begin{align} \operatorname{E}[XY^T] &= \operatorname{Cov}[X,Y]+\operatorname{E}[X]\operatorname{E}[Y]^T \\ \operatorname{E}[z^T Az] &= \operatorname{Cov}[z^T,z^T A^T] + \operatorname{E}[z^T]\operatorname{E}[z^T A^T ]^T \\ &=\operatorname{Cov}[z^T , z^T A^T] + \mu^T (\mu^T A^T)^T \\ &=\operatorname{Cov}[z^T , z^T A^T] + \mu^T A \mu , \end{align}</math>

which leaves us to show that

:<math>\operatorname{Cov}[z^T , z^T A^T ]=\operatorname{tr}(AV).</math>

This is true based on the fact that one can [[Trace (linear algebra)#Properties|cyclically permute matrices when taking a trace]] without changing the end result (e.g.: <math>\operatorname{tr}(AB) = \operatorname{tr}(BA)</math>). We see [[Covariance#Definition|that]]

:<math>\begin{align} \operatorname{Cov}[z^T,z^T A^T] &= \operatorname{E} \left[\left(z^T - E(z^T) \right)\left(z^T A^T - E\left(z^T A^T \right) \right)^T \right] \\ &= \operatorname{E} \left[ (z^T - \mu^T) (z^T A^T - \mu^T A^T )^T \right]\\ &= \operatorname{E} \left[ (z - \mu)^T (Az - A\mu) \right]. \end{align}</math>

And since

:<math>\left( {z - \mu } \right)^T \left( {Az - A\mu } \right)</math>

is a [[scalar (mathematics)|scalar]], then

:<math>(z - \mu)^T ( Az - A\mu)= \operatorname{tr}\left( {(z - \mu )^T (Az - A\mu )} \right) = \operatorname{tr} \left((z - \mu )^T A(z - \mu ) \right)</math>

trivially. Using the cyclic permutation property of the trace, we get:

:<math>\operatorname{tr}\left( {(z - \mu )^T A(z - \mu )} \right) = \operatorname{tr}\left( {A(z - \mu )(z - \mu )^T} \right),</math>

and by plugging this into the original formula we get:

:<math>\begin{align} \operatorname{Cov} \left[ {z^T,z^T A^T} \right] &= E\left[ {\left( {z - \mu } \right)^T (Az - A\mu)} \right] \\ &= E \left[ \operatorname{tr}\left( A(z - \mu )(z - \mu )^T \right) \right] \\ &= \operatorname{tr} \left( {A \cdot \operatorname{E} \left((z - \mu )(z - \mu )^T \right) } \right) \\ &= \operatorname{tr} (A V). \end{align}</math>
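The identity for the expectation of a quadratic form can also be checked numerically. The sketch below compares a Monte Carlo estimate of <math>\operatorname{E}[\mathbf{X}^{T}A\mathbf{X}]</math> with <math>\operatorname{E}[\mathbf{X}]^{T}A\operatorname{E}[\mathbf{X}] + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}})</math>; the distribution and the matrix <math>A</math> are assumed values used only for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

mu = np.array([1.0, 0.0, -2.0])
K = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.5],
              [0.0, 0.5, 1.5]])
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0]])

X = rng.multivariate_normal(mu, K, size=500_000)

# Monte Carlo estimate of E[X^T A X] (one quadratic form per sample).
quad = np.einsum('ni,ij,nj->n', X, A, X)
mc_estimate = quad.mean()

# Closed form: E[X]^T A E[X] + tr(A K_XX)
closed_form = mu @ A @ mu + np.trace(A @ K)

print(mc_estimate, closed_form)   # should agree up to sampling error
</syntaxhighlight>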
===Expectation of the product of two different quadratic forms===
One can take the expectation of the product of two different quadratic forms in a zero-mean [[Joint normality|Gaussian]] random vector <math>\mathbf{X}</math> as follows:<ref name=Kendrick/>{{rp|pp. 162–176}}

:<math>\operatorname{E}\left[(\mathbf{X}^{T}A\mathbf{X})(\mathbf{X}^{T}B\mathbf{X})\right] = 2\operatorname{tr}(A K_{\mathbf{X}\mathbf{X}} B K_{\mathbf{X}\mathbf{X}}) + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}})\operatorname{tr}(B K_{\mathbf{X}\mathbf{X}})</math>

where again <math>K_{\mathbf{X}\mathbf{X}}</math> is the covariance matrix of <math>\mathbf{X}</math>. Again, since both quadratic forms are scalars and hence their product is a scalar, the expectation of their product is also a scalar.

==Applications==
===Portfolio theory===
In [[portfolio theory]] in [[finance]], the objective is often to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector is the vector <math>\mathbf{r}</math> of random returns on the individual assets, and the portfolio return ''p'' (a random scalar) is the inner product of the vector of random returns with a vector ''w'' of portfolio weights — the fractions of the portfolio placed in the respective assets. Since ''p'' = ''w''<sup>T</sup><math>\mathbf{r}</math>, the expected value of the portfolio return is ''w''<sup>T</sup>E(<math>\mathbf{r}</math>) and the variance of the portfolio return can be shown to be ''w''<sup>T</sup>C''w'', where C is the covariance matrix of <math>\mathbf{r}</math>.
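A minimal sketch of the portfolio computation described above, with made-up expected returns, covariance matrix and weights; it evaluates the portfolio mean ''w''<sup>T</sup>E(<math>\mathbf{r}</math>) and variance ''w''<sup>T</sup>C''w''.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative (made-up) data for three risky assets.
expected_returns = np.array([0.05, 0.08, 0.12])      # E[r]
C = np.array([[0.010, 0.002, 0.001],                 # covariance matrix of r
              [0.002, 0.030, 0.004],
              [0.001, 0.004, 0.060]])
w = np.array([0.5, 0.3, 0.2])                        # portfolio weights, summing to 1

portfolio_mean = w @ expected_returns                # w^T E[r]
portfolio_variance = w @ C @ w                       # w^T C w

print(portfolio_mean, portfolio_variance)
</syntaxhighlight>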
===Regression theory===
In [[linear regression]] theory, we have data consisting of ''n'' observations of a dependent variable ''y'' and ''n'' observations of each of ''k'' independent variables ''x<sub>j</sub>''. The observations on the dependent variable are stacked into a column vector ''y''; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a [[design matrix]] ''X'' (not denoting a random vector in this context) of observations on the independent variables. Then the following regression equation is postulated as a description of the process that generated the data:

:<math>y = X \beta + e,</math>

where β is a postulated fixed but unknown vector of ''k'' response coefficients, and ''e'' is an unknown random vector reflecting random influences on the dependent variable. By some chosen technique such as [[ordinary least squares]], a vector <math>\hat \beta</math> is chosen as an estimate of β, and the estimate of the vector ''e'', denoted <math>\hat e</math>, is computed as

:<math>\hat e = y - X \hat \beta.</math>

Then the statistician must analyze the properties of <math>\hat \beta</math> and <math>\hat e</math>, which are viewed as random vectors since a randomly different selection of ''n'' cases to observe would have resulted in different values for them.

===Vector time series===
The evolution of a ''k''×1 random vector <math>\mathbf{X}</math> through time can be modelled as a [[vector autoregression]] (VAR) as follows:

:<math>\mathbf{X}_t = c + A_1 \mathbf{X}_{t-1} + A_2 \mathbf{X}_{t-2} + \cdots + A_p \mathbf{X}_{t-p} + \mathbf{e}_t, \, </math>

where the ''i''-periods-back vector observation <math>\mathbf{X}_{t-i}</math> is called the ''i''-th lag of <math>\mathbf{X}</math>, ''c'' is a ''k'' × 1 vector of constants ([[Y-intercept|intercepts]]), ''A<sub>i</sub>'' is a time-invariant ''k'' × ''k'' [[Matrix (mathematics)|matrix]] and <math>\mathbf{e}_t</math> is a ''k'' × 1 random vector of [[errors and residuals in statistics|error]] terms.
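A minimal simulation sketch of the special case ''p'' = 1 of the vector autoregression above, with all parameter values assumed purely for illustration; for a stable VAR(1) the sample path fluctuates around the long-run mean <math>(I - A_1)^{-1}c</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)

k, T = 2, 5000                      # dimension of X_t and number of periods
c = np.array([0.1, -0.2])           # intercept vector
A1 = np.array([[0.5, 0.1],          # lag-1 coefficient matrix (stable)
               [0.0, 0.3]])
error_cov = 0.01 * np.eye(k)        # covariance of the error term e_t

X = np.zeros((T, k))
for t in range(1, T):
    e_t = rng.multivariate_normal(np.zeros(k), error_cov)
    X[t] = c + A1 @ X[t - 1] + e_t

# Long-run mean of a stable VAR(1): (I - A1)^{-1} c
print(X[T // 2:].mean(axis=0))              # sample mean over later periods
print(np.linalg.solve(np.eye(k) - A1, c))   # theoretical long-run mean
</syntaxhighlight>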
==References==
{{reflist}}

==Further reading==
* {{cite book |first=Henry |last=Stark |first2=John W. |last2=Woods |title=Probability, Statistics, and Random Processes for Engineers |publisher=Pearson |edition=Fourth |year=2012 |chapter=Random Vectors |pages=295–339 |isbn=978-0-13-231123-6 }}

[[Category:Multivariate statistics]]
[[Category:Algebra of random variables]]

[[de:Zufallsvariable#Mehrdimensionale Zufallsvariable]]
[[pl:Zmienna losowa#Uogólnienia]]