Multivariate random variable
==Applications==

===Portfolio theory===
In [[portfolio theory]] in [[finance]], an objective often is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector is the vector <math>\mathbf{r}</math> of random returns on the individual assets, and the portfolio return ''p'' (a random scalar) is the inner product of the vector of random returns with a vector ''w'' of portfolio weights, the fractions of the portfolio placed in the respective assets. Since ''p'' = ''w''<sup>T</sup><math>\mathbf{r}</math>, the expected value of the portfolio return is ''w''<sup>T</sup>E(<math>\mathbf{r}</math>) and the variance of the portfolio return can be shown to be ''w''<sup>T</sup>C''w'', where C is the covariance matrix of <math>\mathbf{r}</math>.

===Regression theory===
In [[linear regression]] theory, we have data on ''n'' observations on a dependent variable ''y'' and ''n'' observations on each of ''k'' independent variables ''x<sub>j</sub>''. The observations on the dependent variable are stacked into a column vector ''y''; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a [[design matrix]] ''X'' (not denoting a random vector in this context) of observations on the independent variables. Then the following regression equation is postulated as a description of the process that generated the data:

:<math>y = X \beta + e,</math>

where β is a postulated fixed but unknown vector of ''k'' response coefficients, and ''e'' is an unknown random vector reflecting random influences on the dependent variable.
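As a sketch of this postulated data-generating process (in Python with NumPy; the coefficient values, noise scale, and dimensions are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 100, 3                       # n observations, k independent variables
X = rng.normal(size=(n, k))         # design matrix of observations on the regressors
beta = np.array([1.5, -2.0, 0.5])   # "fixed but unknown" coefficient vector (illustrative)
e = rng.normal(scale=0.1, size=n)   # unknown random error vector
y = X @ beta + e                    # the postulated model: y = X beta + e
```

In practice only ''y'' and ''X'' would be observed; ''beta'' and ''e'' are shown here only because the simulation constructs them.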
By some chosen technique such as [[ordinary least squares]], a vector <math>\hat \beta</math> is chosen as an estimate of β, and the estimate of the vector ''e'', denoted <math>\hat e</math>, is computed as

:<math>\hat e = y - X \hat \beta.</math>

Then the statistician must analyze the properties of <math>\hat \beta</math> and <math>\hat e</math>, which are viewed as random vectors, since a randomly different selection of ''n'' cases to observe would have resulted in different values for them.

===Vector time series===
The evolution of a ''k''×1 random vector <math>\mathbf{X}</math> through time can be modelled as a [[vector autoregression]] (VAR) as follows:

:<math>\mathbf{X}_t = c + A_1 \mathbf{X}_{t-1} + A_2 \mathbf{X}_{t-2} + \cdots + A_p \mathbf{X}_{t-p} + \mathbf{e}_t, \, </math>

where the ''i''-periods-back vector observation <math>\mathbf{X}_{t-i}</math> is called the ''i''-th lag of <math>\mathbf{X}</math>, ''c'' is a ''k'' × 1 vector of constants ([[Y-intercept|intercepts]]), ''A<sub>i</sub>'' is a time-invariant ''k'' × ''k'' [[Matrix (mathematics)|matrix]] and <math>\mathbf{e}_t</math> is a ''k'' × 1 random vector of [[errors and residuals in statistics|error]] terms.
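A first-order case (''p'' = 1) of this recursion can be simulated along the following lines (a Python/NumPy sketch; the intercept vector, coefficient matrix, and noise scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

k, T = 2, 500                       # dimension of X_t and number of time periods
c = np.array([0.1, -0.2])           # k x 1 vector of intercepts
A1 = np.array([[0.5, 0.1],          # k x k coefficient matrix on the first lag;
               [0.0, 0.4]])         # eigenvalues inside the unit circle, so the VAR is stable

X = np.zeros((T, k))                # X[t] holds the random vector X_t
for t in range(1, T):
    e_t = rng.normal(scale=0.05, size=k)   # error vector e_t
    X[t] = c + A1 @ X[t - 1] + e_t         # X_t = c + A_1 X_{t-1} + e_t
```

Because the eigenvalues of ''A''<sub>1</sub> here lie inside the unit circle, the simulated series fluctuates around the stationary mean (''I'' − ''A''<sub>1</sub>)<sup>−1</sup>''c'' rather than diverging.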