Propagation of uncertainty
==Linear combinations==
Let <math>\{f_k(x_1, x_2, \dots, x_n)\}</math> be a set of ''m'' functions, which are linear combinations of the <math>n</math> variables <math>x_1, x_2, \dots, x_n</math> with combination coefficients <math>A_{k1}, A_{k2}, \dots, A_{kn}</math>, <math>(k = 1, \dots, m)</math>:
<math display="block">f_k = \sum_{i=1}^n A_{ki} x_i,</math>
or in matrix notation,
<math display="block">\mathbf{f} = \mathbf{A} \mathbf{x}.</math>

Also let the [[variance–covariance matrix]] of {{math|1=''x'' = (''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>)}} be denoted by <math>\boldsymbol\Sigma^x</math> and let its mean value be denoted by <math>\boldsymbol{\mu}</math>:
<math display="block">\begin{align} \boldsymbol\Sigma^x = \operatorname{E}[(\mathbf{x}-\boldsymbol\mu)\otimes (\mathbf{x}-\boldsymbol\mu)] &= \begin{pmatrix} \sigma^2_1 & \sigma_{12} & \sigma_{13} & \cdots \\ \sigma_{21} & \sigma^2_2 & \sigma_{23} & \cdots\\ \sigma_{31} & \sigma_{32} & \sigma^2_3 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \\[1ex] &= \begin{pmatrix} {\Sigma}^x_{11} & {\Sigma}^x_{12} & {\Sigma}^x_{13} & \cdots \\ {\Sigma}^x_{21} & {\Sigma}^x_{22} & {\Sigma}^x_{23} & \cdots \\ {\Sigma}^x_{31} & {\Sigma}^x_{32} & {\Sigma}^x_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \end{align}</math>
Here <math>\otimes</math> denotes the [[outer product]].

Then, the variance–covariance matrix <math>\boldsymbol\Sigma^f</math> of ''f'' is given by
<math display="block">\begin{align} \boldsymbol\Sigma^f &= \operatorname{E}\left[(\mathbf{f} - \operatorname{E}[\mathbf{f}]) \otimes (\mathbf{f} - \operatorname{E}[\mathbf{f}])\right] = \operatorname{E}\left[\mathbf{A}(\mathbf{x}-\boldsymbol\mu) \otimes \mathbf{A}(\mathbf{x}-\boldsymbol\mu)\right] \\[1ex] &= \mathbf{A} \operatorname{E}\left[(\mathbf{x}-\boldsymbol\mu) \otimes (\mathbf{x}-\boldsymbol\mu)\right] \mathbf{A}^\mathrm{T} = \mathbf{A} \boldsymbol\Sigma^x \mathbf{A}^\mathrm{T}. \end{align}</math>

In component notation, the equation
<math display="block">\boldsymbol\Sigma^f = \mathbf{A} \boldsymbol\Sigma^x \mathbf{A}^\mathrm{T}</math>
reads
<math display="block">\Sigma^f_{ij} = \sum_{k=1}^n \sum_{l=1}^n A_{ik} {\Sigma}^x_{kl} A_{jl}.</math>
This is the most general expression for the propagation of error from one set of variables onto another. When the errors on ''x'' are uncorrelated, the general expression simplifies to
<math display="block">\Sigma^f_{ij} = \sum_{k=1}^n A_{ik} \Sigma^x_k A_{jk},</math>
where <math>\Sigma^x_k = \sigma^2_{x_k}</math> is the variance of the ''k''-th element of the ''x'' vector. Note that even though the errors on ''x'' may be uncorrelated, the errors on ''f'' are in general correlated; in other words, even if <math>\boldsymbol\Sigma^x</math> is a diagonal matrix, <math>\boldsymbol\Sigma^f</math> is in general a full matrix.

The general expressions for a scalar-valued function ''f'' are a little simpler (here '''a''' is a row vector):
<math display="block">f = \sum_{i=1}^n a_i x_i = \mathbf{a} \mathbf{x},</math>
<math display="block">\sigma^2_f = \sum_{i=1}^n \sum_{j=1}^n a_i \Sigma^x_{ij} a_j = \mathbf{a} \boldsymbol\Sigma^x \mathbf{a}^\mathrm{T}.</math>
Each covariance term <math>\sigma_{ij}</math> can be expressed in terms of the [[Pearson product-moment correlation coefficient|correlation coefficient]] <math>\rho_{ij}</math> by <math>\sigma_{ij} = \rho_{ij} \sigma_i \sigma_j</math>, so that an alternative expression for the variance of ''f'' is
<math display="block">\sigma^2_f = \sum_{i=1}^n a_i^2 \sigma^2_i + \sum_{i=1}^n \sum_{j \ne i} a_i a_j \rho_{ij} \sigma_i \sigma_j.</math>
In the case that the variables in ''x'' are uncorrelated, this simplifies further to
<math display="block">\sigma^2_f = \sum_{i=1}^n a_i^2 \sigma^2_i.</math>
In the simple case of identical coefficients and variances, we find
<math display="block">\sigma_f = \sqrt{n}\, |a| \sigma.</math>
For the arithmetic mean, <math>a = 1/n</math>, the result is the [[standard error of the mean]]:
<math display="block">\sigma_f = \frac{\sigma}{\sqrt{n}}.</math>
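The propagation rule <math>\boldsymbol\Sigma^f = \mathbf{A} \boldsymbol\Sigma^x \mathbf{A}^\mathrm{T}</math> and its standard-error-of-the-mean special case can be checked numerically. The following is a minimal pure-Python sketch (all variable names here are illustrative, not part of the article):

```python
# Sketch of linear error propagation, Sigma_f = A Sigma_x A^T,
# using plain lists-of-lists so no third-party library is needed.
from math import sqrt

def mat_mul(X, Y):
    """Ordinary matrix product of two lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def propagate(A, Sigma_x):
    """Covariance matrix of f = A x, given the covariance Sigma_x of x."""
    return mat_mul(mat_mul(A, Sigma_x), transpose(A))

# Two correlated inputs: sigma_1^2 = 1, sigma_2^2 = 4, sigma_12 = 1.
Sigma_x = [[1.0, 1.0],
           [1.0, 4.0]]

# Two linear combinations: f1 = x1 + x2, f2 = x1 - x2.
A = [[1.0, 1.0],
     [1.0, -1.0]]

Sigma_f = propagate(A, Sigma_x)
print(Sigma_f)  # [[7.0, -3.0], [-3.0, 3.0]] -- note the off-diagonal terms

# Special case: the arithmetic mean of n uncorrelated variables with
# equal variance sigma^2 reproduces the standard error sigma / sqrt(n).
n, sigma = 4, 2.0
a = [[1.0 / n] * n]  # row vector of identical coefficients a = 1/n
diag = [[sigma**2 if i == j else 0.0 for j in range(n)] for i in range(n)]
sigma_f = sqrt(propagate(a, diag)[0][0])
print(sigma_f)  # 1.0, i.e. sigma / sqrt(n)
```

The first printout illustrates the remark above: even with a diagonal <math>\boldsymbol\Sigma^x</math> the output covariance <math>\boldsymbol\Sigma^f</math> is generally a full matrix (here the off-diagonal term comes from the shared inputs of ''f''<sub>1</sub> and ''f''<sub>2</sub>).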