== Non-linear combinations ==
{{See also|Taylor expansions for the moments of functions of random variables}}

When ''f'' is a set of non-linear combinations of the variables ''x'', an [[interval propagation]] could be performed in order to compute intervals which contain all consistent values for the variables. In a probabilistic approach, the function ''f'' must usually be linearised by approximation to a first-order [[Taylor series]] expansion, though in some cases, exact formulae can be derived that do not depend on the expansion, as is the case for the exact variance of products.<ref name="Goodman1960">{{Cite journal | last = Goodman | first = Leo | author-link = Leo Goodman | title = On the Exact Variance of Products | journal = Journal of the American Statistical Association | year = 1960 | volume = 55 | issue = 292 | pages = 708–713 | doi = 10.2307/2281592 | jstor = 2281592 }}</ref>

The Taylor expansion would be:
<math display="block">f_k \approx f^0_k + \sum_i^n \frac{\partial f_k}{\partial x_i} x_i</math>
where <math>\partial f_k/\partial x_i</math> denotes the [[partial derivative]] of ''f<sub>k</sub>'' with respect to the ''i''-th variable, evaluated at the mean value of all components of vector ''x''. Or in [[matrix notation]],
<math display="block">\mathrm{f} \approx \mathrm{f}^0 + \mathrm{J} \mathrm{x}\,</math>
where J is the [[Jacobian matrix]]. Since f<sup>0</sup> is a constant, it does not contribute to the error on f. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients, ''A<sub>ki</sub>'' and ''A<sub>kj</sub>'', by the partial derivatives, <math>\frac{\partial f_k}{\partial x_i}</math> and <math>\frac{\partial f_k}{\partial x_j}</math>. In matrix notation,<ref>Ochoa, Benjamin; Belongie, Serge [http://vision.ucsd.edu/sites/default/files/ochoa06.pdf "Covariance Propagation for Guided Matching"] {{Webarchive|url=https://web.archive.org/web/20110720080130/http://vision.ucsd.edu/sites/default/files/ochoa06.pdf |date=2011-07-20 }}</ref>
<math display="block">\mathrm{\Sigma}^\mathrm{f} = \mathrm{J} \mathrm{\Sigma}^\mathrm{x} \mathrm{J}^\top.</math>

That is, the Jacobian of the function is used to transform the rows and columns of the variance-covariance matrix of the argument. Note this is equivalent to the matrix expression for the linear case with <math>\mathrm{J = A}</math>.
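As a minimal numerical sketch of the matrix rule above (not drawn from the cited sources; it assumes [[Python (programming language)|Python]] with [[NumPy]], and the function <code>f</code>, the mean vector and the covariance matrix are hypothetical placeholders), the Jacobian can be estimated by central finite differences and the input covariance transformed as <math>\mathrm{J} \mathrm{\Sigma}^\mathrm{x} \mathrm{J}^\top</math>:
<syntaxhighlight lang="python">
import numpy as np

def f(x):
    # Hypothetical non-linear function f: R^2 -> R^2, chosen only for illustration.
    return np.array([x[0] * x[1], np.log(1.0 + x[0])])

def jacobian(func, x0, eps=1e-6):
    # Estimate the Jacobian of func at x0 by central finite differences.
    x0 = np.asarray(x0, dtype=float)
    m = func(x0).size
    J = np.zeros((m, x0.size))
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = eps
        J[:, i] = (func(x0 + step) - func(x0 - step)) / (2.0 * eps)
    return J

x_mean = np.array([2.0, 3.0])     # mean values of the inputs (illustrative)
cov_x = np.array([[0.10, 0.02],   # variance-covariance matrix Sigma^x
                  [0.02, 0.20]])

J = jacobian(f, x_mean)
cov_f = J @ cov_x @ J.T           # first-order propagation: Sigma^f = J Sigma^x J^T
print(cov_f)
</syntaxhighlight>
An analytic Jacobian can be substituted for the finite-difference estimate whenever the partial derivatives are available in closed form.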
=== Simplification ===
Neglecting correlations or assuming independent variables yields a common formula among engineers and experimental scientists for calculating error propagation, the variance formula:<ref>{{cite journal |last=Ku |first=H. H. |date=October 1966 |title=Notes on the use of propagation of error formulas |url=http://nistdigitalarchives.contentdm.oclc.org/cdm/compoundobject/collection/p16009coll6/id/99848/rec/1 |journal=Journal of Research of the National Bureau of Standards |volume=70C |issue=4 |page=262 |doi=10.6028/jres.070c.025 |issn=0022-4316 |access-date=3 October 2012 |doi-access=free}}</ref>
<math display="block">s_f = \sqrt{ \left(\frac{\partial f}{\partial x}\right)^2 s_x^2 + \left(\frac{\partial f}{\partial y} \right)^2 s_y^2 + \left(\frac{\partial f}{\partial z} \right)^2 s_z^2 + \cdots}</math>
where <math>s_f</math> represents the standard deviation of the function <math>f</math>, <math>s_x</math> represents the standard deviation of <math>x</math>, <math>s_y</math> represents the standard deviation of <math>y</math>, and so forth.

This formula is based on the linear characteristics of the gradient of <math>f</math> and is therefore a good estimate of the standard deviation of <math>f</math> as long as <math>s_x, s_y, s_z,\ldots</math> are small enough. Specifically, the linear approximation of <math>f</math> has to be close to <math>f</math> inside a neighbourhood of radius <math>s_x, s_y, s_z,\ldots</math>.<ref>{{Cite book |last=Clifford |first=A. A. |title=Multivariate error analysis: a handbook of error propagation and calculation in many-parameter systems |publisher=John Wiley & Sons |year=1973 |isbn=978-0470160558}}{{page needed|date=October 2012}}</ref>

=== Example ===
Any non-linear differentiable function, <math>f(a,b)</math>, of two variables, <math>a</math> and <math>b</math>, can be expanded as
<math display="block">f \approx f^0 + \frac{\partial f}{\partial a}a + \frac{\partial f}{\partial b}b.</math>
If we take the variance on both sides and use the formula<ref>{{Cite web |last=Soch |first=Joram |date=2020-07-07 |title=Variance of the linear combination of two random variables |url=https://statproofbook.github.io/P/var-lincomb.html |access-date=2022-01-29 |website=The Book of Statistical Proofs |language=en}}</ref> for the variance of a linear combination of variables,
<math display="block">\operatorname{Var}(aX + bY) = a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) + 2ab \operatorname{Cov}(X, Y),</math>
then we obtain
<math display="block">\sigma^2_f \approx \left| \frac{\partial f}{\partial a}\right|^2 \sigma^2_a + \left| \frac{\partial f}{\partial b}\right|^2 \sigma^2_b + 2\frac{\partial f}{\partial a}\frac{\partial f}{\partial b}\sigma_{ab},</math>
where <math>\sigma_{f}</math> is the standard deviation of the function <math>f</math>, <math>\sigma_{a}</math> is the standard deviation of <math>a</math>, <math>\sigma_{b}</math> is the standard deviation of <math>b</math>, and <math>\sigma_{ab} = \sigma_{a}\sigma_{b}\rho_{ab}</math> is the covariance between <math>a</math> and <math>b</math>.

In the particular case that {{nowrap|<math>f = ab</math>,}} {{nowrap|<math>\frac{\partial f}{\partial a} = b</math>,}} {{nowrap|<math>\frac{\partial f}{\partial b} = a</math>.}} Then
<math display="block">\sigma^2_f \approx b^2\sigma^2_a + a^2\sigma_b^2 + 2ab\,\sigma_{ab}</math>
or
<math display="block">\left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2 + 2\left(\frac{\sigma_a}{a}\right)\left(\frac{\sigma_b}{b}\right)\rho_{ab},</math>
where <math>\rho_{ab}</math> is the correlation between <math>a</math> and <math>b</math>.

When the variables <math>a</math> and <math>b</math> are uncorrelated, <math>\rho_{ab} = 0</math>. Then
<math display="block">\left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2.</math>
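The product case can be checked numerically. The following sketch (again illustrative only, assuming NumPy; all values are hypothetical) evaluates both the absolute and relative forms of the formula for <math>f = ab</math> and confirms that they agree:
<syntaxhighlight lang="python">
import numpy as np

# Illustrative measured values and uncertainties (hypothetical numbers).
a, b = 2.0, 3.0
sigma_a, sigma_b = 0.1, 0.2
rho_ab = 0.5                      # assumed correlation between a and b

f = a * b
# General first-order formula, with partial derivatives df/da = b and df/db = a:
var_f = (b * sigma_a)**2 + (a * sigma_b)**2 + 2.0 * a * b * sigma_a * sigma_b * rho_ab

# Equivalent relative-variance form:
rel_var_f = (sigma_a / a)**2 + (sigma_b / b)**2 \
    + 2.0 * (sigma_a / a) * (sigma_b / b) * rho_ab

assert np.isclose(var_f / f**2, rel_var_f)
print(np.sqrt(var_f))             # approximate standard deviation of f = ab
</syntaxhighlight>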
=== Caveats and warnings ===
Error estimates for non-linear functions are [[Bias of an estimator|biased]] on account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+''x'') increases as ''x'' increases, since the expansion to ''x'' is a good approximation only when ''x'' is near zero. For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation;<ref>{{cite journal |first1=S. H. |last1=Lee |first2=W. |last2=Chen |title=A comparative study of uncertainty propagation methods for black-box-type problems |journal=Structural and Multidisciplinary Optimization |volume=37 |issue=3 |year=2009 |pages=239–253 |doi=10.1007/s00158-008-0234-7 |s2cid=119988015}}</ref> see [[Uncertainty quantification#Forward propagation|Uncertainty quantification]] for details.

==== Reciprocal and shifted reciprocal ====
{{main|Reciprocal normal distribution}}
In the special case of the inverse or reciprocal <math>1/B</math>, where <math>B = N(0,1)</math> follows a [[standard normal distribution]], the resulting distribution is a reciprocal standard normal distribution, and there is no definable variance.<ref name=Johnson>{{cite book |last1=Johnson |first1=Norman L. |last2=Kotz |first2=Samuel |last3=Balakrishnan |first3=Narayanaswamy |title=Continuous Univariate Distributions, Volume 1 |year=1994 |publisher=Wiley |isbn=0-471-58495-9 |pages=171}}</ref> However, in the slightly more general case of a shifted reciprocal function <math>1/(p-B)</math> for <math>B = N(\mu,\sigma)</math> following a general normal distribution, mean and variance statistics do exist in a [[principal value]] sense if the difference between the pole <math>p</math> and the mean <math>\mu</math> is real-valued.<ref name=lecomte2013exact>{{Cite journal |last1=Lecomte |first1=Christophe |title=Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems |journal=Journal of Sound and Vibration |volume=332 |issue=11 |date=May 2013 |pages=2750–2776 |doi=10.1016/j.jsv.2012.12.009 |bibcode=2013JSV...332.2750L}}</ref>

==== Ratios ====
{{main|Normal ratio distribution}}
Ratios are also problematic; normal approximations exist under certain conditions.
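The undefined variance of the reciprocal of a standard normal variable, noted above, can be seen in a simple Monte Carlo experiment (an illustrative sketch assuming NumPy; the seed and sample sizes are arbitrary): the empirical variance of <math>1/B</math> fluctuates wildly and does not settle as the sample grows.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# The sample variance of 1/B, with B standard normal, does not settle down
# as the sample size grows, because the population variance is undefined.
for n in (10**3, 10**5, 10**7):
    samples = 1.0 / rng.standard_normal(n)
    print(n, samples.var())
</syntaxhighlight>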