{{Short description|Statistics concept}}
{{more footnotes|date=September 2016}}
{{Regression bar}}
In [[statistics]] and [[mathematical optimization|optimization]], '''errors''' and '''residuals''' are two closely related and easily confused measures of the [[deviation (statistics)|deviation]] of an [[observed value]] of an [[Elementary event|element]] of a [[Sample (statistics)|statistical sample]] from its "[[true value]]" (not necessarily observable). The '''error''' of an [[observation]] is the deviation of the observed value from the true value of a quantity of interest (for example, a [[population mean]]). The '''residual''' is the difference between the observed value and the ''[[Estimation|estimated]]'' value of the quantity of interest (for example, a [[sample mean]]). The distinction is most important in [[regression analysis]], where the concepts are sometimes called the '''regression errors''' and '''regression residuals''' and where they lead to the concept of [[studentized residual]]s. In [[econometrics]], "errors" are also called '''disturbances'''.<ref name="Kennedy 2008 p. 576">{{cite book|last=Kennedy|first=P.|title=A Guide to Econometrics|publisher=Wiley|year=2008|isbn=978-1-4051-8257-7|url=https://books.google.com/books?id=69MDEAAAQBAJ&pg=PA576|access-date=2022-05-13|page=576}}</ref><ref name="Wooldridge 2019 p. 57">{{cite book|last=Wooldridge|first=J.M.|title=Introductory Econometrics: A Modern Approach|publisher=Cengage Learning|year=2019|isbn=978-1-337-67133-0|url=https://books.google.com/books?id=TONhEAAAQBAJ&pg=PA57|access-date=2022-05-13|page=57}}</ref><ref name="Das 2019 p. 7">{{cite book|last=Das|first=P.|title=Econometrics in Theory and Practice: Analysis of Cross Section, Time Series and Panel Data with Stata 15.1|publisher=Springer Singapore|year=2019|isbn=978-981-329-019-8|url=https://books.google.com/books?id=rK6tDwAAQBAJ&pg=PA7|access-date=2022-05-13|page=7}}</ref>

==Introduction==
Suppose there is a series of observations from a [[univariate distribution]] and we want to estimate the [[mean]] of that distribution (the so-called [[location model (statistics)|location model]]). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.

A '''statistical error''' (or '''disturbance''') is the amount by which an observation differs from its [[expected value]], the latter being based on the whole [[statistical population|population]] from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the [[arithmetic mean|mean]] of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.

A '''residual''' (or fitting deviation), on the other hand, is an observable ''estimate'' of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of ''n'' people. The ''[[sample mean]]'' could serve as a good estimator of the ''population'' mean. Then we have:
* The difference between the height of each man in the sample and the unobservable ''population'' mean is a ''statistical error'', whereas
* The difference between the height of each man in the sample and the observable ''sample'' mean is a ''residual''.

Note that, because of the definition of the sample mean, the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily ''not [[statistical independence|independent]]''. The statistical errors, on the other hand, are independent, and their sum within the random sample is [[almost surely]] not zero.

One can standardize statistical errors (especially of a [[normal distribution]]) in a [[z-score]] (or "standard score"), and standardize residuals in a [[t-statistic|''t''-statistic]], or more generally [[studentized residuals]].
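The distinction can be illustrated with a short numerical sketch, assuming Python with NumPy; the sample values and the "true" population mean used here are illustrative assumptions, not data from the article:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical setting: the population mean is treated as known only for illustration.
population_mean = 1.75                                # unobservable in practice
heights = np.array([1.80, 1.70, 1.76, 1.68, 1.83])    # small illustrative sample

sample_mean = heights.mean()                          # observable estimate of the population mean

errors = heights - population_mean                    # statistical errors (not observable in practice)
residuals = heights - sample_mean                     # residuals (observable)

print("errors:   ", errors, "sum =", errors.sum())               # sum is almost surely nonzero
print("residuals:", residuals, "sum =", round(residuals.sum(), 12))  # sum is exactly zero (up to rounding)
</syntaxhighlight>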
==In univariate distributions==
If we assume a normally distributed population with mean ''μ'' and [[standard deviation]] ''σ'', and choose individuals independently, then we have
:<math>X_1, \dots, X_n \sim N\left(\mu, \sigma^2\right)\,</math>
and the [[arithmetic mean|sample mean]]
:<math>\overline{X}={X_1 + \cdots + X_n \over n}</math>
is a random variable distributed such that:
:<math>\overline{X} \sim N \left(\mu, \frac {\sigma^2} n \right).</math>

The ''statistical errors'' are then
:<math>e_i = X_i - \mu,\,</math>
with [[Expected Value|expected]] values of zero,<ref>{{Cite book|title=Intermediate statistical methods|last=Wetherill, G. Barrie.|date=1981|publisher=Chapman and Hall|isbn=0-412-16440-X|location=London|oclc=7779780|url-access=registration|url=https://archive.org/details/intermediatestat0000weth}}</ref> whereas the ''residuals'' are
:<math>r_i = X_i - \overline{X}.</math>

The sum of squares of the '''statistical errors''', divided by ''σ''<sup>2</sup>, has a [[chi-squared distribution]] with ''n'' [[Degrees of freedom (statistics)|degrees of freedom]]:
:<math>\frac 1 {\sigma^2}\sum_{i=1}^n e_i^2\sim\chi^2_n.</math>
However, this quantity is not observable as the population mean is unknown. The sum of squares of the '''residuals''', on the other hand, is observable. The quotient of that sum by ''σ''<sup>2</sup> has a chi-squared distribution with only ''n'' − 1 degrees of freedom:
:<math> \frac 1 {\sigma^2} \sum_{i=1}^n r_i^2 \sim \chi^2_{n-1}. </math>
This difference between ''n'' and ''n'' − 1 degrees of freedom results in [[Bessel's correction]] for the estimation of [[sample variance]] of a population with unknown mean and unknown variance. No correction is necessary if the population mean is known.
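A minimal simulation sketch of the ''n'' versus ''n'' − 1 degrees of freedom, assuming Python with NumPy; the parameters (''μ'' = 0, ''σ'' = 1, ''n'' = 5, and the number of replicates) are arbitrary choices for illustration:

<syntaxhighlight lang="python">
import numpy as np

# For each simulated sample, the scaled sum of squared errors has mean n,
# while the scaled sum of squared residuals has mean n - 1, which is what
# motivates Bessel's correction when estimating the variance.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 0.0, 1.0, 5, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
errors = samples - mu
residuals = samples - samples.mean(axis=1, keepdims=True)

sse_scaled = (errors**2).sum(axis=1) / sigma**2      # ~ chi-squared with n df
ssr_scaled = (residuals**2).sum(axis=1) / sigma**2   # ~ chi-squared with n - 1 df

print(sse_scaled.mean())                             # close to n (= 5)
print(ssr_scaled.mean())                             # close to n - 1 (= 4)
print((residuals**2).sum(axis=1).mean() / (n - 1))   # unbiased estimate of sigma^2 (close to 1)
</syntaxhighlight>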
===Remark===
It is remarkable that the [[Squared deviations|sum of squares of the residuals]] and the sample mean can be shown to be independent of each other, using, e.g. [[Basu's theorem]].<!-- Basu's theorem is definitely overkill in this case. It can be proved by far simpler methods. --> That fact, and the normal and chi-squared distributions given above, form the basis of calculations involving the ''t''-statistic:
:<math> T = \frac{\overline{X}_n - \mu_0}{S_n/\sqrt{n}}, </math>
where <math>\overline{X}_n - \mu_0</math> represents the errors, <math>S_n</math> represents the sample standard deviation for a sample of size ''n'' and unknown ''σ'', and the denominator term <math>S_n/\sqrt n</math> accounts for the standard deviation of the errors according to:<ref name="modernintro">{{Cite book|title=A modern introduction to probability and statistics : understanding why and how|date=2005-06-15|publisher=Springer London|author1=Frederik Michel Dekking|author2=Cornelis Kraaikamp|author3=Hendrik Paul Lopuhaä|author4=Ludolf Erwin Meester|isbn=978-1-85233-896-1|location=London|oclc=262680588}}</ref>
<math display="block">\operatorname{Var}\left(\overline{X}_n\right) = \frac{\sigma^2} n</math>

The [[probability distribution]]s of the numerator and the denominator separately depend on the value of the unobservable population standard deviation ''σ'', but ''σ'' appears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not know ''σ'', we know the probability distribution of this quotient: it has a [[Student's t-distribution]] with ''n'' − 1 degrees of freedom. We can therefore use this quotient to find a [[confidence interval]] for ''μ''. This ''t''-statistic can be interpreted as "the number of standard errors away from the regression line."<ref>{{Cite book|title=Practical statistics for data scientists : 50 essential concepts|author1=Peter Bruce|author2=Andrew Bruce|isbn=978-1-4919-5296-2|edition=First|publisher=O'Reilly Media Inc|location=Sebastopol, CA|oclc=987251007|date=2017-05-10}}</ref>
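A short sketch of this quotient, assuming Python with NumPy and SciPy; the sample data and the hypothesized mean ''μ''<sub>0</sub> are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Illustrative data and hypothesized population mean (both are assumptions).
x = np.array([1.80, 1.70, 1.76, 1.68, 1.83])
mu0 = 1.75

n = x.size
t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))   # the t-statistic defined above

# 95% confidence interval for the population mean, using n - 1 degrees of freedom.
margin = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
print(t, (x.mean() - margin, x.mean() + margin))

# Cross-check against SciPy's one-sample t-test.
print(stats.ttest_1samp(x, mu0).statistic)
</syntaxhighlight>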
==Regressions==
In [[regression analysis]], the distinction between ''errors'' and ''residuals'' is subtle and important, and leads to the concept of [[studentized residual]]s. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the ''fitted'' function are the residuals.

If the linear model is applicable, a scatterplot of residuals plotted against the independent variable should be random about zero with no trend to the residuals.<ref name="modernintro"/> If the data exhibit a trend, the regression model is likely incorrect; for example, the true function may be a quadratic or higher-order polynomial. If the residuals have no trend but "fan out", so that their spread changes with the independent variable, they exhibit a phenomenon called [[heteroscedasticity]]. If the spread of the residuals is roughly constant and does not fan out, they exhibit [[homoscedasticity]].

However, a terminological difference arises in the expression [[mean squared error]] (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed ''residuals'', and not of the unobservable ''errors''. If that sum of squares is divided by ''n'', the number of observations, the result is the mean of the squared residuals. Since this is a [[bias (statistics)|biased]] estimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals by ''df'' = ''n'' − ''p'' − 1 instead of ''n'', where ''df'' is the number of [[degrees of freedom (statistics)|degrees of freedom]]: ''n'' minus the number of parameters ''p'' being estimated (excluding the intercept), minus 1. This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.<ref>{{cite book|last1=Steel|first1=Robert G. D.|last2=Torrie|first2=James H.|title=Principles and Procedures of Statistics, with Special Reference to Biological Sciences|url=https://archive.org/details/principlesproce00stee|url-access=registration|year=1960|publisher=McGraw-Hill|page=[https://archive.org/details/principlesproce00stee/page/288 288]}}</ref>

The mean square of the error also arises when analyzing the variance of a linear regression with a technique like that used in [[ANOVA]] (the two coincide because ANOVA is a type of regression): the sum of squares of the residuals (also called the sum of squares of the error) is divided by the degrees of freedom ''n'' − ''p'' − 1, where ''p'' is the number of parameters estimated in the model (one for each variable in the regression equation, not including the intercept). One can then also calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters. The F value can then be calculated by dividing the mean square of the model by the mean square of the error, and used to determine significance (which is the reason the mean squares are computed in the first place).<ref>{{cite book|last1=Zelterman|first1=Daniel|title=Applied linear models with SAS|date=2010|publisher=Cambridge University Press|location=Cambridge|isbn=9780521761598|edition=Online-Ausg.}}</ref>

However, because of the behavior of the process of regression, the ''distributions'' of residuals at different data points (of the input variable) may vary ''even if'' the errors themselves are identically distributed. Concretely, in a [[linear regression]] where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be ''higher'' than the variability of residuals at the ends of the domain:<ref>{{Cite web|url=https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_OpenIntro_Statistics_(Diez_et_al)./7%3A_Introduction_to_Linear_Regression/7.3%3A_Types_of_Outliers_in_Linear_Regression|title=7.3: Types of Outliers in Linear Regression|date=2013-11-21|website=Statistics LibreTexts|language=en|access-date=2019-11-22}}</ref> linear regressions fit endpoints better than the middle. This is also reflected in the [[Influence function (statistics)|influence functions]] of various data points on the [[regression coefficient]]s: endpoints have more influence. Thus, to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of ''residuals'', which is called [[studentizing]]. This is particularly important in the case of detecting [[outliers]], where the case in question is somehow different from the others in a dataset. For example, a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.
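A minimal sketch of these regression quantities, assuming Python with NumPy and simulated data (the true line, noise level, and sample size are illustrative assumptions): it computes the residuals, the mean squared error with ''n'' − ''p'' − 1 degrees of freedom, and internally studentized residuals via the leverages.

<syntaxhighlight lang="python">
import numpy as np

# Simulated data: a true line plus identically distributed errors (arbitrary choices).
rng = np.random.default_rng(1)
n = 20
x = np.linspace(0, 10, n)
errors = rng.normal(0, 1.0, n)             # unobservable in practice
y = 2.0 + 0.5 * x + errors

# Ordinary least squares fit via the design matrix X = [1, x].
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
residuals = y - fitted                     # observable

# Unbiased estimate of the error variance: divide by df = n - p - 1,
# where p = 1 regressor (the intercept is not counted in p).
p = 1
mse = (residuals**2).sum() / (n - p - 1)

# Leverages from the hat matrix; the variance of each residual is mse * (1 - h_ii),
# so studentizing rescales each residual by its own expected variability.
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)
studentized = residuals / np.sqrt(mse * (1 - leverage))

print("estimated intercept and slope:", beta)
print("MSE:", mse)
print("largest |studentized residual|:", np.abs(studentized).max())
</syntaxhighlight>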
==Other uses of the word "error" in statistics==
{{see also|Bias (statistics)}}
The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observable [[prediction error]]s:

The ''[[mean squared error]]'' (MSE) refers to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated). The ''[[root mean square error]]'' (RMSE) is the square root of MSE. The ''sum of squares of errors'' (SSE) is the MSE multiplied by the sample size.

''[[Sum of squares of residuals]]'' (SSR) is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. This is the basis for the [[least squares]] estimate, where the regression coefficients are chosen such that the SSR is minimal (i.e. its derivative is zero).

Likewise, the ''[[sum of absolute errors]]'' (SAE) is the sum of the absolute values of the residuals, which is minimized in the [[least absolute deviations]] approach to regression.

The '''mean error''' (ME) is the bias. The ''mean residual'' (MR) is always zero for least-squares estimators.

==See also==
{{Portal|Mathematics}}
{{Div col|colwidth=20em}}
* [[Absolute deviation]]
* [[Consensus forecasts]]
* [[Error detection and correction]]
* [[Explained sum of squares]]
* [[Innovation (signal processing)]]
* [[Lack-of-fit sum of squares]]
* [[Margin of error]]
* [[Mean absolute error]]
* [[Observational error]]
* [[Propagation of error]]
* [[Probable error]]
* [[Random and systematic errors]]
* [[Reduced chi-squared statistic]]
* [[Regression dilution]]
* [[Sampling error]]
* [[Standard error]]
* [[Studentized residual]]
* [[Type I and type II errors]]
{{Div col end}}

==References==
{{reflist}}

==Further reading==
*{{cite book|last1=Cook|first1=R. Dennis|last2=Weisberg|first2=Sanford|title=Residuals and Influence in Regression|year=1982|publisher=[[Chapman and Hall]]|location=New York|isbn=041224280X|url=http://www.stat.umn.edu/rir/|edition=Repr.|access-date=23 February 2013}}
*{{cite journal|last1=Cox|first1=David R.|author-link1=David R. Cox|last2=Snell|first2=E. Joyce|author-link2=Joyce Snell|title=A general definition of residuals|year=1968|journal=[[Journal of the Royal Statistical Society, Series B]]|pages=248–275|jstor=2984505|volume=30|issue=2}}
*{{cite book|last=Weisberg|first=Sanford|title=Applied Linear Regression|year=1985|publisher=Wiley|location=New York|isbn=9780471879572|url=https://books.google.com/books?id=yRrvAAAAMAAJ&q=editions:UMM1U2yvYVUC|edition=2nd|access-date=23 February 2013}}
*{{springer|title=Errors, theory of|id=p/e036240}}

==External links==
*{{Commons category-inline}}

{{least squares and regression analysis|state=expanded}}
{{Statistics|correlation|state=collapsed}}

{{DEFAULTSORT:Errors And Residuals In Statistics}}
[[Category:Errors and residuals|*]]
[[Category:Statistical deviation and dispersion]]
[[Category:Regression analysis]]