==Regressions==
In [[regression analysis]], the distinction between ''errors'' and ''residuals'' is subtle and important, and leads to the concept of [[studentized residual]]s. Given an unobservable function that relates the independent variable to the dependent variable (say, a line), the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the ''fitted'' function are the residuals. If the linear model is applicable, a scatterplot of residuals plotted against the independent variable should be random about zero with no trend.<ref name="modernintro"/> If the data exhibit a trend, the regression model is likely misspecified; for example, the true function may be a quadratic or higher-order polynomial. If the residuals show no trend but "fan out", they exhibit a phenomenon called [[heteroscedasticity]]; if their spread is roughly constant, they exhibit [[homoscedasticity]].

However, a terminological difference arises in the expression [[mean squared error]] (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed ''residuals'', not of the unobservable ''errors''. If that sum of squares is divided by ''n'', the number of observations, the result is the mean of the squared residuals. Since this is a [[bias (statistics)|biased]] estimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals by ''df'' = ''n'' − ''p'' − 1 instead of ''n'', where ''df'' is the number of [[degrees of freedom (statistics)|degrees of freedom]] (''n'' minus the number of parameters ''p'' being estimated, excluding the intercept, minus 1). This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.<ref>{{cite book|last1=Steel|first1=Robert G. D.|last2=Torrie|first2=James H.|title=Principles and Procedures of Statistics, with Special Reference to Biological Sciences|url=https://archive.org/details/principlesproce00stee|url-access=registration|year=1960|publisher=McGraw-Hill|page=[https://archive.org/details/principlesproce00stee/page/288 288]}}</ref>

The mean square of the error can also be calculated when analyzing the variance of a linear regression with a technique like that used in [[ANOVA]] (the two are equivalent because ANOVA is a type of regression): the sum of squares of the residuals (also called the sum of squares of the error) is divided by the degrees of freedom, ''n'' − ''p'' − 1, where ''p'' is the number of parameters estimated in the model (one for each variable in the regression equation, not including the intercept). One can likewise calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters. The F value can then be calculated by dividing the mean square of the model by the mean square of the error, and significance can then be determined (which is why the mean squares are wanted in the first place).<ref>{{cite book|last1=Zelterman|first1=Daniel|title=Applied linear models with SAS|date=2010|publisher=Cambridge University Press|location=Cambridge|isbn=9780521761598|edition=Online-Ausg.}}</ref>
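The following short sketch (not from the cited sources) illustrates these quantities numerically on simulated data, using Python with NumPy; the variable names and the simulated line are illustrative assumptions rather than standard notation.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 1                                  # n observations, p slope parameters (intercept excluded)
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)   # the added noise plays the role of the unobservable errors

# Fit y = b0 + b1*x by ordinary least squares.
X = np.column_stack([np.ones(n), x])          # design matrix with an intercept column
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat                  # observable residuals (deviations from the fitted line)

ss_res = residuals @ residuals                # sum of squares of the residuals
df_res = n - p - 1                            # degrees of freedom
mse = ss_res / df_res                         # unbiased estimate of the error variance

ss_model = np.sum((X @ beta_hat - y.mean()) ** 2)   # sum of squares of the model
ms_model = ss_model / p                             # mean square of the model
F = ms_model / mse                                  # F value for the regression
</syntaxhighlight>

Dividing <code>ss_res</code> by ''n'' instead of <code>df_res</code> would give the (biased) mean of the squared residuals rather than the unbiased mean squared error.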
However, because of the behavior of the process of regression, the ''distributions'' of residuals at different data points (of the input variable) may vary ''even if'' the errors themselves are identically distributed. Concretely, in a [[linear regression]] where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be ''higher'' than the variability of residuals at the ends of the domain:<ref>{{Cite web|url=https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_OpenIntro_Statistics_(Diez_et_al)./7%3A_Introduction_to_Linear_Regression/7.3%3A_Types_of_Outliers_in_Linear_Regression|title=7.3: Types of Outliers in Linear Regression|date=2013-11-21|website=Statistics LibreTexts|language=en|access-date=2019-11-22}}</ref> linear regressions fit endpoints better than the middle. This is also reflected in the [[Influence function (statistics)|influence functions]] of various data points on the [[regression coefficient]]s: endpoints have more influence. Thus, to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of ''residuals'', which is called [[studentizing]]. This is particularly important when detecting [[outliers]], where the case in question is somehow different from the others in a dataset. For example, a large residual may be expected in the middle of the domain but considered an outlier at the end of the domain.
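As a rough illustration (again an assumption-laden sketch, not taken from the cited sources), internally studentized residuals can be computed from the previous example by rescaling each residual by its estimated standard deviation, which depends on the leverage of the corresponding data point:

<syntaxhighlight lang="python">
# Continues the variables X, residuals and mse from the sketch above.
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat (projection) matrix of the fit
leverage = np.diag(H)                         # h_ii is larger near the ends of the domain
# Var(residual_i) = sigma^2 * (1 - h_ii), so raw residuals vary less at the
# high-leverage endpoints than in the middle of the domain.
studentized = residuals / np.sqrt(mse * (1.0 - leverage))
</syntaxhighlight>

Comparing <code>studentized</code> values, rather than raw residuals, puts data points at the ends and in the middle of the domain on a common scale when screening for outliers.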