{{Short description|Kind of ratio}}
{{broader|Studentization}}
{{Multiple issues|
{{refimprove|date=May 2015}}
{{disputed|date=February 2014}}
}}
{{Regression bar}}
In [[statistics]], a '''studentized residual''' is the [[dimensionless ratio]] resulting from the division of a [[errors and residuals in statistics|residual]] by an [[estimator|estimate]] of its [[standard deviation]], both expressed in the same [[Unit of measurement|units]]. It is a form of a [[t-statistic|Student's ''t''-statistic]], with the estimate of error varying between points. It is an important technique in the detection of [[outlier]]s, and is among several statistics named in honor of [[William Sealy Gosset]], who wrote under the pseudonym "Student" (e.g., [[Student's distribution]]). Dividing a statistic by a [[sample standard deviation]] is called '''''studentizing''''', in analogy with ''[[standardizing]]'' and ''[[normalization (statistics)|normalizing]]''.

==Motivation==
{{see also|Errors and residuals in statistics}}
The key reason for studentizing is that, in [[regression analysis]] of a [[multivariate distribution]], the variances of the ''residuals'' at different input variable values may differ, even if the variances of the ''errors'' at these different input variable values are equal. The issue is the difference between [[errors and residuals in statistics]], particularly the behavior of residuals in regressions.

Consider the [[simple linear regression]] model
:<math> Y = \alpha_0 + \alpha_1 X + \varepsilon. \, </math>
Given a random sample (''X''<sub>''i''</sub>, ''Y''<sub>''i''</sub>), ''i'' = 1, ..., ''n'', each pair (''X''<sub>''i''</sub>, ''Y''<sub>''i''</sub>) satisfies
:<math> Y_i = \alpha_0 + \alpha_1 X_i + \varepsilon_i,\,</math>
where the ''errors'' <math>\varepsilon_i</math> are [[statistical independence|independent]] and all have the same variance <math>\sigma^2</math>. The '''residuals''' are not the true errors, but ''estimates'', based on the observable data. When the method of least squares is used to estimate <math>\alpha_0</math> and <math>\alpha_1</math>, the residuals <math>\widehat{\varepsilon\,}</math>, unlike the errors <math>\varepsilon</math>, cannot be independent, since they satisfy the two constraints
:<math>\sum_{i=1}^n \widehat{\varepsilon\,}_i=0</math>
and
:<math>\sum_{i=1}^n \widehat{\varepsilon\,}_i x_i=0.</math>
(Here ''ε''<sub>''i''</sub> is the ''i''th error, and <math>\widehat{\varepsilon\,}_i</math> is the ''i''th residual.)

The residuals, unlike the errors, ''do not all have the same variance:'' the variance decreases as the corresponding ''x''-value gets farther from the average ''x''-value. This is not a feature of the data itself, but of the regression better fitting values at the ends of the domain. It is also reflected in the [[Influence function (statistics)|influence functions]] of various data points on the [[regression coefficient]]s: endpoints have more influence. This can also be seen because the residuals at endpoints depend greatly on the slope of a fitted line, while the residuals at the middle are relatively insensitive to the slope. The fact that ''the variances of the residuals differ,'' even though ''the variances of the true errors are all equal'' to each other, is the ''principal reason'' for the need for studentization. It is not simply a matter of the population parameters (mean and standard deviation) being unknown – it is that ''regressions'' yield ''different residual distributions'' at ''different data points'', unlike ''point [[estimators]]'' of [[univariate distribution]]s, which share a ''common distribution'' for residuals.
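This effect can be seen directly in a small simulation. The following [[Python (programming language)|Python]] sketch (using only NumPy; the design points and variable names are illustrative, not taken from any particular source) fits a least-squares line to many samples whose errors all have the same variance, and shows that the empirical variance of the residuals shrinks at the extreme ''x''-values, matching the factor <math>1-h_{ii}</math> introduced in the next section.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # fixed design points
X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat (projection) matrix
sigma = 1.0

# Many regressions with i.i.d. N(0, sigma^2) errors on the same x.
eps = rng.normal(0.0, sigma, size=(100_000, x.size))
y = 2.0 + 0.5 * x + eps                    # true line plus errors
resid = y - y @ H                          # residuals (I - H) y; H is symmetric
print(resid.var(axis=0))                   # empirical residual variances
print(sigma**2 * (1 - np.diag(H)))         # theoretical sigma^2 (1 - h_ii)
</syntaxhighlight>

Both printed vectors are smallest at the endpoints ''x'' = 1 and ''x'' = 5, even though every error was drawn with the same variance.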
==Background==
For this simple model, the [[design matrix]] is
:<math>X=\left[\begin{matrix}1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{matrix}\right]</math>
and the [[hat matrix]] ''H'' is the matrix of the [[orthogonal projection]] onto the column space of the design matrix:
:<math>H=X(X^T X)^{-1}X^T.\,</math>
The [[Leverage (statistics)|leverage]] ''h''<sub>''ii''</sub> is the ''i''th diagonal entry in the hat matrix. The variance of the ''i''th residual is
:<math>\operatorname{var}(\widehat{\varepsilon\,}_i)=\sigma^2(1-h_{ii}).</math>
In case the design matrix ''X'' has only two columns (as in the example above), this is equal to
:<math>\operatorname{var}(\widehat{\varepsilon\,}_i)=\sigma^2\left( 1 - \frac1n -\frac{(x_i-\bar x)^2}{\sum_{j=1}^n (x_j - \bar x)^2 } \right). </math>
In the case of an [[arithmetic mean]], the design matrix ''X'' has only one column (a [[vector of ones]]), and this is simply:
:<math>\operatorname{var}(\widehat{\varepsilon\,}_i)=\sigma^2\left( 1 - \frac1n \right). </math>

==Calculation==
Given the definitions above, the '''studentized residual''' is then
:<math>t_i = {\widehat{\varepsilon\,}_i\over \widehat{\sigma} \sqrt{1-h_{ii}\ }}</math>
where ''h''<sub>''ii''</sub> is the [[Leverage (statistics)|leverage]] and <math>\widehat{\sigma}</math> is an appropriate estimate of ''σ'' (see below). In the case of a mean, this is equal to:
:<math>t_i = {\widehat{\varepsilon\,}_i\over \widehat{\sigma} \sqrt{(n-1)/n}}</math>

==Internal and external studentization==
The usual estimate of ''σ''<sup>2</sup> is
:<math>\widehat{\sigma}^2={1 \over n-m}\sum_{j=1}^n \widehat{\varepsilon\,}_j^{\,2},</math>
where ''m'' is the number of parameters in the model (2 in our example). But if the ''i''th case is suspected of being improbably large, then it would also not be normally distributed. Hence it is prudent to exclude the ''i''th observation from the process of estimating the variance when one is considering whether the ''i''th case may be an outlier, and instead use the ''externally studentized'' estimate
:<math>\widehat{\sigma}_{(i)}^2={1 \over n-m-1}\sum_{\begin{smallmatrix}j = 1\\j \ne i\end{smallmatrix}}^n \widehat{\varepsilon\,}_j^{\,2},</math>
based on all the residuals ''except'' the suspect ''i''th residual. The notation emphasizes that, for the suspect case ''i'', the squared residuals <math>\widehat{\varepsilon\,}_j^{\,2}</math> (<math>j \ne i</math>) are computed with the ''i''th case excluded.

If the estimate <math>\widehat{\sigma}^2</math> ''includes'' the ''i''th case, then the result is called the '''''internally studentized''''' residual, <math>t_i</math> (also known as the ''standardized residual''<ref>[https://stat.ethz.ch/R-manual/R-devel/library/stats/html/influence.measures.html Regression Deletion Diagnostics] R docs</ref>). If the estimate <math>\widehat{\sigma}_{(i)}^2</math> is used instead, ''excluding'' the ''i''th case, then the result is called the '''''externally studentized''''' residual, <math>t_{i(i)}</math>.
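As an illustration, the following Python sketch (NumPy only; the function name <code>studentized_residuals</code> is illustrative) computes both quantities for a straight-line fit directly from the formulas above. Note that it implements the displayed sum-with-one-term-removed estimate of <math>\widehat{\sigma}_{(i)}^2</math>; statistical packages may instead derive the externally studentized residual from a full leave-one-out refit.

<syntaxhighlight lang="python">
import numpy as np

def studentized_residuals(x, y):
    """Internally and externally studentized residuals for a
    straight-line fit (m = 2), following the formulas above."""
    n, m = len(x), 2
    X = np.column_stack([np.ones_like(x), x])
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)                      # leverages h_ii
    e = y - H @ y                       # residuals
    # Internal: sigma^2 estimated from all n residuals.
    s2 = (e**2).sum() / (n - m)
    t_internal = e / np.sqrt(s2 * (1 - h))
    # External: for each i, drop the i-th squared residual from the sum.
    s2_i = ((e**2).sum() - e**2) / (n - m - 1)
    t_external = e / np.sqrt(s2_i * (1 - h))
    return t_internal, t_external
</syntaxhighlight>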
==Distribution==
{{distinguish-redirect|Tau distribution|Tau coefficient}}
If the errors are independent and [[normal distribution|normally distributed]] with [[expected value]] 0 and variance ''σ''<sup>2</sup>, then the [[probability distribution]] of the ''i''th externally studentized residual <math>t_{i(i)}</math> is a [[Student's t-distribution]] with ''n'' − ''m'' − 1 [[degrees of freedom (statistics)|degrees of freedom]], and can range from <math>\scriptstyle-\infty</math> to <math>\scriptstyle+\infty</math>.

On the other hand, the internally studentized residuals are in the range <math> 0 \,\pm\, \sqrt{\nu}</math>, where ''ν'' = ''n'' − ''m'' is the number of residual degrees of freedom. If ''t''<sub>''i''</sub> represents the internally studentized residual, and again assuming that the errors are independent identically distributed Gaussian variables, then<ref name=NOAA>Allen J. Pope (1976), "The statistics of residuals and the detection of outliers", U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Ocean Survey, Geodetic Research and Development Laboratory, 136 pages, [http://www.ngs.noaa.gov/PUBS_LIB/TRNOS65NGS1.pdf], eq.(6)</ref>
:<math>t_i \sim \sqrt{\nu} {t \over \sqrt{t^2+\nu-1}}</math>
where ''t'' is a random variable distributed as [[Student's t-distribution]] with ''ν'' − 1 degrees of freedom. In fact, this implies that ''t''<sub>''i''</sub><sup>2</sup>/''ν'' follows the [[beta distribution]] ''B''(1/2, (''ν'' − 1)/2). The distribution above is sometimes referred to as the '''tau distribution''';<ref name=NOAA/> it was first derived by Thompson in 1935.<ref name=Thompson>{{cite journal|last1=Thompson|first1=William R.|title=On a Criterion for the Rejection of Observations and the Distribution of the Ratio of Deviation to Sample Standard Deviation|journal=The Annals of Mathematical Statistics|date=1935|volume=6|issue=4|pages=214–219|doi=10.1214/aoms/1177732567|doi-access=free}}</ref>

When ''ν'' = 3, the internally studentized residuals are [[uniform distribution (continuous)|uniformly distributed]] between <math>\scriptstyle-\sqrt{3}</math> and <math>\scriptstyle+\sqrt{3}</math>. If there is only one residual degree of freedom, the above formula for the distribution of internally studentized residuals does not apply. In this case, the ''t''<sub>''i''</sub> are all either +1 or −1, with 50% chance for each.

The standard deviation of the distribution of internally studentized residuals is always 1, but this does not imply that the standard deviation of all the ''t''<sub>''i''</sub> of a particular experiment is 1. For instance, the internally studentized residuals when fitting a straight line going through (0, 0) to the points (1, 4), (2, −1), (2, −1) are <math>\sqrt{2},\ -\sqrt{5}/5,\ -\sqrt{5}/5</math>, and the standard deviation of these is not 1.

Note that any pair of studentized residuals ''t''<sub>''i''</sub> and ''t''<sub>''j''</sub> (where {{nowrap|<math>i \neq j</math>}}) are ''not'' i.i.d.: they have the same distribution, but are not independent, because the residuals are constrained to sum to zero and to be orthogonal to the columns of the design matrix.
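These distributional facts can be checked by simulation. The following Python sketch (NumPy only; the chosen design points are illustrative) repeatedly fits a line to ''n'' = 5 points, so that ''ν'' = 3, and confirms that the internally studentized residual stays within ±√3 with variance close to 1, consistent with the uniform distribution stated above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # n = 5, m = 2, so nu = 3
X = np.column_stack([np.ones_like(x), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)

t = np.empty(20_000)
for k in range(t.size):
    y = 1.0 + 2.0 * x + rng.normal(size=x.size)  # Gaussian errors
    e = y - H @ y
    s2 = (e**2).sum() / 3                        # nu = n - m = 3
    t[k] = e[0] / np.sqrt(s2 * (1 - h[0]))       # internal, first point

# A uniform distribution on (-sqrt(3), +sqrt(3)) has mean 0, variance 1.
print(t.min(), t.max())  # close to -1.732 and +1.732
print(t.var())           # close to 1
</syntaxhighlight>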
==Software implementations==
Many programs and statistics packages, such as [[R (programming language)|R]], [[Python (programming language)|Python]], etc., include implementations of studentized residuals.

{| class="wikitable sortable"
|-
! Language/Program !! Function !! Notes
|-
| [[R (programming language)|R]] || <code>rstandard(model, ...)</code> || internally studentized. See [https://stat.ethz.ch/R-manual/R-devel/library/stats/html/influence.measures.html]
|-
| [[R (programming language)|R]] || <code>rstudent(model, ...)</code> || externally studentized. See [https://stat.ethz.ch/R-manual/R-devel/library/stats/html/influence.measures.html]
|}

== See also ==
* [[Cook's distance]] – a measure of changes in regression coefficients when an observation is deleted
* [[Grubbs's test]]
* [[Normalization (statistics)]]
* [[Samuelson's inequality]]
* [[Standard score]]
* [[William Sealy Gosset]]

==References==
{{Reflist}}

==Further reading==
* {{cite book |last1=Cook |first1=R. Dennis |last2=Weisberg |first2=Sanford |title=Residuals and Influence in Regression |year=1982 |publisher=[[Chapman and Hall]] |location=New York |isbn=041224280X |url=http://www.stat.umn.edu/rir/ |edition=Repr. |accessdate=23 February 2013}}

{{DEFAULTSORT:Studentized Residual}}
[[Category:Statistical outliers]]
[[Category:Statistical deviation and dispersion]]
[[Category:Errors and residuals]]
[[Category:Statistical ratios]]
[[Category:Regression diagnostics]]