== Standard deviation matrix ==

The standard deviation matrix <math>\mathbf{S}</math> is the extension of the standard deviation to multiple dimensions: it is the symmetric square root of the covariance matrix <math>\mathbf{\Sigma}</math>.<ref name="Das">{{cite arXiv |eprint=2012.14331 |last1=Das |first1=Abhranil |author2=Wilson S Geisler |title=Methods to integrate multinormals and compute classification measures |date=2020 |class=stat.ML }}</ref> <math>\mathbf{S}</math> linearly scales a random vector in multiple dimensions in the same way that <math>\sigma</math> does in one dimension. A scalar random variable <math>x</math> with variance <math>\sigma^2</math> can be written as <math>x=\sigma z</math>, where <math>z</math> has unit variance. In the same way, a random vector <math>\boldsymbol{x}</math> in several dimensions with covariance <math>\mathbf{\Sigma}</math> can be written as <math>\boldsymbol{x}=\mathbf{S}\boldsymbol{z}</math>, where <math>\boldsymbol{z}</math> is a normalized variable with identity covariance <math>\mathbf{1}</math>. This requires <math>\mathbf{S}\mathbf{S'} = \mathbf{\Sigma}</math>, which has infinitely many solutions for <math>\mathbf{S}</math>; consequently, there are multiple ways to whiten the distribution.<ref name="kessy">{{cite journal|last1=Kessy|first1=A.|last2=Lewin|first2=A.|last3=Strimmer|first3=K.|title=Optimal whitening and decorrelation|year=2018|journal=The American Statistician| volume=72|issue=4| pages=309–314|doi=10.1080/00031305.2016.1277159|arxiv=1512.00809|s2cid=55075085 }}</ref> The symmetric square root of <math>\mathbf{\Sigma}</math> is one of these solutions.
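The symmetric square root can be computed from an eigendecomposition of the covariance matrix. The following NumPy sketch (with an arbitrary example covariance, not from the cited sources) verifies that the resulting <math>\mathbf{S}</math> is symmetric and satisfies <math>\mathbf{S}\mathbf{S'} = \mathbf{\Sigma}</math>:

```python
import numpy as np

# Arbitrary example covariance matrix (symmetric, positive definite)
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

# Eigendecomposition: Sigma = V diag(w) V'
w, V = np.linalg.eigh(Sigma)

# Symmetric square root: S = V diag(sqrt(w)) V'
S = V @ np.diag(np.sqrt(w)) @ V.T

print(np.allclose(S, S.T))          # S is symmetric
print(np.allclose(S @ S.T, Sigma))  # S S' recovers Sigma
```

Any matrix of the form <math>\mathbf{S}\mathbf{R}</math> with <math>\mathbf{R}</math> orthogonal also satisfies the defining equation, which is why the solution is not unique; the symmetric choice is the one used here.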
For example, a multivariate normal vector <math>\boldsymbol{x} \sim N(\boldsymbol{\mu}, \mathbf{\Sigma})</math> can be defined as <math>\boldsymbol{x}=\mathbf{S}\boldsymbol{z}+\boldsymbol{\mu}</math>, where <math>\boldsymbol{z} \sim N(\boldsymbol{0}, \mathbf{1})</math> is the multivariate standard normal.<ref name="Das"/>

=== Properties ===
[[File:MultivariateNormal.png|thumb|The standard deviation ellipse (green) of a two-dimensional normal distribution]]
* The eigenvectors and eigenvalues of <math>\mathbf{S}</math> correspond to the axes of the 1 sd error ellipsoid of the multivariate normal distribution. See ''[[Multivariate normal distribution#Geometric interpretation|Multivariate normal distribution: geometric interpretation]]''.
* The standard deviation of the ''projection'' of the multivariate distribution (i.e. the marginal distribution) onto a line in the direction of the unit vector <math>\hat{\boldsymbol{\eta}}</math> equals <math>\sqrt{\hat{\boldsymbol{\eta}}' \mathbf{\Sigma} \hat{\boldsymbol{\eta}}} = \lVert \mathbf{S} \hat{\boldsymbol{\eta}} \rVert</math>.<ref name="Das"/>
* The standard deviation of a ''slice'' of the multivariate distribution (i.e. the conditional distribution) along the line in the direction of the unit vector <math>\hat{\boldsymbol{\eta}}</math> equals <math>\frac{1}{\lVert \mathbf{S}^{-1}\hat{\boldsymbol{\eta}} \rVert}</math>.<ref name="Das"/>
* The [[Sensitivity index|discriminability index]] between two equal-covariance distributions is their [[Mahalanobis distance]], which can also be expressed in terms of the sd matrix: <math>d'=\sqrt{(\boldsymbol{\mu}_a-\boldsymbol{\mu}_b)'\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_a-\boldsymbol{\mu}_b)} = \lVert \mathbf{S}^{-1}\boldsymbol{d} \rVert</math>, where <math>\boldsymbol{d}=\boldsymbol{\mu}_a-\boldsymbol{\mu}_b</math> is the mean-difference vector.<ref name="Das"/>
* Since <math>\mathbf{S}</math> scales a normalized variable up to the distribution's covariance, <math>\mathbf{S}^{-1}</math> inverts that transformation: <math>\boldsymbol{z}=\mathbf{S}^{-1} (\boldsymbol{x}-\boldsymbol{\mu})</math> has zero mean and identity covariance, i.e. it is decorrelated and unit-variance. This is called the [[Whitening transformation|Mahalanobis whitening transform]].
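The projection property and the whitening transform can be checked numerically. The sketch below (again with an arbitrary example covariance and mean) compares the projected standard deviation against <math>\lVert \mathbf{S} \hat{\boldsymbol{\eta}} \rVert</math>, and confirms that whitened samples have approximately identity covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example parameters
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])
mu = np.array([1.0, -2.0])

# Symmetric square root of Sigma and its inverse
w, V = np.linalg.eigh(Sigma)
S = V @ np.diag(np.sqrt(w)) @ V.T
S_inv = np.linalg.inv(S)

# Projection property: sd along unit vector eta equals ||S eta||
eta = np.array([1.0, 1.0]) / np.sqrt(2.0)
proj_sd = np.sqrt(eta @ Sigma @ eta)
print(np.isclose(proj_sd, np.linalg.norm(S @ eta)))

# Mahalanobis whitening: z = S^-1 (x - mu) has ~identity covariance
x = rng.multivariate_normal(mu, Sigma, size=100_000)
z = (x - mu) @ S_inv.T
print(np.allclose(np.cov(z.T), np.eye(2), atol=0.05))
```

The sample covariance of the whitened data only approximates the identity, so the comparison uses a loose tolerance appropriate to the sample size.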