Singular value
{{Short description|Square roots of the eigenvalues of the self-adjoint operator}}
In [[mathematics]], in particular [[functional analysis]], the '''singular values''' of a [[compact operator]] <math>T: X \rightarrow Y</math> acting between [[Hilbert space]]s <math>X</math> and <math>Y</math> are the square roots of the (necessarily non-negative) [[eigenvalue]]s of the self-adjoint operator <math>T^*T</math> (where <math>T^*</math> denotes the [[adjoint operator|adjoint]] of <math>T</math>). The singular values are non-negative [[real number]]s, usually listed in decreasing order (''σ''<sub>1</sub>(''T''), ''σ''<sub>2</sub>(''T''), …). The largest singular value ''σ''<sub>1</sub>(''T'') is equal to the [[operator norm]] of ''T'' (see [[Min-max theorem#Min-max principle for singular values|Min-max theorem]]).

[[File:Singular value decomposition.gif|thumb|right|280px|Visualization of a [[singular value decomposition]] (SVD) of a 2-dimensional, real [[shear mapping|shearing matrix]] ''M''. First, we see the [[unit disc]] in blue together with the two [[standard basis|canonical unit vectors]]. We then see the action of ''M'', which distorts the disc to an [[ellipse]]. The SVD decomposes ''M'' into three simple transformations: a [[rotation matrix|rotation]] ''V''{{sup|*}}, a [[scaling (geometry)|scaling]] Σ along the rotated coordinate axes and a second rotation ''U''. Σ is a (square, in this example) [[diagonal matrix]] containing in its diagonal the singular values of ''M'', which represent the lengths ''σ''<sub>1</sub> and ''σ''<sub>2</sub> of the [[ellipse#Elements of an ellipse|semi-axes]] of the ellipse.]]

If ''T'' acts on Euclidean space <math>\Reals^n</math>, there is a simple geometric interpretation of the singular values: consider the image under <math>T</math> of the [[n-sphere|unit sphere]]; this is an [[ellipsoid]], and the lengths of its semi-axes are the singular values of <math>T</math> (the figure provides an example in <math>\Reals^2</math>).
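The definition can be checked numerically. The following sketch (assuming NumPy is available; the matrix is an arbitrary example chosen for illustration) compares the singular values of a real matrix with the square roots of the eigenvalues of <math>A^*A</math>, and confirms that the largest singular value equals the operator norm:

```python
import numpy as np

# An arbitrary example matrix.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# Singular values, returned in decreasing order.
sigma = np.linalg.svd(A, compute_uv=False)

# Square roots of the (non-negative) eigenvalues of A* A.
eigvals = np.linalg.eigvalsh(A.conj().T @ A)        # ascending order
roots = np.sqrt(np.clip(eigvals, 0.0, None))[::-1]  # reverse to descending

assert np.allclose(sigma, roots)

# The largest singular value equals the operator (spectral) norm of A.
assert np.isclose(sigma[0], np.linalg.norm(A, 2))
```

Geometrically, `sigma[0]` and `sigma[1]` are the semi-axis lengths of the ellipsoid obtained as the image of the unit sphere under ''A''.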
The singular values are the absolute values of the [[eigenvalues]] of a [[normal matrix]] ''A'', because the [[spectral theorem]] can be applied to obtain a unitary diagonalization of <math>A</math> as <math>A = U\Lambda U^*</math>. Therefore, {{nowrap|<math display="inline">\sqrt{A^* A} = \sqrt{U \Lambda^* \Lambda U^*} = U \left| \Lambda \right| U^*</math>.}}

Most [[normed linear space|norms]] on Hilbert space operators studied are defined using singular values. For example, the [[Ky Fan]] ''k''-norm is the sum of the first ''k'' singular values, the trace norm is the sum of all singular values, and the [[Schatten norm]] is the ''p''th root of the sum of the ''p''th powers of the singular values. Note that each norm is defined only on a special class of operators, hence singular values can be useful in classifying different operators.

In the finite-dimensional case, a [[matrix (mathematics)|matrix]] can always be decomposed in the form <math>\mathbf{U\Sigma V^*}</math>, where <math>\mathbf{U}</math> and <math>\mathbf{V^*}</math> are [[unitary matrix|unitary matrices]] and <math>\mathbf{\Sigma}</math> is a [[rectangular diagonal matrix]] with the singular values lying on the diagonal. This is the [[singular value decomposition]].

== Basic properties ==

For <math>A \in \mathbb{C}^{m \times n}</math> and <math>i = 1, 2, \ldots, \min\{m,n\}</math>:

[[Min-max theorem#Min-max principle for singular values|Min-max theorem for singular values]]. Here <math>U</math> ranges over subspaces of <math>\mathbb{C}^n</math> of the indicated dimension:
:<math>\begin{align} \sigma_i(A) &= \min_{\dim(U)=n-i+1} \max_{\underset{\| x \|_2 = 1}{x \in U}} \left\| Ax \right\|_2. \\ \sigma_i(A) &= \max_{\dim(U)=i} \min_{\underset{\| x \|_2 = 1}{x \in U}} \left\| Ax \right\|_2. \end{align}</math>

Matrix transpose and conjugate do not alter singular values:
:<math>\sigma_i(A) = \sigma_i\left(A^\textsf{T}\right) = \sigma_i\left(A^*\right).</math>

For any unitary <math>U \in \mathbb{C}^{m \times m}, V \in \mathbb{C}^{n \times n}:</math>
:<math>\sigma_i(A) = \sigma_i(UAV).</math>

Relation to eigenvalues:
:<math>\sigma_i^2(A) = \lambda_i\left(AA^*\right) = \lambda_i\left(A^*A\right).</math>

Relation to [[Trace (linear algebra)|trace]]:
:<math>\sum_{i=1}^n \sigma_i^2 = \text{tr}\ A^\ast A.</math>

If <math>A^* A</math> is full rank, the product of singular values is <math>\det \sqrt{A^* A}</math>. If <math>A A^*</math> is full rank, the product of singular values is <math>\det\sqrt{A A^*}</math>. If <math>A</math> is square and full rank, the product of singular values is <math>|\det A|</math>.

If <math>A</math> is [[Normal matrix|normal]], then <math>\sigma(A) = |\lambda(A)|</math>, that is, its singular values are the absolute values of its eigenvalues.

For a generic rectangular matrix <math>A</math>, let <math display="inline">\tilde{A} = \begin{bmatrix} 0 & A \\ A^* & 0 \end{bmatrix}</math> be its augmented matrix. It has eigenvalues <math display="inline">\pm \sigma(A)</math> (where <math display="inline">\sigma(A)</math> are the singular values of <math display="inline">A</math>), and the remaining eigenvalues are zero. Let <math display="inline">A = U\Sigma V^*</math> be the singular value decomposition; then the eigenvectors of <math display="inline">\tilde{A}</math> are <math display="inline">\begin{bmatrix} \mathbf{u}_i \\ \pm\mathbf{v}_i \end{bmatrix}</math> for <math>\pm \sigma_i</math>.<ref>{{Cite book |last=Tao |first=Terence |title=Topics in random matrix theory |date=2012 |publisher=American Mathematical Society |isbn=978-0-8218-7430-1 |series=Graduate studies in mathematics |location=Providence, R.I}}</ref>{{Pg|page=52}}

== The smallest singular value ==

The smallest singular value of a matrix ''A'' is ''σ''<sub>''n''</sub>(''A'').
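For a non-singular square matrix, this quantity governs the size of the inverse. A minimal numerical sketch (assuming NumPy; the matrix is an arbitrary random example, which is almost surely non-singular):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # generic, hence almost surely non-singular
s = np.linalg.svd(A, compute_uv=False)   # singular values, decreasing
sigma_min = s[-1]

Ainv = np.linalg.inv(A)

# The 2-norm of the inverse equals 1 / sigma_min(A).
assert np.isclose(np.linalg.norm(Ainv, 2), 1.0 / sigma_min)

# Every entry of the inverse is at most 1 / sigma_min(A) in absolute value.
assert np.abs(Ainv).max() <= 1.0 / sigma_min + 1e-12
```

The entrywise bound follows because the absolute value of each entry of a matrix is at most its 2-norm.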
It has the following properties for a non-singular matrix ''A'':
* The 2-norm of the inverse matrix ''A''<sup>−1</sup> equals 1/''σ''<sub>''n''</sub>(''A'').<ref name=":0">{{Cite book |last=Demmel |first=James W. |url=http://epubs.siam.org/doi/book/10.1137/1.9781611971446 |title=Applied Numerical Linear Algebra |date=January 1997 |publisher=Society for Industrial and Applied Mathematics |isbn=978-0-89871-389-3 |language=en |doi=10.1137/1.9781611971446}}</ref>{{Rp|location=Thm.3.3}}
* The absolute values of all elements of the inverse matrix ''A''<sup>−1</sup> are at most 1/''σ''<sub>''n''</sub>(''A'').<ref name=":0" />{{Rp|location=Thm.3.3}}

Intuitively, if ''σ''<sub>''n''</sub>(''A'') is small, then the rows of ''A'' are "almost" linearly dependent. If ''σ''<sub>''n''</sub>(''A'') = 0, then the rows of ''A'' are linearly dependent and ''A'' is not invertible.

== Inequalities about singular values ==

See also.<ref>[[Roger Horn|R. A. Horn]] and [[Charles Royal Johnson|C. R. Johnson]]. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991. Chap. 3</ref>

=== Singular values of sub-matrices ===

For <math>A \in \mathbb{C}^{m \times n}</math>:
# Let <math>B</math> denote <math>A</math> with one of its rows ''or'' columns deleted. Then <math display="block">\sigma_{i+1}(A) \leq \sigma_i (B) \leq \sigma_i(A)</math>
# Let <math>B</math> denote <math>A</math> with one of its rows ''and'' columns deleted. Then <math display="block">\sigma_{i+2}(A) \leq \sigma_i (B) \leq \sigma_i(A)</math>
# Let <math>B</math> denote an <math>(m-k)\times(n-\ell)</math> submatrix of <math>A</math>.
Then <math display="block">\sigma_{i+k+\ell}(A) \leq \sigma_i (B) \leq \sigma_i(A)</math>

=== Singular values of ''A'' + ''B'' ===

For <math>A, B \in \mathbb{C}^{m \times n}</math>:
# <math display="block">\sum_{i=1}^{k} \sigma_i(A + B) \leq \sum_{i=1}^{k} (\sigma_i(A) + \sigma_i(B)), \quad k=\min \{m,n\}</math>
# <math display="block">\sigma_{i+j-1}(A + B) \leq \sigma_i(A) + \sigma_j(B), \quad i,j\in\mathbb{N},\ i + j - 1 \leq \min \{m,n\}</math>

=== Singular values of ''AB'' ===

For <math>A, B \in \mathbb{C}^{n \times n}</math>:
# <math display="block">\begin{align} \prod_{i=n-k+1}^{n} \sigma_i(A) \sigma_i(B) &\leq \prod_{i=n-k+1}^{n} \sigma_i(AB), \\ \prod_{i=1}^k \sigma_i(AB) &\leq \prod_{i=1}^k \sigma_i(A) \sigma_i(B), \\ \sum_{i=1}^k \sigma_i^p(AB) &\leq \sum_{i=1}^k \sigma_i^p(A) \sigma_i^p(B). \end{align}</math>
# <math display="block">\sigma_n(A) \sigma_i(B) \leq \sigma_i (AB) \leq \sigma_1(A) \sigma_i(B), \quad i = 1, 2, \ldots, n.</math>

For <math>A, B \in \mathbb{C}^{m \times n}</math>:<ref>X. Zhan. Matrix Inequalities. Springer-Verlag, Berlin, Heidelberg, 2002. p. 28</ref>
<math display="block">2 \sigma_i(A B^*) \leq \sigma_i \left(A^* A + B^* B\right), \quad i = 1, 2, \ldots, n.</math>

=== Singular values and eigenvalues ===

For <math>A \in \mathbb{C}^{n \times n}</math>:
# See<ref>R. Bhatia. Matrix Analysis. Springer-Verlag, New York, 1997. Prop. III.5.1</ref> <math display="block">\lambda_i \left(A + A^*\right) \leq 2 \sigma_i(A), \quad i = 1, 2, \ldots, n.</math>
# Assume <math>\left|\lambda_1(A)\right| \geq \cdots \geq \left|\lambda_n(A)\right|</math>. Then for <math>k = 1, 2, \ldots, n</math>:
## [[Weyl's inequality#Weyl's inequality in matrix theory|Weyl's theorem]] <math display="block"> \prod_{i=1}^k \left|\lambda_i(A)\right| \leq \prod_{i=1}^{k} \sigma_i(A).</math>
## For <math>p>0</math>:
<math display="block"> \sum_{i=1}^k \left|\lambda_i^p(A)\right| \leq \sum_{i=1}^{k} \sigma_i^p(A).</math>

== History ==

This concept was introduced by [[Erhard Schmidt]] in 1907. Schmidt called singular values "eigenvalues" at that time. The name "singular value" was first used by Smithies in 1937. In 1957, Allahverdiev proved the following characterization of the ''n''th singular number:<ref>[[Israel Gohberg|I. C. Gohberg]] and [[Mark Krein|M. G. Krein]]. Introduction to the Theory of Linear Non-selfadjoint Operators. American Mathematical Society, Providence, R.I., 1969. Translated from the Russian by A. Feinstein. Translations of Mathematical Monographs, Vol. 18.</ref>
: <math>\sigma_n(T) = \inf\big\{\, \|T-L\| : L\text{ is an operator of finite rank }<n \,\big\}.</math>
This formulation made it possible to extend the notion of singular values to operators on [[Banach space]]s. Note that there is a more general concept of ''[[s-numbers]]'', which also includes the Gelfand and Kolmogorov widths.

== See also ==
* [[Condition number]]
* [[Min-max theorem#Cauchy interlacing theorem|Cauchy interlacing theorem]] or [[Poincaré separation theorem]]
* [[Schur–Horn theorem]]
* [[Singular value decomposition]]

== References ==
{{Reflist}}

[[Category:Operator theory]]
[[Category:Singular value decomposition]]
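In finite dimensions, Allahverdiev's characterization reduces to the Eckart–Young theorem: the operator-norm distance from a matrix to the set of matrices of rank less than ''n'' is ''σ''<sub>''n''</sub>, and it is attained by truncating the SVD. A numerical sketch (assuming NumPy; the matrix is an arbitrary random example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
U, s, Vh = np.linalg.svd(A)

# Truncate the SVD to the k largest singular values: this gives the best
# rank-k approximation of A in the operator (2-) norm.
k = 2
L = (U[:, :k] * s[:k]) @ Vh[:k, :]
assert np.linalg.matrix_rank(L) == k

# The approximation error equals the next singular value, sigma_{k+1}(A).
assert np.isclose(np.linalg.norm(A - L, 2), s[k])
```

Here `(U[:, :k] * s[:k]) @ Vh[:k, :]` forms the rank-''k'' truncation by scaling the first ''k'' columns of ''U'' by the corresponding singular values.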