Singular value
In mathematics, in particular functional analysis, the singular values of a compact operator <math>T: X \rightarrow Y</math> acting between Hilbert spaces <math>X</math> and <math>Y</math> are the square roots of the (necessarily non-negative) eigenvalues of the self-adjoint operator <math>T^*T</math> (where <math>T^*</math> denotes the adjoint of <math>T</math>).
The singular values are non-negative real numbers, usually listed in decreasing order (σ1(T), σ2(T), …). The largest singular value σ1(T) is equal to the operator norm of T (see Min-max theorem).
If T acts on Euclidean space <math>\Reals ^n</math>, there is a simple geometric interpretation for the singular values: Consider the image by <math>T</math> of the unit sphere; this is an ellipsoid, and the lengths of its semi-axes are the singular values of <math>T</math> (the figure provides an example in <math>\Reals^2</math>).
The singular values are the absolute values of the eigenvalues of a normal matrix <math>A</math>, because the spectral theorem can be applied to obtain a unitary diagonalization of <math>A</math> as <math>A = U\Lambda U^*</math>. Therefore, <math>\sqrt{A^* A} = \sqrt{U \Lambda^* \Lambda U^*} = U \left|\Lambda\right| U^*</math>, so the singular values are the absolute values of the diagonal entries of <math>\Lambda</math>.
Most norms on Hilbert space operators studied are defined using singular values. For example, the Ky Fan k-norm is the sum of the first k singular values, the trace norm is the sum of all singular values, and the Schatten p-norm is the pth root of the sum of the pth powers of the singular values. Each norm is defined only on a special class of operators, hence singular values can be useful in classifying different operators.
In the finite-dimensional case, a matrix can always be decomposed in the form <math>\mathbf{U\Sigma V^*}</math>, where <math>\mathbf{U}</math> and <math>\mathbf{V^*}</math> are unitary matrices and <math>\mathbf{\Sigma}</math> is a rectangular diagonal matrix with the singular values lying on the diagonal. This is the singular value decomposition.
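As an illustrative numerical check of the decomposition and of the identity σ1(T) = ‖T‖ (the matrix and seed below are arbitrary choices, assuming NumPy is available):

```python
import numpy as np

# Arbitrary example matrix for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# Full SVD: A = U @ Sigma @ V^*, singular values on Sigma's diagonal,
# listed in decreasing order.
U, s, Vh = np.linalg.svd(A)
Sigma = np.zeros_like(A)
np.fill_diagonal(Sigma, s)

# The factors reconstruct A, and the largest singular value equals
# the operator (spectral) 2-norm of A.
reconstruction_ok = np.allclose(U @ Sigma @ Vh, A)
norm_matches = np.isclose(s[0], np.linalg.norm(A, 2))
```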
Basic properties
In this section, <math>A \in \mathbb{C}^{m \times n}</math> and <math>i = 1, 2, \ldots, \min \{m,n\}</math>.
Min-max theorem for singular values. Here <math>U</math> denotes a subspace of <math>\mathbb{C}^n</math> of the indicated dimension.
- <math>\begin{align}
\sigma_i(A) &= \min_{\dim(U)=n-i+1} \max_{\underset{\| x \|_2 = 1}{x \in U}} \left\| Ax \right\|_2. \\ \sigma_i(A) &= \max_{\dim(U)=i} \min_{\underset{\| x \|_2 = 1}{x \in U}} \left\| Ax \right\|_2.
\end{align}</math>
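For the extreme indices the min-max formulas reduce to <math>\sigma_1(A) = \max_{\|x\|_2 = 1} \|Ax\|_2</math> and <math>\sigma_{\min\{m,n\}}(A) = \min_{\|x\|_2 = 1} \|Ax\|_2</math>, which can be checked numerically (a sketch assuming NumPy; the matrix and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
s = np.linalg.svd(A, compute_uv=False)

# ||Ax|| over random unit vectors x must lie between the smallest and
# largest singular values, which are the min and max over the sphere.
X = rng.standard_normal((3, 1000))
X /= np.linalg.norm(X, axis=0)  # normalize each column to a unit vector
vals = np.linalg.norm(A @ X, axis=0)
bounds_hold = np.all(vals <= s[0] + 1e-12) and np.all(vals >= s[-1] - 1e-12)
```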
Matrix transpose and conjugate do not alter singular values.
- <math>\sigma_i(A) = \sigma_i\left(A^\textsf{T}\right) = \sigma_i\left(A^*\right).</math>
For any unitary <math>U \in \mathbb{C}^{m \times m}</math> and <math>V \in \mathbb{C}^{n \times n}</math>:
- <math>\sigma_i(A) = \sigma_i(UAV).</math>
Relation to eigenvalues:
- <math>\sigma_i^2(A) = \lambda_i\left(AA^*\right) = \lambda_i\left(A^*A\right).</math>
Relation to trace:
- <math>\sum_{i=1}^{\min\{m,n\}} \sigma_i^2(A) = \operatorname{tr}\left(A^* A\right)</math>.
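The trace relation says the sum of squared singular values equals the squared Frobenius norm. A numerical check (assuming NumPy; the complex matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
s = np.linalg.svd(A, compute_uv=False)

# Sum of squared singular values equals tr(A^* A), i.e. the squared
# Frobenius norm of A.
lhs = np.sum(s**2)
rhs = np.trace(A.conj().T @ A).real
trace_matches = np.isclose(lhs, rhs)
```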
If <math>A^* A</math> is full rank, the product of singular values is <math>\det \sqrt{A^* A}</math>.
If <math>A A^*</math> is full rank, the product of singular values is <math>\det\sqrt{ A A^*}</math>.
If <math>A</math> is square and full rank, the product of singular values is <math>|\det A|</math>.
If <math>A</math> is normal, then <math>\sigma(A) = |\lambda(A)|</math>, that is, its singular values are the absolute values of its eigenvalues.
For a generic rectangular matrix <math>A</math>, let <math display="inline">\tilde{A} = \begin{bmatrix} 0 & A \\ A^* & 0 \end{bmatrix}</math> be its augmented matrix. Its nonzero eigenvalues are <math display="inline">\pm \sigma(A)</math> (where <math display="inline">\sigma(A)</math> are the nonzero singular values of <math display="inline">A</math>) and the remaining eigenvalues are zero. If <math display="inline">A = U\Sigma V^*</math> is the singular value decomposition, then the eigenvectors of <math display="inline">\tilde{A}</math> corresponding to the eigenvalues <math>\pm \sigma_i</math> are <math display="inline">\begin{bmatrix} \mathbf{u}_i \\ \pm\mathbf{v}_i \end{bmatrix}</math>.
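The augmented-matrix property can be verified numerically (a sketch assuming NumPy; the real matrix is an arbitrary example, so <math>A^* = A^\textsf{T}</math>):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2))
s = np.linalg.svd(A, compute_uv=False)

# Augmented Hermitian matrix [[0, A], [A^*, 0]]: its eigenvalues are
# +/- the singular values of A, padded with zeros.
m, n = A.shape
Atilde = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])
eig = np.sort(np.linalg.eigvalsh(Atilde))

expected = np.sort(np.concatenate([s, -s, np.zeros(m + n - 2 * len(s))]))
eigs_match = np.allclose(eig, expected)
```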
The smallest singular value
The smallest singular value of a matrix <math>A</math> is <math>\sigma_n(A)</math>. For a non-singular matrix <math>A</math> it has the following properties:
- The 2-norm of the inverse matrix equals the inverse of the smallest singular value: <math>\left\|A^{-1}\right\|_2 = \sigma_n^{-1}(A)</math>.
- The absolute value of every entry of <math>A^{-1}</math> is at most <math>\sigma_n^{-1}(A)</math>.
Intuitively, if <math>\sigma_n(A)</math> is small, then the rows of <math>A</math> are "almost" linearly dependent. If <math>\sigma_n(A) = 0</math>, then the rows of <math>A</math> are linearly dependent and <math>A</math> is not invertible.
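Both properties of the smallest singular value can be checked numerically (a sketch assuming NumPy; the matrix is an arbitrary non-singular example):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)

# For non-singular A: ||A^{-1}||_2 = 1 / sigma_n(A), and every entry of
# A^{-1} is bounded in absolute value by 1 / sigma_n(A).
Ainv = np.linalg.inv(A)
inv_norm_matches = np.isclose(np.linalg.norm(Ainv, 2), 1.0 / s[-1])
entries_bounded = np.all(np.abs(Ainv) <= 1.0 / s[-1] + 1e-12)
```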
Inequalities about singular values
For the inequalities below, see also Horn and Johnson.<ref>R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991. Chap. 3.</ref>
Singular values of sub-matrices
For <math>A \in \mathbb{C}^{m \times n}.</math>
- Let <math>B</math> denote <math>A</math> with one of its rows or columns deleted. Then <math display="block">\sigma_{i+1}(A) \leq \sigma_i (B) \leq \sigma_i(A)</math>
- Let <math>B</math> denote <math>A</math> with one of its rows and columns deleted. Then <math display="block">\sigma_{i+2}(A) \leq \sigma_i (B) \leq \sigma_i(A)</math>
- Let <math>B</math> denote an <math>(m-k)\times(n-\ell)</math> submatrix of <math>A</math>. Then <math display="block">\sigma_{i+k+\ell}(A) \leq \sigma_i (B) \leq \sigma_i(A)</math>
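The first interlacing inequality can be checked numerically (a sketch assuming NumPy; the matrix and deleted column are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 4))
s = np.linalg.svd(A, compute_uv=False)

# Delete one column: the singular values of the submatrix B interlace
# those of A, i.e. sigma_{i+1}(A) <= sigma_i(B) <= sigma_i(A).
B = np.delete(A, 1, axis=1)
sB = np.linalg.svd(B, compute_uv=False)
upper_ok = np.all(sB <= s[: len(sB)] + 1e-12)
lower_ok = np.all(s[1 : len(sB) + 1] <= sB + 1e-12)
```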
Singular values of A + B
For <math>A, B \in \mathbb{C}^{m \times n}</math>
- <math display="block">\sum_{i=1}^{k} \sigma_i(A + B) \leq \sum_{i=1}^{k} (\sigma_i(A) + \sigma_i(B)), \quad k=\min \{m,n\}</math>
- <math display="block">\sigma_{i+j-1}(A + B) \leq \sigma_i(A) + \sigma_j(B). \quad i,j\in\mathbb{N},\ i + j - 1 \leq \min \{m,n\}</math>
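Both inequalities for sums of matrices can be checked numerically (a sketch assuming NumPy; matrices are arbitrary examples, with zero-based indices in the code):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3))
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
sAB = np.linalg.svd(A + B, compute_uv=False)

# sigma_{i+j-1}(A + B) <= sigma_i(A) + sigma_j(B) whenever
# i + j - 1 <= min(m, n); here i, j are zero-based, so the one-based
# condition becomes i + j < k and the left index is i + j.
k = len(sAB)
pointwise_ok = all(
    sAB[i + j] <= sA[i] + sB[j] + 1e-12
    for i in range(k)
    for j in range(k)
    if i + j < k
)
# Partial sums: sum_{i<=k} sigma_i(A+B) <= sum_{i<=k} (sigma_i(A)+sigma_i(B)).
sums_ok = np.all(np.cumsum(sAB) <= np.cumsum(sA + sB) + 1e-12)
```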
Singular values of AB
For <math>A, B \in \mathbb{C}^{n \times n}</math>
- <math display="block">\begin{align}
\prod_{i=n-k+1}^{n} \sigma_i(A) \sigma_i(B) &\leq \prod_{i=n-k+1}^{n} \sigma_i(AB), \\ \prod_{i=1}^k \sigma_i(AB) &\leq \prod_{i=1}^k \sigma_i(A) \sigma_i(B), \\ \sum_{i=1}^k \sigma_i^p(AB) &\leq \sum_{i=1}^k \sigma_i^p(A) \sigma_i^p(B), \quad p > 0,
\end{align}</math>
- <math display="block">\sigma_n(A) \sigma_i(B) \leq \sigma_i (AB) \leq \sigma_1(A) \sigma_i(B) \quad i = 1, 2, \ldots, n. </math>
For <math>A, B \in \mathbb{C}^{m \times n}</math><ref>X. Zhan. Matrix Inequalities. Springer-Verlag, Berlin, Heidelberg, 2002. p.28</ref> <math display="block">2 \sigma_i(A B^*) \leq \sigma_i \left(A^* A + B^* B\right), \quad i = 1, 2, \ldots, n. </math>
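The multiplicative majorization and the sandwich bound for products of square matrices can be checked numerically (a sketch assuming NumPy; matrices are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
sAB = np.linalg.svd(A @ B, compute_uv=False)

# Sandwich bound: sigma_n(A) sigma_i(B) <= sigma_i(AB) <= sigma_1(A) sigma_i(B).
sandwich_ok = (np.all(sA[-1] * sB <= sAB + 1e-12)
               and np.all(sAB <= sA[0] * sB + 1e-12))

# Multiplicative majorization:
# prod_{i<=k} sigma_i(AB) <= prod_{i<=k} sigma_i(A) sigma_i(B) for every k.
products_ok = np.all(np.cumprod(sAB) <= np.cumprod(sA * sB) + 1e-9)
```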
Singular values and eigenvalues
For <math>A \in \mathbb{C}^{n \times n}</math>.
- See<ref>R. Bhatia. Matrix Analysis. Springer-Verlag, New York, 1997. Prop. III.5.1</ref> <math display="block">\lambda_i \left(A + A^*\right) \leq 2 \sigma_i(A), \quad i = 1, 2, \ldots, n.</math>
- Assume <math>\left|\lambda_1(A)\right| \geq \cdots \geq \left|\lambda_n(A)\right|</math>. Then for <math>k = 1, 2, \ldots, n</math>:
- Weyl's theorem <math display="block"> \prod_{i=1}^k \left|\lambda_i(A)\right| \leq \prod_{i=1}^{k} \sigma_i(A).</math>
- For <math>p>0</math>: <math display="block"> \sum_{i=1}^k \left|\lambda_i^p(A)\right| \leq \sum_{i=1}^{k} \sigma_i^p(A).</math>
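Weyl's theorem can be checked numerically; for <math>k = n</math> both products equal <math>|\det A|</math> (a sketch assuming NumPy; the matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)

# Order the eigenvalues by decreasing absolute value.
lam = np.linalg.eigvals(A)
lam = lam[np.argsort(-np.abs(lam))]

# Weyl: prod_{i<=k} |lambda_i| <= prod_{i<=k} sigma_i for every k.
weyl_ok = np.all(np.cumprod(np.abs(lam)) <= np.cumprod(s) + 1e-9)

# At k = n the products agree: both equal |det A|.
det_matches = np.isclose(np.prod(s), abs(np.linalg.det(A)))
```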
History
This concept was introduced by Erhard Schmidt in 1907. Schmidt called singular values "eigenvalues" at that time. The name "singular value" was first used by Smithies in 1937. In 1957, Allahverdiev proved the following characterization of the nth singular number:<ref>I. C. Gohberg and M. G. Krein. Introduction to the Theory of Linear Non-selfadjoint Operators. American Mathematical Society, Providence, R.I., 1969. Translated from the Russian by A. Feinstein. Translations of Mathematical Monographs, Vol. 18.</ref>
- <math>\sigma_n(T) = \inf\big\{\, \|T-L\| : L\text{ is an operator of finite rank }<n \,\big\}.</math>
This formulation made it possible to extend the notion of singular values to operators on Banach spaces. Note that there is a more general concept of s-numbers, which also includes the Gelfand and Kolmogorov widths.
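In finite dimensions, Allahverdiev's characterization specializes to the Eckart–Young theorem: the nearest matrix of rank less than <math>n</math> in the 2-norm is the truncated SVD, at distance <math>\sigma_n(A)</math>. A numerical check (a sketch assuming NumPy; matrix and index are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((5, 4))
U, s, Vh = np.linalg.svd(A)

# Distance from A to the matrices of rank < n equals sigma_n(A); the
# minimizer is the SVD truncated to the leading n - 1 terms.
n = 3  # examine the 3rd singular number (arbitrary choice)
L = U[:, : n - 1] @ np.diag(s[: n - 1]) @ Vh[: n - 1, :]  # rank n - 1
distance_matches = np.isclose(np.linalg.norm(A - L, 2), s[n - 1])
```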
See also
- Condition number
- Cauchy interlacing theorem or Poincaré separation theorem
- Schur–Horn theorem
- Singular value decomposition