== Variations and generalizations ==

=== Scale-invariant SVD ===
The singular values of a matrix {{tmath|\mathbf A}} are uniquely defined and are invariant with respect to left and/or right unitary transformations of {{tmath|\mathbf A.}} In other words, the singular values of {{tmath|\mathbf U \mathbf A \mathbf V,}} for unitary matrices {{tmath|\mathbf U}} and {{tmath|\mathbf V,}} are equal to the singular values of {{tmath|\mathbf A.}} This is an important property for applications in which it is necessary to preserve Euclidean distances and invariance with respect to rotations.

The Scale-Invariant SVD, or SI-SVD,<ref>{{citation |last=Uhlmann |first=Jeffrey |author-link=Jeffrey Uhlmann |title=A Generalized Matrix Inverse that is Consistent with Respect to Diagonal Transformations |journal=SIAM Journal on Matrix Analysis and Applications |year=2018 |volume=39 |issue=2 |pages=781–800 |url=http://faculty.missouri.edu/uhlmannj/UC-SIMAX-Final.pdf |url-status=dead |archive-url=https://web.archive.org/web/20190617095052id_/http://faculty.missouri.edu/uhlmannj/UC-SIMAX-Final.pdf |archive-date=2019-06-17}}</ref> is analogous to the conventional SVD except that its uniquely determined singular values are invariant with respect to diagonal transformations of {{tmath|\mathbf A.}} In other words, the singular values of {{tmath|\mathbf D \mathbf A \mathbf E,}} for invertible diagonal matrices {{tmath|\mathbf D}} and {{tmath|\mathbf E,}} are equal to the singular values of {{tmath|\mathbf A.}} This is an important property for applications in which invariance to the choice of units on variables (e.g., metric versus imperial units) is needed.
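The unitary invariance of the ordinary singular values, and the failure of that invariance under diagonal scalings that the SI-SVD is designed to remedy, can be checked numerically. The following is a minimal sketch assuming [[NumPy]]; the matrices and scalings are arbitrary illustrations and are not taken from the cited reference.

<syntaxhighlight lang="python">
# Numerical check: ordinary singular values are invariant under unitary
# transformations of A, but not under (non-unitary) diagonal scalings.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# Random orthogonal (real unitary) matrices from QR factorizations.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))

sv_A = np.linalg.svd(A, compute_uv=False)
sv_UAV = np.linalg.svd(U @ A @ V, compute_uv=False)
print(np.allclose(sv_A, sv_UAV))   # True: unitary invariance

# Invertible diagonal scalings generally change the ordinary singular
# values; the SI-SVD is defined so that its singular values do not.
D = np.diag([1.0, 10.0, 0.5, 2.0])
E = np.diag([3.0, 0.1, 7.0])
sv_DAE = np.linalg.svd(D @ A @ E, compute_uv=False)
print(np.allclose(sv_A, sv_DAE))   # False (in general)
</syntaxhighlight>

=== Bounded operators on Hilbert spaces ===
The factorization {{tmath|\mathbf M {{=}} \mathbf U \mathbf \Sigma \mathbf V^*}} can be extended to a [[bounded operator]] {{tmath|\mathbf M}} on a separable Hilbert space {{tmath|H.}} Namely, for any bounded operator {{tmath|\mathbf M,}} there exist a [[partial isometry]] {{tmath|\mathbf U,}} a unitary {{tmath|\mathbf V,}} a measure space {{tmath|(X, \mu),}} and a non-negative measurable function {{tmath|f}} such that
<math display=block>
\mathbf{M} = \mathbf{U} T_f \mathbf{V}^*
</math>
where {{tmath|T_f}} is the [[multiplication operator|multiplication by {{tmath|f}}]] on {{tmath|L^2(X, \mu).}}

This can be shown by mimicking the linear algebraic argument for the matrix case above. {{tmath|\mathbf V T_f \mathbf V^*}} is the unique positive square root of {{tmath|\mathbf M^* \mathbf M,}} as given by the [[Borel functional calculus]] for [[self-adjoint operator]]s. The reason why {{tmath|\mathbf U}} need not be unitary is that, unlike in the finite-dimensional case, given an isometry {{tmath|U_1}} with nontrivial kernel, a suitable {{tmath|U_2}} may not be found such that
<math display=block>
\begin{bmatrix} U_1 \\ U_2 \end{bmatrix}
</math>
is a unitary operator.

As for matrices, the singular value factorization is equivalent to the [[polar decomposition]] for operators: we can simply write
<math display=block>
\mathbf M = \mathbf U \mathbf V^* \cdot \mathbf V T_f \mathbf V^*
</math>
and notice that {{tmath|\mathbf U \mathbf V^*}} is still a partial isometry while {{tmath|\mathbf V T_f \mathbf V^*}} is positive.

=== Singular values and compact operators ===
The notion of singular values and left/right-singular vectors can be extended to [[compact operator on Hilbert space|compact operators on Hilbert space]] as they have a discrete spectrum. If {{tmath|T}} is compact, every non-zero {{tmath|\lambda}} in its spectrum is an eigenvalue.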
Furthermore, a compact self-adjoint operator can be diagonalized by its eigenvectors. If {{tmath|\mathbf M}} is compact, so is {{tmath|\mathbf M^* \mathbf M}}. Applying the diagonalization result, the unitary image of its positive square root {{tmath|T_f}} has a set of orthonormal eigenvectors {{tmath| \{e_i\} }} corresponding to strictly positive eigenvalues {{tmath| \{\sigma_i\} }}. For any {{tmath|\psi}} in {{tmath|H,}}
<math display=block>
\mathbf{M} \psi = \mathbf{U} T_f \mathbf{V}^* \psi = \sum_i \left \langle \mathbf{U} T_f \mathbf{V}^* \psi, \mathbf{U} e_i \right \rangle \mathbf{U} e_i = \sum_i \sigma_i \left \langle \psi, \mathbf{V} e_i \right \rangle \mathbf{U} e_i,
</math>
where the series converges in the norm topology on {{tmath|H.}} Notice how this resembles the expression from the finite-dimensional case. The {{tmath|\sigma_i}} are called the singular values of {{tmath|\mathbf M,}} and {{tmath|\{\mathbf U e_i\} }} (resp. {{tmath|\{\mathbf V e_i\} }}) can be considered the left-singular (resp. right-singular) vectors of {{tmath|\mathbf M.}}

Compact operators on a Hilbert space are the closure of [[finite-rank operator]]s in the uniform operator topology. The above series expression gives an explicit such representation. An immediate consequence of this is:

:'''Theorem.''' {{tmath|\mathbf M}} is compact if and only if {{tmath|\mathbf M^* \mathbf M}} is compact.
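In the finite-dimensional case the series above reduces to a finite sum over the singular triples, and truncating it gives the finite-rank representations just mentioned. The following is a minimal numerical sketch, assuming NumPy and using an arbitrary real matrix and vector for illustration.

<syntaxhighlight lang="python">
# Finite-dimensional analogue of M psi = sum_i sigma_i <psi, v_i> u_i,
# together with the finite-rank (truncated) representation it provides.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 4))
psi = rng.standard_normal(4)

U, s, Vh = np.linalg.svd(M, full_matrices=False)  # columns of U, rows of Vh

# Reconstruct M @ psi from the singular triples (sigma_i, u_i, v_i);
# real case, so the inner product <psi, v_i> needs no conjugation.
M_psi = sum(s[i] * (Vh[i] @ psi) * U[:, i] for i in range(len(s)))
print(np.allclose(M @ psi, M_psi))   # True

# Truncating the sum yields a finite-rank approximation of M.
k = 2
M_k = (U[:, :k] * s[:k]) @ Vh[:k]
print(np.linalg.matrix_rank(M_k))    # k
</syntaxhighlight>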