=== Numerical approach ===
The singular value decomposition can be computed using the following observations:
* The left-singular vectors of {{tmath|\mathbf M}} are a set of [[orthonormal]] [[eigenvectors]] of {{tmath|\mathbf M \mathbf M^*}}.
* The right-singular vectors of {{tmath|\mathbf M}} are a set of orthonormal eigenvectors of {{tmath|\mathbf M^* \mathbf M}}.
* The non-zero singular values of {{tmath|\mathbf M}} (found on the diagonal entries of <math>\mathbf \Sigma</math>) are the square roots of the non-zero [[eigenvalues]] of both {{tmath|\mathbf M^* \mathbf M}} and {{tmath|\mathbf M \mathbf M^*}}.
A numerical check of these observations is given in the sketch at the end of this subsection.

The SVD of a matrix {{tmath|\mathbf M}} is typically computed by a two-step procedure. In the first step, the matrix is reduced to a [[bidiagonal matrix]]. This takes [[big O notation|order]] {{tmath|O(mn^2)}} floating-point operations (flops), assuming that {{tmath|m \geq n}}. The second step is to compute the SVD of the bidiagonal matrix. This step can only be done with an [[iterative method]] (as with [[eigenvalue algorithm]]s). However, in practice it suffices to compute the SVD up to a certain precision, such as the [[machine epsilon]]. If this precision is considered constant, then the second step takes {{tmath|O(n)}} iterations, each costing {{tmath|O(n)}} flops. Thus, the first step is more expensive, and the overall cost is {{tmath|O(mn^2)}} flops {{harv|Trefethen|Bau III|1997|loc=Lecture 31}}.

The first step can be done using [[Householder reflection]]s for a cost of {{tmath|4mn^2 - 4n^3/3}} flops, assuming that only the singular values are needed and not the singular vectors. If {{tmath|m}} is much larger than {{tmath|n}}, then it is advantageous to first reduce the matrix {{tmath|\mathbf M}} to a triangular matrix with the [[QR decomposition]] and then use Householder reflections to further reduce the matrix to bidiagonal form; the combined cost is {{tmath|2mn^2 + 2n^3}} flops {{harv|Trefethen|Bau III|1997|loc=Lecture 31}}.

The second step can be done by a variant of the [[QR algorithm]] for the computation of eigenvalues, which was first described by {{harvtxt|Golub|Kahan|1965}}. The [[LAPACK]] subroutine DBDSQR<ref>[http://www.netlib.org/lapack/double/dbdsqr.f Netlib.org]</ref> implements this iterative method, with some modifications to cover the case where the singular values are very small {{harv|Demmel|Kahan|1990}}. Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD<ref>[http://www.netlib.org/lapack/double/dgesvd.f Netlib.org]</ref> routine for the computation of the singular value decomposition. The same algorithm is implemented in the [[GNU Scientific Library]] (GSL). The GSL also offers an alternative method that uses a one-sided [[Jacobi orthogonalization]] in step 2 {{harv|GSL Team|2007}}. This method computes the SVD of the bidiagonal matrix by solving a sequence of {{tmath|2 \times 2}} SVD problems, similar to how the [[Jacobi eigenvalue algorithm]] solves a sequence of {{tmath|2 \times 2}} eigenvalue problems {{harv|Golub|Van Loan|1996|loc=§8.6.3}}. Yet another method for step 2 uses the idea of [[divide-and-conquer eigenvalue algorithm]]s {{harv|Trefethen|Bau III|1997|loc=Lecture 31}}.
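The three observations listed above can be checked numerically. The following is a minimal illustrative sketch in Python with NumPy, assuming a real matrix (so that {{tmath|\mathbf M^*}} is simply the transpose); the random test matrix, its dimensions, and the seed are arbitrary choices, not part of any standard routine:

<syntaxhighlight lang="python">
# Minimal sketch: verify the eigenvalue/singular-value correspondence
# for a random real matrix M, where the adjoint M* is the transpose.
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed for reproducibility
M = rng.standard_normal((5, 3))

# Reference SVD (NumPy delegates to LAPACK); s is sorted descending.
U, s, Vh = np.linalg.svd(M, full_matrices=False)

# Eigendecomposition of the symmetric matrix M^T M; eigh returns the
# eigenvalues in ascending order, so reverse to match the SVD convention.
evals, evecs = np.linalg.eigh(M.T @ M)
evals, evecs = evals[::-1], evecs[:, ::-1]

# Non-zero singular values are the square roots of the non-zero eigenvalues.
assert np.allclose(s, np.sqrt(np.clip(evals, 0, None)))

# The right-singular vectors are eigenvectors of M^T M (up to sign).
for k in range(3):
    assert np.allclose(np.abs(Vh[k]), np.abs(evecs[:, k]))

# The left-singular vectors are eigenvectors of M M^T with the same
# non-zero eigenvalues.
for k in range(3):
    assert np.allclose((M @ M.T) @ U[:, k], evals[k] * U[:, k])
</syntaxhighlight>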
There is an alternative way that does not explicitly use the eigenvalue decomposition.<ref>[http://www.mathworks.co.kr/matlabcentral/fileexchange/12674-simple-svd mathworks.co.kr/matlabcentral/fileexchange/12674-simple-svd]</ref> Usually the singular value problem of a matrix {{tmath|\mathbf M}} is converted into an equivalent symmetric eigenvalue problem such as {{tmath|\mathbf M \mathbf M^*,}} {{tmath|\mathbf M^* \mathbf M,}} or

<math display=block>
\begin{bmatrix}
\mathbf{0} & \mathbf{M} \\
\mathbf{M}^* & \mathbf{0}
\end{bmatrix}.
</math>

The approaches that use eigenvalue decompositions are based on the [[QR algorithm]], which is well developed to be stable and fast. Note, however, that the singular values are real, and the right- and left-singular vectors are not required to form similarity transformations. One can therefore iteratively alternate between the [[QR decomposition]] and the [[LQ decomposition]] to reduce the matrix to real diagonal form. The [[QR decomposition]] gives {{tmath|\mathbf M \Rightarrow \mathbf Q \mathbf R,}} and the [[LQ decomposition]] of {{tmath|\mathbf R}} gives {{tmath|\mathbf R \Rightarrow \mathbf L \mathbf P^*.}} Thus, at every iteration we have {{tmath|\mathbf M \Rightarrow \mathbf Q \mathbf L \mathbf P^*;}} we then update {{tmath|\mathbf M \Leftarrow \mathbf L}} and repeat the orthogonalizations. For a matrix with distinct singular values, {{tmath|\mathbf L}} converges to the diagonal matrix of singular values, and the accumulated products of the {{tmath|\mathbf Q}} and {{tmath|\mathbf P}} factors converge to the left- and right-unitary singular matrices, respectively. This approach cannot readily be accelerated, as the QR algorithm can be with spectral shifts or deflation, because the shift method is not easily defined without using similarity transformations. However, the iteration is very simple to implement, so it is a good choice when speed does not matter. This method also provides insight into how purely orthogonal/unitary transformations can obtain the SVD.
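A minimal sketch of this alternating QR/LQ iteration follows, in Python with NumPy, assuming a real square matrix and a fixed iteration count chosen for illustration (a practical implementation would instead monitor the off-diagonal entries for convergence); the function name <code>svd_qr_lq</code> and its parameters are hypothetical, not a standard routine:

<syntaxhighlight lang="python">
# Sketch of the alternating QR/LQ iteration described above, for a real
# square matrix. Convergence is slow without shifts, so this is
# illustrative only, not a substitute for LAPACK.
import numpy as np

def svd_qr_lq(M, iters=500):
    """Return (U, S, Vh) with U @ np.diag(S) @ Vh approximately equal to M."""
    n = M.shape[0]
    U, V = np.eye(n), np.eye(n)
    A = M.copy()
    for _ in range(iters):
        # QR step: A = Q R; absorb Q into the left unitary factor.
        Q, R = np.linalg.qr(A)
        U = U @ Q
        # LQ step via the QR decomposition of R^T: R^T = P L^T, i.e. R = L P^T.
        P, Lt = np.linalg.qr(R.T)
        V = V @ P
        A = Lt.T                  # update M <- L and repeat
    d = np.diag(A)                # diagonal converges to +/- singular values
    signs = np.where(d < 0, -1.0, 1.0)
    return U * signs, np.abs(d), V.T   # absorb signs into the columns of U

# Usage: compare against NumPy's built-in SVD on an arbitrary random matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
U, S, Vh = svd_qr_lq(M)
print(np.linalg.norm(U @ np.diag(S) @ Vh - M))   # small if converged
print(np.sort(S)[::-1])                          # iterated singular values
print(np.linalg.svd(M, compute_uv=False))        # reference values
</syntaxhighlight>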