==Applications of the SVD==

===Pseudoinverse===
The singular value decomposition can be used for computing the [[Moore–Penrose pseudoinverse|pseudoinverse]] of a matrix. The pseudoinverse of the matrix {{tmath|\mathbf M}} with singular value decomposition {{tmath|\mathbf M {{=}} \mathbf U \mathbf \Sigma \mathbf V^*}} is
<math display=block>
\mathbf M^+ = \mathbf V \boldsymbol \Sigma^+ \mathbf U^\ast,
</math>
where <math>\boldsymbol \Sigma^+</math> is the pseudoinverse of <math>\boldsymbol \Sigma</math>, which is formed by replacing every non-zero diagonal entry by its [[Multiplicative inverse|reciprocal]] and transposing the resulting matrix. The pseudoinverse is one way to solve [[linear least squares (mathematics)|linear least squares]] problems.

===Solving homogeneous linear equations===
A set of [[homogeneous linear equation]]s can be written as {{tmath|\mathbf A \mathbf x {{=}} \mathbf 0}} for a matrix {{tmath|\mathbf A}} and vector {{tmath|\mathbf x.}} A typical situation is that {{tmath|\mathbf A}} is known and a non-zero {{tmath|\mathbf x}} is to be determined which satisfies the equation. Such an {{tmath|\mathbf x}} belongs to {{tmath|\mathbf A}}'s [[Kernel (matrix)|null space]] and is sometimes called a (right) null vector of {{tmath|\mathbf A.}} The vector {{tmath|\mathbf x}} can be characterized as a right-singular vector corresponding to a singular value of {{tmath|\mathbf A}} that is zero. This observation means that if {{tmath|\mathbf A}} is a [[square matrix]] and has no vanishing singular value, the equation has no non-zero {{tmath|\mathbf x}} as a solution. It also means that if there are several vanishing singular values, any linear combination of the corresponding right-singular vectors is a valid solution. Analogously to the definition of a (right) null vector, a non-zero {{tmath|\mathbf x}} satisfying {{tmath|\mathbf x^* \mathbf A {{=}} \mathbf 0}}, with {{tmath|\mathbf x^*}} denoting the conjugate transpose of {{tmath|\mathbf x,}} is called a left null vector of {{tmath|\mathbf A.}}

===Total least squares minimization===
A [[total least squares]] problem seeks the vector {{tmath|\mathbf x}} that minimizes the [[Vector norm#p-norm|2-norm]] of the vector {{tmath|\mathbf A \mathbf x}} under the constraint <math>\| \mathbf x \| = 1.</math> The solution turns out to be the right-singular vector of {{tmath|\mathbf A}} corresponding to the smallest singular value.

===Range, null space and rank===
Another application of the SVD is that it provides an explicit representation of the [[Column space|range]] and [[null space]] of a matrix {{tmath|\mathbf M.}} The right-singular vectors corresponding to vanishing singular values of {{tmath|\mathbf M}} span the null space of {{tmath|\mathbf M}} and the left-singular vectors corresponding to the non-zero singular values of {{tmath|\mathbf M}} span the range of {{tmath|\mathbf M.}} For example, in the above [[#Example|example]] the null space is spanned by the last row of {{tmath|\mathbf V^*}} and the range is spanned by the first three columns of {{tmath|\mathbf U.}}

As a consequence, the [[rank of a matrix|rank]] of {{tmath|\mathbf M}} equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in <math>\mathbf \Sigma</math>. In numerical linear algebra the singular values can be used to determine the ''effective rank'' of a matrix, as [[rounding error]] may lead to small but non-zero singular values in a rank-deficient matrix. Singular values beyond a significant gap are assumed to be numerically equivalent to zero.
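These quantities can be read off the SVD directly. The following minimal [[NumPy]] sketch illustrates the pseudoinverse, the effective rank, and a null-space basis; the example matrix and the rank tolerance are illustrative choices, not part of the theory.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative 4x5 matrix; any real or complex matrix works.
M = np.array([[1.0, 0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0, 0.0]])

U, s, Vh = np.linalg.svd(M)                          # M = U @ diag(s) @ Vh (full matrices)
tol = max(M.shape) * np.finfo(float).eps * s.max()   # a common effective-rank tolerance

rank = int(np.sum(s > tol))                          # effective rank = number of significant singular values

# Pseudoinverse M+ = V Sigma+ U*: invert only the significant singular values.
s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
M_pinv = Vh.conj().T[:, :len(s)] @ np.diag(s_inv) @ U.conj().T[:len(s), :]

# Null space: right-singular vectors (rows of Vh) beyond the effective rank.
null_basis = Vh[rank:, :].conj().T

print(rank)                                          # 3 for this matrix
print(np.allclose(M_pinv, np.linalg.pinv(M)))        # matches NumPy's built-in pseudoinverse
print(np.allclose(M @ null_basis, 0))                # columns of null_basis lie in the null space
</syntaxhighlight>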
===Low-rank matrix approximation===
Some practical applications need to solve the problem of [[Low-rank approximation|approximating]] a matrix {{tmath|\mathbf M}} with another matrix <math>\tilde{\mathbf{M}}</math>, said to be [[#Truncated SVD|truncated]], which has a specific rank {{tmath|r}}. In the case that the approximation is based on minimizing the [[Frobenius norm]] of the difference between {{tmath|\mathbf M}} and {{tmath|\tilde{\mathbf M} }} under the constraint that <math>\operatorname{rank}\bigl(\tilde{\mathbf{M}}\bigr) = r,</math> it turns out that the solution is given by the SVD of {{tmath|\mathbf M,}} namely
<math display=block>
\tilde{\mathbf{M}} = \mathbf{U} \tilde{\mathbf \Sigma} \mathbf{V}^*,
</math>
where <math>\tilde{\mathbf \Sigma}</math> is the same matrix as <math>\mathbf \Sigma</math> except that it contains only the {{tmath|r}} largest singular values (the other singular values are replaced by zero). This is known as the '''[[Low-rank approximation|Eckart–Young theorem]]''', as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors; see {{harvnb|Stewart|1993}}).

=== Image compression ===
[[File:Svd compression.jpg|thumb|Singular-value decomposition (SVD) image compression of a 1996 Chevrolet Corvette photograph. The original RGB image (upper-left) is compared with rank 1, 10, and 100 reconstructions.|292x292px]]
One practical consequence of the low-rank approximation given by the SVD is that a greyscale image represented as an <math>m \times n</math> matrix <math>A</math> can be efficiently represented by keeping the first <math>k</math> singular values and the corresponding singular vectors. The truncated decomposition <math>A_k = \mathbf{U}_k \mathbf{\Sigma}_k \mathbf{V}^T_k</math> gives the rank-<math>k</math> image that minimizes the [[Frobenius norm|Frobenius error]] with respect to the original image. The task thus becomes finding a close approximation <math>A_k</math> that balances perceptual fidelity against the number of vectors required to reconstruct the image. Storing <math>A_k</math> requires only <math>k(n + m + 1)</math> numbers compared to <math>nm</math>. The same idea extends to color images by applying this operation to each channel or by stacking the channels into one matrix. Since the singular values of most natural images decay quickly, most of their variance is often captured by a small <math>k</math>. For a 1528 × 1225 greyscale image, a relative error of 0.7% can be achieved with as few as <math>k = 100</math> singular values.<ref>{{Cite book |last=Holmes |first=Mark |title=Introduction to Scientific Computing and Data Analysis, 2nd Ed |publisher=Springer |year=2023 |isbn=978-3-031-22429-4}}</ref> In practice, however, computing the SVD can be too computationally expensive, and the resulting compression is typically less storage-efficient than a specialized algorithm such as [[JPEG]].
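As a rough illustration of the storage trade-off, a rank-{{tmath|k}} truncation can be computed in a few lines of [[NumPy]]. The random matrix below merely stands in for real pixel data, so it will not show the rapid error decay of natural images.

<syntaxhighlight lang="python">
import numpy as np

def truncated_svd(A, k):
    """Best rank-k approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((480, 640))        # placeholder for a greyscale image

k = 50
A_k = truncated_svd(A, k)

storage_full = A.size                      # n*m numbers for the full image
storage_trunc = k * (sum(A.shape) + 1)     # k(n + m + 1) numbers for the truncated form
rel_error = np.linalg.norm(A - A_k) / np.linalg.norm(A)   # relative Frobenius error
</syntaxhighlight>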
===Separable models===
The SVD can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices. By separable, we mean that a matrix {{tmath|\mathbf A}} can be written as an [[outer product]] of two vectors {{tmath|\mathbf A {{=}} \mathbf u \otimes \mathbf v,}} or, in coordinates, {{tmath|A_{ij} {{=}} u_i v_j.}} Specifically, the matrix {{tmath|\mathbf M}} can be decomposed as
<math display=block>
\mathbf{M} = \sum_i \mathbf{A}_i = \sum_i \sigma_i \mathbf U_i \otimes \mathbf V_i.
</math>
Here {{tmath|\mathbf U_i}} and {{tmath|\mathbf V_i}} are the {{tmath|i}}-th columns of the corresponding SVD matrices, {{tmath|\sigma_i}} are the ordered singular values, and each {{tmath|\mathbf A_i}} is separable. The SVD can be used to find the decomposition of an image processing filter into separable horizontal and vertical filters. Note that the number of non-zero {{tmath|\sigma_i}} is exactly the rank of the matrix.{{citation needed|date=November 2023}}

Separable models often arise in biological systems, and the SVD factorization is useful to analyze such systems. For example, some visual area V1 simple cells' receptive fields can be well described<ref>{{cite journal |doi=10.1016/0166-2236(95)94496-R |last1=DeAngelis |first1=G. C. |last2=Ohzawa |first2=I. |last3=Freeman |first3=R. D. |title=Receptive-field dynamics in the central visual pathways |journal=Trends Neurosci. |volume=18 |issue=10 |pages=451–8 |date=October 1995 |pmid=8545912 |s2cid=12827601 }}</ref> by a [[Gabor filter]] in the space domain multiplied by a modulation function in the time domain. Thus, given a linear filter evaluated through, for example, [[Spike-triggered average|reverse correlation]], one can rearrange the two spatial dimensions into one dimension, thus yielding a two-dimensional filter (space, time) which can be decomposed through SVD. The first column of {{tmath|\mathbf U}} in the SVD factorization is then a Gabor while the first column of {{tmath|\mathbf V}} represents the time modulation (or vice versa). One may then define an index of separability
<math display=block>
\alpha = \frac{\sigma_1^2}{\sum_i \sigma_i^2},
</math>
which is the fraction of the power in the matrix M which is accounted for by the first separable matrix in the decomposition.<ref>{{cite journal |last1=Depireux |first1=D. A. |last2=Simon |first2=J. Z. |last3=Klein |first3=D. J. |last4=Shamma |first4=S. A. |title=Spectro-temporal response field characterization with dynamic ripples in ferret primary auditory cortex |journal=J. Neurophysiol. |volume=85 |issue=3 |pages=1220–34 |date=March 2001 |pmid=11247991 |doi=10.1152/jn.2001.85.3.1220}}</ref>

===Nearest orthogonal matrix===
It is possible to use the SVD of a square matrix {{tmath|\mathbf A}} to determine the [[orthogonal matrix]] {{tmath|\mathbf O}} closest to {{tmath|\mathbf A.}} The closeness of fit is measured by the [[Frobenius norm]] of {{tmath|\mathbf O - \mathbf A.}} The solution is the product {{tmath|\mathbf U \mathbf V^*.}}<ref>[http://www.wou.edu/~beavers/Talks/Willamette1106.pdf The Singular Value Decomposition in Symmetric (Lowdin) Orthogonalization and Data Compression]</ref> This intuitively makes sense because an orthogonal matrix would have the decomposition {{tmath|\mathbf U \mathbf I \mathbf V^*}} where {{tmath|\mathbf I}} is the identity matrix, so that if {{tmath|\mathbf A {{=}} \mathbf U \mathbf \Sigma \mathbf V^*}} then the product {{tmath|\mathbf O {{=}} \mathbf U \mathbf V^*}} amounts to replacing the singular values with ones. Equivalently, the solution is the unitary matrix {{tmath|\mathbf R {{=}} \mathbf U \mathbf V^*}} of the [[polar decomposition]] <math>\mathbf M = \mathbf R \mathbf P = \mathbf P' \mathbf R</math> in either order of stretch and rotation, as described above.
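A minimal [[NumPy]] sketch of this construction follows; the input matrix is an arbitrary illustrative choice.

<syntaxhighlight lang="python">
import numpy as np

def nearest_orthogonal(A):
    """Orthogonal (unitary) matrix closest to A in the Frobenius norm:
    keep U and V* from the SVD and replace the singular values by ones."""
    U, _, Vh = np.linalg.svd(A)
    return U @ Vh

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])               # arbitrary square matrix
O = nearest_orthogonal(A)
print(np.allclose(O.T @ O, np.eye(2)))   # True: O is orthogonal
</syntaxhighlight>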
A similar problem, with interesting applications in [[shape analysis (digital geometry)|shape analysis]], is the [[orthogonal Procrustes problem]], which consists of finding an orthogonal matrix {{tmath|\mathbf O}} which most closely maps {{tmath|\mathbf A}} to {{tmath|\mathbf B.}} Specifically,
<math display=block>
\mathbf{O} = \underset\Omega\operatorname{argmin} \|\mathbf{A}\boldsymbol{\Omega} - \mathbf{B}\|_F \quad\text{subject to}\quad \boldsymbol{\Omega}^\operatorname{T}\boldsymbol{\Omega} = \mathbf{I},
</math>
where <math>\| \cdot \|_F</math> denotes the Frobenius norm. This problem is equivalent to finding the nearest orthogonal matrix to a given matrix <math>\mathbf M = \mathbf A^\operatorname{T} \mathbf B</math>.

===The Kabsch algorithm===
The [[Kabsch algorithm]] (called [[Wahba's problem]] in other fields) uses SVD to compute the optimal rotation (with respect to least-squares minimization) that will align a set of points with a corresponding set of points. It is used, among other applications, to compare the structures of molecules.

===Principal component analysis===
The SVD can be used to construct the principal components<ref name="Hastie2009">{{cite book |last=Hastie |first=Trevor |author2=Robert Tibshirani |author3=Jerome Friedman |title=The Elements of Statistical Learning |edition=2nd |year=2009 |publisher=Springer |location=New York |pages=535–536 |isbn=978-0-387-84857-0}}</ref> in [[principal component analysis]] as follows:

Let <math>\mathbf{X} \in \mathbb{R}^{N \times p}</math> be a data matrix where each of the <math>N</math> rows is a (feature-wise) mean-centered observation, each of dimension <math>p</math>. The SVD of <math>\mathbf{X}</math> is
<math display="block">
\mathbf{X} = \mathbf{V} \boldsymbol{\Sigma} \mathbf{U}^\ast.
</math>
From the same reference,<ref name="Hastie2009" /> we see that <math>\mathbf{V} \boldsymbol{\Sigma}</math> contains the scores of the rows of <math>\mathbf{X}</math> (i.e., each observation), and <math>\mathbf{U}</math> is the matrix whose columns are the principal component loading vectors.
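In the notation above, a brief [[NumPy]] sketch reads as follows; the randomly generated data stands in for a real data matrix.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))        # N = 200 observations, p = 5 features (illustrative data)
X = X - X.mean(axis=0)                   # feature-wise mean-centering, as assumed above

# NumPy returns X = A @ diag(s) @ Bh; in the notation above, A plays the role of V and Bh the role of U*.
V, s, Uh = np.linalg.svd(X, full_matrices=False)

scores = V @ np.diag(s)                  # principal component scores of each observation
loadings = Uh.conj().T                   # columns are the principal component loading vectors
explained_variance = s**2 / (X.shape[0] - 1)
</syntaxhighlight>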
===Signal processing===
The SVD and pseudoinverse have been successfully applied to [[signal processing]],<ref>{{cite journal |last=Sahidullah |first=Md. |author2=Kinnunen, Tomi |title=Local spectral variability features for speaker verification |journal=Digital Signal Processing |date=March 2016 |volume=50 |pages=1–11 |doi=10.1016/j.dsp.2015.10.011 |bibcode=2016DSP....50....1S |url=https://erepo.uef.fi/handle/123456789/4375}}</ref> [[image processing]]<ref name="Mademlis2018">{{cite book |last1=Mademlis |first1=Ioannis |last2=Tefas |first2=Anastasios |last3=Pitas |first3=Ioannis |title=2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |chapter=Regularized SVD-Based Video Frame Saliency for Unsupervised Activity Video Summarization |url=https://ieeexplore.ieee.org/document/8462274 |year=2018 |pages=2691–2695 |publisher=IEEE |doi=10.1109/ICASSP.2018.8462274 |isbn=978-1-5386-4658-8 |s2cid=52286352 |access-date=19 January 2023}}</ref> and [[big data]] (e.g., in genomic signal processing).<ref>{{Cite journal | author = O. Alter, P. O. Brown and D. Botstein | title = Singular Value Decomposition for Genome-Wide Expression Data Processing and Modeling | journal = PNAS | volume = 97 | issue = 18 | pages = 10101–10106 | date = September 2000 | doi = 10.1073/pnas.97.18.10101 | pmid = 10963673 | pmc = 27718 | bibcode = 2000PNAS...9710101A | doi-access = free }}</ref><ref>{{Cite journal | author1 = O. Alter | author2 = G. H. Golub | title = Integrative Analysis of Genome-Scale Data by Using Pseudoinverse Projection Predicts Novel Correlation Between DNA Replication and RNA Transcription | journal = PNAS | volume = 101 | issue = 47 | pages = 16577–16582 | date = November 2004 | doi = 10.1073/pnas.0406767101 | pmid = 15545604 | pmc = 534520 | bibcode = 2004PNAS..10116577A | doi-access = free }}</ref><ref>{{Cite journal | author1 = O. Alter | author2 = G. H. Golub | title = Singular Value Decomposition of Genome-Scale mRNA Lengths Distribution Reveals Asymmetry in RNA Gel Electrophoresis Band Broadening | journal = PNAS | volume = 103 | issue = 32 | pages = 11828–11833 | date = August 2006 | doi = 10.1073/pnas.0604756103 | pmid = 16877539 | pmc = 1524674 | bibcode = 2006PNAS..10311828A | doi-access = free }}</ref><ref>{{Cite journal | first1 = N. M. | last1 = Bertagnolli | first2 = J. A. | last2 = Drake | first3 = J. M. | last3 = Tennessen | first4 = O. | last4 = Alter | title = SVD Identifies Transcript Length Distribution Functions from DNA Microarray Data and Reveals Evolutionary Forces Globally Affecting GBM Metabolism | journal = PLOS ONE | volume = 8 | issue = 11 | pages = e78913 | date = November 2013 | doi = 10.1371/journal.pone.0078913 | id = [http://www.alterlab.org/research/highlights/pone.0078913_Highlight.pdf Highlight] | pmid = 24282503 | pmc = 3839928 | bibcode = 2013PLoSO...878913B | doi-access = free }}</ref>

===Other examples===
The SVD is also applied extensively to the study of linear [[inverse problem]]s and is useful in the analysis of regularization methods such as that of [[Tikhonov regularization|Tikhonov]]. It is widely used in statistics, where it is related to [[principal component analysis]] and to [[correspondence analysis]], and in [[signal processing]] and [[pattern recognition]]. It is also used in output-only [[modal analysis]], where the non-scaled [[mode shape]]s can be determined from the singular vectors. Yet another usage is [[latent semantic indexing]] in natural-language text processing.

In general numerical computation involving linear or linearized systems, there is a universal constant that characterizes the regularity or singularity of a problem, which is the system's "condition number" <math>\kappa := \sigma_\text{max} / \sigma_\text{min}</math>. It often controls the error rate or convergence rate of a given computational scheme on such systems.<ref>{{cite journal |last1=Edelman |first1=Alan |title=On the distribution of a scaled condition number |url=http://math.mit.edu/~edelman/publications/distribution_of_a_scaled.pdf |journal=Math. Comp. |volume=58 |pages=185–190 |year=1992 |issue=197 |doi=10.1090/S0025-5718-1992-1106966-2 |bibcode=1992MaCom..58..185E |doi-access=free }}</ref><ref>{{cite journal |last1=Shen |first1=Jianhong (Jackie) |title=On the singular values of Gaussian random matrices |journal=Linear Alg. Appl. |volume=326 |pages=1–14 |year=2001 |issue=1–3 |doi=10.1016/S0024-3795(00)00322-0 |doi-access=free }}</ref>

The SVD also plays a crucial role in the field of [[quantum information]], in a form often referred to as the [[Schmidt decomposition]]. Through it, states of two quantum systems are naturally decomposed, providing a necessary and sufficient condition for them to be [[Quantum entanglement|entangled]]: the state is entangled if and only if the rank of the <math>\mathbf \Sigma</math> matrix is larger than one.
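As an illustration of the Schmidt decomposition, the following [[NumPy]] sketch tests two-qubit pure states for entanglement by reshaping the amplitude vector into a 2 × 2 coefficient matrix and counting its non-negligible singular values; the example states and the tolerance are illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

def schmidt_rank(psi, dims, tol=1e-12):
    """Schmidt rank of a bipartite pure state: number of significant singular values
    of the amplitude vector reshaped into a dims[0] x dims[1] coefficient matrix."""
    C = np.asarray(psi).reshape(dims)
    s = np.linalg.svd(C, compute_uv=False)   # Schmidt coefficients
    return int(np.sum(s > tol))

# Product state |0>|0>: Schmidt rank 1, hence not entangled.
product = np.array([1.0, 0.0, 0.0, 0.0])
# Bell state (|00> + |11>)/sqrt(2): Schmidt rank 2, hence entangled.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(schmidt_rank(product, (2, 2)))  # 1
print(schmidt_rank(bell, (2, 2)))     # 2
</syntaxhighlight>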
One application of SVD to rather large matrices is in [[numerical weather prediction]], where [[Lanczos algorithm|Lanczos methods]] are used to estimate the few perturbations to the central numerical weather prediction that grow most rapidly linearly over a given initial forward time period; i.e., the singular vectors corresponding to the largest singular values of the linearized propagator for the global weather over that time interval. The output singular vectors in this case are entire weather systems. These perturbations are then run through the full nonlinear model to generate an [[ensemble forecasting|ensemble forecast]], giving a handle on some of the uncertainty that should be allowed for around the current central prediction.

SVD has also been applied to reduced order modelling. The aim of reduced order modelling is to reduce the number of degrees of freedom in a complex system which is to be modeled. SVD was coupled with [[radial basis functions]] to interpolate solutions to three-dimensional unsteady flow problems.<ref>{{cite journal | last1 = Walton | first1 = S. | last2 = Hassan | first2 = O. | last3 = Morgan | first3 = K. | year = 2013 | title = Reduced order modelling for unsteady fluid flow using proper orthogonal decomposition and radial basis functions | journal = Applied Mathematical Modelling | volume = 37 | issue = 20–21 | pages = 8930–8945 | doi = 10.1016/j.apm.2013.04.025 | doi-access = free }}</ref>

SVD has also been used to improve gravitational waveform modeling by the ground-based gravitational-wave interferometer aLIGO.<ref>{{cite journal | last1 = Setyawati | first1 = Y. | last2 = Ohme | first2 = F. | last3 = Khan | first3 = S. | year = 2019 | title = Enhancing gravitational waveform model through dynamic calibration | journal = Physical Review D | volume = 99 | issue = 2 | pages = 024010 | doi = 10.1103/PhysRevD.99.024010 | bibcode = 2019PhRvD..99b4010S | arxiv = 1810.07060 | s2cid = 118935941 }}</ref> SVD can help to increase the accuracy and speed of waveform generation to support gravitational-wave searches and update two different waveform models.

Singular value decomposition is used in [[recommender systems]] to predict people's item ratings.<ref>{{cite report |last1=Sarwar |first1=Badrul |last2=Karypis |first2=George |last3=Konstan |first3=Joseph A. |author3-link=Joseph A. Konstan |last4=Riedl |first4=John T. |author4-link=John T. Riedl |name-list-style=amp |year=2000 |title=Application of Dimensionality Reduction in Recommender System – A Case Study |url=https://apps.dtic.mil/sti/citations/tr/ADA439541 |publisher=[[University of Minnesota]] |hdl=11299/215429 |type=Technical report 00-043}}</ref>
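A toy [[NumPy]] sketch of this idea, simplified from (and not identical to) the approach in the report cited above: missing ratings are imputed with item means, and a rank-{{tmath|k}} truncated SVD of the filled matrix supplies the predictions. The rating matrix and the rank are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

# Toy user-item rating matrix; 0 marks a missing rating (illustrative data).
R = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

mask = R > 0
counts = mask.sum(axis=0)
item_means = R.sum(axis=0) / np.maximum(counts, 1)   # mean observed rating per item
R_filled = np.where(mask, R, item_means)             # impute missing ratings with item means

# Rank-k truncated SVD of the filled matrix gives the predicted ratings.
k = 2
U, s, Vh = np.linalg.svd(R_filled, full_matrices=False)
R_pred = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]

print(np.round(R_pred, 2))   # predictions for the missing entries are read off R_pred
</syntaxhighlight>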
Distributed algorithms have been developed for the purpose of calculating the SVD on clusters of commodity machines.<ref>{{cite arXiv |last1=Bosagh Zadeh |first1=Reza |last2=Carlsson |first2=Gunnar |title=Dimension Independent Matrix Square Using MapReduce |year=2013 |class=cs.DS |eprint=1304.1467}}</ref>

Low-rank SVD has been applied for hotspot detection from spatiotemporal data with application to disease [[outbreak]] detection.<ref>{{Cite journal |author1=Hadi Fanaee Tork |author2=João Gama |title=Eigenspace method for spatiotemporal hotspot detection |journal=Expert Systems |volume=32 |issue=3 |pages=454–464 |date=September 2014 |doi=10.1111/exsy.12088 |arxiv=1406.3506 |bibcode=2014arXiv1406.3506F |s2cid=15476557 }}</ref> A combination of SVD and [[Higher-order singular value decomposition|higher-order SVD]] has also been applied for real-time event detection from complex data streams (multivariate data with space and time dimensions) in [[disease surveillance]].<ref>{{Cite journal |author1=Hadi Fanaee Tork |author2=João Gama |title=EigenEvent: An Algorithm for Event Detection from Complex Data Streams in Syndromic Surveillance |journal=Intelligent Data Analysis |volume=19 |issue=3 |pages=597–616 |date=May 2015 |arxiv=1406.3496 |doi=10.3233/IDA-150734 |s2cid=17966555 }}</ref>

In [[astrodynamics]], the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design<ref name=muralidharan2023stretching>{{Cite journal |title=Stretching directions in cislunar space: Applications for departures and transfer design |first1=Vivek |last1=Muralidharan |first2=Kathleen |last2=Howell |journal=Astrodynamics |volume=7 |issue=2 |pages=153–178 |date=2023 |doi=10.1007/s42064-022-0147-z |bibcode=2023AsDyn...7..153M |s2cid=252637213 }}</ref> and [[orbital station-keeping]].<ref name=Muralidharan2021>{{Cite journal |title=Leveraging stretching directions for stationkeeping in Earth-Moon halo orbits |first1=Vivek |last1=Muralidharan |first2=Kathleen |last2=Howell |journal=[[Advances in Space Research]] |volume=69 |issue=1 |pages=620–646 |date=2022 |doi=10.1016/j.asr.2021.10.028 |bibcode=2022AdSpR..69..620M |s2cid=239490016 }}</ref>

The SVD can be used to measure the similarity between real-valued matrices.<ref name=albers2025>{{Cite journal |title=Assessing the Similarity of Real Matrices with Arbitrary Shape |first1=Jasper |last1=Albers |first2=Anno |last2=Kurth |first3=Robin |last3=Gutzen |first4=Aitor |last4=Morales-Gregorio |first5=Michael |last5=Denker |first6=Sonja |last6=Gruen |first7=Sacha |last7=van Albada |first8=Markus |last8=Diesmann |journal=PRX Life |issue=3 |pages=023005 |date=2025 |doi=10.1103/PRXLife.3.023005 |arxiv=2403.17687}}</ref> Measuring the angles between the singular vectors accounts for the inherent two-dimensional structure of a matrix. This method was shown to outperform [[cosine similarity]] and the [[Frobenius norm]] in most cases, including brain activity measurements from [[neuroscience]] experiments.