{{Short description|Algorithm to calculate eigenvalues}}
In [[numerical linear algebra]], the '''QR algorithm''' or '''QR iteration''' is an [[eigenvalue algorithm]]: that is, a procedure to calculate the [[eigenvalues and eigenvectors]] of a [[Matrix (mathematics)|matrix]]. The QR algorithm was developed in the late 1950s by [[John G. F. Francis]] and by [[Vera N. Kublanovskaya]], working independently.<ref>J.G.F. Francis, "The QR Transformation, I", ''[[The Computer Journal]]'', '''4'''(3), pages 265–271 (1961, received October 1959). [[doi:10.1093/comjnl/4.3.265]]</ref><ref>{{cite journal |first=J. G. F. |last=Francis |title=The QR Transformation, II |journal=The Computer Journal |volume=4 |issue=4 |pages=332–345 |year=1962 |doi=10.1093/comjnl/4.4.332 |doi-access=free }}</ref><ref>Vera N. Kublanovskaya, "On some algorithms for the solution of the complete eigenvalue problem," ''USSR Computational Mathematics and Mathematical Physics'', vol. 1, no. 3, pages 637–657 (1963, received Feb 1961). Also published in: ''Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki'', vol. 1, no. 4, pages 555–570 (1961). [[doi:10.1016/0041-5553(63)90168-X]]</ref> The basic idea is to perform a [[QR decomposition]], writing the matrix as a product of an [[orthogonal matrix]] and an upper [[triangular matrix]], multiply the factors in the reverse order, and iterate.

==The practical QR algorithm==
Formally, let {{math|''A''}} be a real matrix of which we want to compute the eigenvalues, and let {{math|1=''A''<sub>0</sub> := ''A''}}. At the {{mvar|k}}-th step (starting with {{math|1=''k'' = 0}}), we compute the [[QR decomposition]] {{math|1=''A''<sub>''k''</sub> = ''Q''<sub>''k''</sub> ''R''<sub>''k''</sub>}}, where {{math|''Q''<sub>''k''</sub>}} is an [[orthogonal matrix]] (i.e., {{math|1=''Q''<sup>T</sup> = ''Q''<sup>−1</sup>}}) and {{math|''R''<sub>''k''</sub>}} is an upper triangular matrix. We then form {{math|1=''A''<sub>''k''+1</sub> = ''R''<sub>''k''</sub> ''Q''<sub>''k''</sub>}}. Note that
<math display="block"> A_{k+1} = R_k Q_k = Q_k^{-1} Q_k R_k Q_k = Q_k^{-1} A_k Q_k = Q_k^{\mathsf{T}} A_k Q_k, </math>
so all the {{math|''A''<sub>''k''</sub>}} are [[Similar matrix|similar]] and hence have the same eigenvalues. The algorithm is [[numerical stability|numerically stable]] because it proceeds by ''orthogonal'' similarity transforms.

Under certain conditions,<ref name="golubvanloan">{{cite book |last1=Golub |first1=G. H. |last2=Van Loan |first2=C. F. |title=Matrix Computations |edition=3rd |publisher=Johns Hopkins University Press |location=Baltimore |year=1996 |isbn=0-8018-5414-8 }}</ref> the matrices ''A''<sub>''k''</sub> converge to a triangular matrix, the [[Schur form]] of ''A''. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros,{{citation needed|date=July 2020}} but the [[Gershgorin circle theorem]] provides a bound on the error.

If the matrices converge, then the eigenvalues along the diagonal will appear according to their geometric multiplicity. To guarantee convergence, {{mvar|A}} must be a symmetric matrix, and no nonzero eigenvalue <math>\lambda</math> may be paired with a corresponding eigenvalue <math>-\lambda</math>.<ref>{{Cite book |last=Holmes |first=Mark H. |title=Introduction to scientific computing and data analysis |date=2023 |publisher=Springer |isbn=978-3-031-22429-4 |edition=Second |series=Texts in computational science and engineering |location=Cham}}</ref> Because a single QR iteration costs <math>\mathcal{O}(n^3)</math> arithmetic operations and convergence is only linear, the standard QR algorithm is extremely expensive, especially considering that it is not even guaranteed to converge.<ref>{{Cite book |last=Golub |first=Gene H. |title=Matrix computations |last2=Van Loan |first2=Charles F. |date=2013 |publisher=The Johns Hopkins University Press |isbn=978-1-4214-0794-4 |edition=Fourth |series=Johns Hopkins studies in the mathematical sciences |location=Baltimore}}</ref>
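The basic iteration is compact enough to state directly in code. The following is a minimal illustrative sketch in Python with NumPy (the function name, tolerance, and iteration cap are arbitrary choices for this example, not part of any standard library):

<syntaxhighlight lang="python">
import numpy as np

def qr_algorithm_basic(A, tol=1e-10, max_iter=1000):
    """Crude QR iteration: A_{k+1} = R_k Q_k.

    Returns a matrix orthogonally similar to A that, when the
    iteration converges, is nearly upper triangular, so that its
    diagonal approximates the eigenvalues of A.
    """
    A_k = np.array(A, dtype=float)
    for _ in range(max_iter):
        Q, R = np.linalg.qr(A_k)  # A_k = Q R, Q orthogonal, R upper triangular
        A_k = R @ Q               # equals Q^T A_k Q, an orthogonal similarity
        if np.abs(np.tril(A_k, k=-1)).max() < tol:
            break                 # part below the diagonal is negligible
    return A_k
</syntaxhighlight>

Applied to a symmetric matrix with well-separated eigenvalues, for instance, the diagonal of the returned matrix approximates the spectrum in decreasing order of magnitude.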
===Using Hessenberg form===
In the above crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix {{mvar|A}} to upper [[Hessenberg form]] (which costs <math display="inline">\tfrac{10}{3} n^3 + \mathcal{O}(n^2)</math> arithmetic operations using a technique based on [[Householder transformation|Householder reduction]]), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition.<ref name=Demmel>{{cite book |first=James W. |last=Demmel |author-link=James W. Demmel |title=Applied Numerical Linear Algebra |publisher=SIAM |year=1997 }}</ref><ref name=Trefethen>{{cite book |first1=Lloyd N. |last1=Trefethen |author-link=Lloyd N. Trefethen |first2=David |last2=Bau |title=Numerical Linear Algebra |publisher=SIAM |year=1997 }}</ref> (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) Determining the QR decomposition of an upper Hessenberg matrix costs <math display="inline">6 n^2 + \mathcal{O}(n)</math> arithmetic operations. Moreover, because the Hessenberg form is already nearly upper-triangular (it has just one nonzero subdiagonal), using it as a starting point reduces the number of steps required for convergence of the QR algorithm.

If the original matrix is [[symmetric matrix|symmetric]], then the upper Hessenberg matrix is also symmetric and thus [[tridiagonal matrix|tridiagonal]], and so are all the {{math|''A''<sub>''k''</sub>}}. In this case reaching Hessenberg form costs <math display="inline">\tfrac{4}{3} n^3 + \mathcal{O}(n^2)</math> arithmetic operations using a technique based on Householder reduction.<ref name=Demmel/><ref name=Trefethen/> Determining the QR decomposition of a symmetric tridiagonal matrix costs <math>\mathcal{O}(n)</math> operations.<ref>{{cite journal |first1=James M. |last1=Ortega |first2=Henry F. |last2=Kaiser |title=The ''LL<sup>T</sup>'' and ''QR'' methods for symmetric tridiagonal matrices |journal=The Computer Journal |volume=6 |issue=1 |pages=99–101 |year=1963 |doi=10.1093/comjnl/6.1.99 |doi-access=free }}</ref>
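In practice, the Hessenberg reduction is available as a library routine. As a brief illustration (assuming SciPy is available; the example matrix is arbitrary), <code>scipy.linalg.hessenberg</code> computes the Householder-based reduction together with the accumulated orthogonal factor:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(seed=1)
A = rng.standard_normal((6, 6))

# Householder-based reduction: A = Q H Q^T with H upper Hessenberg.
H, Q = hessenberg(A, calc_q=True)

assert np.allclose(Q @ H @ Q.T, A)         # orthogonal similarity transform
assert np.allclose(np.tril(H, k=-2), 0.0)  # zero below the first subdiagonal
</syntaxhighlight>

If <code>A</code> is symmetric, the computed <code>H</code> comes out tridiagonal (up to rounding), as described above.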
===Iteration phase===
If a Hessenberg matrix <math>A</math> has element <math> a_{k,k-1} = 0 </math> for some <math>k</math>, i.e., if one of the elements just below the diagonal is in fact zero, then it decomposes into blocks whose eigenproblems may be solved separately; an eigenvalue is either an eigenvalue of the submatrix of the first <math>k-1</math> rows and columns, or an eigenvalue of the submatrix of the remaining rows and columns. The purpose of the QR iteration step is to shrink one of these <math>a_{k,k-1}</math> elements, so that effectively a small block along the diagonal is split off from the bulk of the matrix. In the case of a real eigenvalue that is usually the <math> 1 \times 1 </math> block in the lower right corner (in which case the element <math> a_{nn} </math> holds that eigenvalue), whereas in the case of a pair of complex conjugate eigenvalues it is the <math> 2 \times 2 </math> block in the lower right corner. The [[rate of convergence]] depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.{{clarify|date=June 2012}}

====A single iteration with explicit shift====
The steps of a QR iteration with explicit shift on a real Hessenberg matrix <math>A</math> are:
# Pick a shift <math>\mu</math> and subtract it from all diagonal elements, producing the matrix <math> A - \mu I </math>. A basic strategy is to use <math> \mu = a_{n,n} </math>, but there are more refined strategies that further accelerate convergence. The idea is that <math> \mu </math> should be close to an eigenvalue, since the shift accelerates convergence to that eigenvalue.
# Perform a sequence of [[Givens rotation]]s <math> G_1, G_2, \dots, G_{n-1} </math> on <math> A - \mu I </math>, where <math> G_i </math> acts on rows <math>i</math> and <math>i+1</math>, and <math> G_i </math> is chosen to zero out position <math>(i+1,i)</math> of <math> G_{i-1} \dotsb G_1 (A - \mu I) </math>. This produces the upper triangular matrix <math> R = G_{n-1} \dotsb G_1 (A - \mu I) </math>. The orthogonal factor <math> Q </math> would be <math> G_1^\mathrm{T} G_2^\mathrm{T} \dotsb G_{n-1}^\mathrm{T} </math>, but it is neither necessary nor efficient to produce it explicitly.
# Now multiply <math> R </math> by the Givens matrices <math> G_1^\mathrm{T} </math>, <math> G_2^\mathrm{T} </math>, ..., <math> G_{n-1}^\mathrm{T} </math> on the right, where <math> G_i^\mathrm{T} </math> instead acts on columns <math>i</math> and <math> i+1 </math>. This produces the matrix <math> RQ = R G_1^\mathrm{T} G_2^\mathrm{T} \dotsb G_{n-1}^\mathrm{T} </math>, which is again in Hessenberg form.
# Finally undo the shift by adding <math> \mu </math> to all diagonal entries. The result is <math> A' = RQ + \mu I </math>. Since <math>Q</math> commutes with <math>I</math>, we have that <math> A' = Q^\mathrm{T} (A-\mu I) Q + \mu I = Q^\mathrm{T} A Q </math>.

The purpose of the shift is to change which Givens rotations are chosen. In more detail, the structure of one of these <math> G_i </math> matrices is
<math display="block"> G_i = \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & c & -s & 0 \\ 0 & s & c & 0 \\ 0 & 0 & 0 & I \end{bmatrix} </math>
where the <math>I</math> in the upper left corner is the <math> (i-1) \times (i-1) </math> identity matrix (and the one in the lower right corner is <math> (n-i-1) \times (n-i-1) </math>), and the two scalars <math> c = \cos\theta </math> and <math> s = \sin\theta </math> are determined by what rotation angle <math> \theta </math> is appropriate for zeroing out position <math>(i+1,i)</math>. It is not necessary to exhibit <math> \theta </math>; the factors <math> c </math> and <math> s </math> can be determined directly from the elements of the matrix that <math> G_i </math> should act on. Nor is it necessary to produce the whole matrix; multiplication (from the left) by <math> G_i </math> only affects rows <math> i </math> and <math> i+1 </math>, so it is easier to just update those two rows in place. Likewise, for the Step 3 multiplication by <math> G_i^\mathrm{T} </math> from the right, it is sufficient to remember <math>i</math>, <math>c</math>, and <math>s</math>.
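The four steps above translate almost directly into code. Below is a minimal illustrative sketch in NumPy (the function name is made up, and for brevity each rotation updates whole rows and columns rather than only the entries a tuned implementation would touch):

<syntaxhighlight lang="python">
import numpy as np

def explicit_shift_qr_step(A):
    """One explicitly shifted QR step on an upper Hessenberg matrix A,
    using the simple shift mu = a_{n,n}. Returns R Q + mu I = Q^T A Q."""
    n = A.shape[0]
    mu = A[-1, -1]
    H = A - mu * np.eye(n)                  # Step 1: shift the diagonal
    rotations = []
    for i in range(n - 1):                  # Step 2: rotate to upper triangular
        a, b = H[i, i], H[i + 1, i]
        r = np.hypot(a, b)
        c, s = (1.0, 0.0) if r == 0.0 else (a / r, -b / r)
        rotations.append((c, s))
        top, bottom = H[i, i:].copy(), H[i + 1, i:].copy()
        H[i, i:] = c * top - s * bottom     # G_i only touches rows i, i+1
        H[i + 1, i:] = s * top + c * bottom # zeroes H[i+1, i]
    for i, (c, s) in enumerate(rotations):  # Step 3: G_i^T from the right
        left, right = H[:, i].copy(), H[:, i + 1].copy()
        H[:, i] = c * left - s * right      # G_i^T only touches columns i, i+1
        H[:, i + 1] = s * left + c * right
    return H + mu * np.eye(n)               # Step 4: undo the shift
</syntaxhighlight>

Repeated calls drive the <math>(n,n-1)</math> entry of the iterate towards zero, at which point <math>a_{nn}</math> can be accepted as an eigenvalue and the matrix deflated.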
If using the simple <math> \mu = a_{n,n} </math> strategy, then at the beginning of Step 2 we have a matrix
<math display="block"> A - a_{n,n} I = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & 0 \end{pmatrix} </math>
where <math>\times</math> denotes an arbitrary entry. The first Givens rotation <math> G_1 </math> zeroes out the <math> (2,1) </math> position of this, producing
<math display="block"> G_1 (A - a_{n,n} I) = \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & 0 \end{pmatrix} \text{.} </math>
Each new rotation zeroes out another subdiagonal element, thus increasing the number of known zeroes until we are at
<math display="block"> H = G_{n-2} \dotsb G_1 (A - a_{n,n} I) = \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & h_{n-1,n-1} & h_{n-1,n} \\ 0 & 0 & 0 & h_{n,n-1} & 0 \end{pmatrix} \text{.} </math>
The final rotation <math>G_{n-1}</math> has <math>(c,s)</math> chosen so that <math> s h_{n-1,n-1} + c h_{n,n-1} = 0 </math>. If <math> |h_{n-1,n-1}| \gg |h_{n,n-1}| </math>, as is typically the case when we approach convergence, then <math> c \approx 1 </math> and <math> |s| \ll 1 </math>. Making this rotation produces
<math display="block"> R = G_{n-1} G_{n-2} \dotsb G_1 (A - a_{n,n} I) = \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & c h_{n-1,n} \\ 0 & 0 & 0 & 0 & s h_{n-1,n} \end{pmatrix} \text{,} </math>
which is our upper triangular matrix. But now we reach Step 3 and need to start rotating data between columns. The first rotation acts on columns <math>1</math> and <math>2</math>, producing
<math display="block"> R G_1^\mathrm{T} = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & c h_{n-1,n} \\ 0 & 0 & 0 & 0 & s h_{n-1,n} \end{pmatrix} \text{.} </math>
The expected pattern is that each rotation moves some nonzero value from the diagonal out to the subdiagonal, returning the matrix to Hessenberg form. This ends at
<math display="block"> R G_1^\mathrm{T} \dotsb G_{n-1}^\mathrm{T} = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & -s^2 h_{n-1,n} & cs h_{n-1,n} \end{pmatrix} \text{.} </math>
Algebraically the form is unchanged, but numerically the element in position <math> (n,n-1) </math> is now much closer to zero: there used to be a factor <math> s </math> gap between it and the diagonal element above it, but now the gap is more like a factor <math> s^2 </math>, and another iteration would make it a factor <math> s^4 </math>; we have quadratic convergence. Practically, that means <math>O(1)</math> iterations per eigenvalue suffice for convergence, and thus overall we can complete the computation in <math> O(n) </math> QR steps, each of which requires a mere <math> O(n^2) </math> arithmetic operations (or as little as <math> O(n) </math> operations, in the case that <math> A </math> is symmetric).
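The quadratic shrinking of the subdiagonal element is easy to observe numerically. The following sketch (the test matrix and iteration count are arbitrary illustrative choices) applies the shifted step <math>A' = RQ + \mu I</math> with <math>\mu = a_{n,n}</math> to a random symmetric matrix brought to tridiagonal form:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(seed=2)
S = rng.standard_normal((5, 5))
A = hessenberg(S + S.T)  # symmetric input, so the Hessenberg form is tridiagonal

n = A.shape[0]
for k in range(6):
    mu = A[-1, -1]                    # simple shift from the corner entry
    Q, R = np.linalg.qr(A - mu * np.eye(n))
    A = R @ Q + mu * np.eye(n)        # one shifted QR step: Q^T A Q
    print(k, abs(A[-1, -2]))          # the (n, n-1) entry collapses rapidly
</syntaxhighlight>

Typically the printed entry loses roughly twice as many significant digits with each step (for symmetric matrices the local convergence with this shift is often even faster), after which the last row and column can be deflated.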
== Visualization ==
[[File:QR and LR visualization illustrating fixed points (corrected).gif|thumb|Figure 1: How the output of a single iteration of the QR or LR algorithm varies alongside its input]]
The basic QR algorithm can be visualized in the case where ''A'' is a positive-definite symmetric matrix. In that case, ''A'' can be depicted as an [[ellipse]] in 2 dimensions or an [[ellipsoid]] in higher dimensions. The relationship between the input to the algorithm and a single iteration can then be depicted as in Figure 1 (click to see an animation). Note that the LR algorithm is depicted alongside the QR algorithm.

A single iteration causes the ellipse to tilt or "fall" towards the x-axis. In the event where the large [[Semi-major and semi-minor axes|semi-axis]] of the ellipse is parallel to the x-axis, one iteration of QR does nothing. Another situation where the algorithm "does nothing" is when the large semi-axis is parallel to the y-axis instead of the x-axis. In that event, the ellipse can be thought of as balancing precariously without being able to fall in either direction. In both situations, the matrix is diagonal. A situation where an iteration of the algorithm "does nothing" is called a [[Fixed point (mathematics)|fixed point]]. The strategy employed by the algorithm is [[Fixed-point iteration|iteration towards a fixed point]]. Observe that one fixed point is stable while the other is unstable. If the ellipse were tilted away from the unstable fixed point by a very small amount, one iteration of QR would cause the ellipse to tilt away from the fixed point instead of towards it. Eventually, though, the algorithm would converge to a different fixed point, although it would take a long time.

=== Finding eigenvalues versus finding eigenvectors ===
[[File:Qr lr eigenvalue clash.gif|thumb|Figure 2: How the output of a single iteration of QR or LR is affected when two eigenvalues approach each other]]
It is worth pointing out that finding even a single eigenvector of a symmetric matrix is not computable (in exact real arithmetic, according to the definitions in [[computable analysis]]).<ref>{{Cite web|title=linear algebra - Why is uncomputability of the spectral decomposition not a problem?|url=https://mathoverflow.net/questions/369930/why-is-uncomputability-of-the-spectral-decomposition-not-a-problem|access-date=2021-08-09|website=MathOverflow}}</ref> This difficulty exists whenever the multiplicities of a matrix's eigenvalues are not knowable. On the other hand, the same problem does not exist for finding eigenvalues: the eigenvalues of a matrix are always computable.

We will now discuss how these difficulties manifest in the basic QR algorithm. This is illustrated in Figure 2. Recall that the ellipses represent positive-definite symmetric matrices. As the two eigenvalues of the input matrix approach each other, the input ellipse changes into a circle. A circle corresponds to a multiple of the identity matrix. A near-circle corresponds to a near-multiple of the identity matrix whose eigenvalues are nearly equal to the diagonal entries of the matrix. Therefore, the problem of approximately finding the eigenvalues is shown to be easy in that case.
But notice what happens to the semi-axes of the ellipses. An iteration of QR (or LR) tilts the semi-axes less and less as the input ellipse gets closer to being a circle. The eigenvectors can only be known when the semi-axes are parallel to the x-axis and y-axis. The number of iterations needed to achieve near-parallelism increases without bound as the input ellipse becomes more circular.

While it may be impossible to compute the [[Eigendecomposition of a matrix|eigendecomposition]] of an arbitrary symmetric matrix, it is always possible to perturb the matrix by an arbitrarily small amount and compute the eigendecomposition of the resulting matrix. In the case when the matrix is depicted as a near-circle, the matrix can be replaced with one whose depiction is a perfect circle. In that case, the matrix is a multiple of the identity matrix, and its eigendecomposition is immediate. Be aware, though, that the resulting [[eigenbasis]] can be quite far from the original eigenbasis.

=== Speeding up: Shifting and deflation ===
The slowdown when the ellipse gets more circular has a converse: when the ellipse gets more stretched, and less circular, the rotation of the ellipse becomes faster. Such a stretch can be induced when the matrix <math>M</math> which the ellipse represents gets replaced with <math>M-\lambda I</math>, where <math>\lambda</math> is approximately the smallest eigenvalue of <math>M</math>. In this case, the ratio of the two semi-axes of the ellipse approaches <math>\infty</math>. In higher dimensions, shifting like this makes the length of the smallest semi-axis of an ellipsoid small relative to the other semi-axes, which speeds up convergence to the smallest eigenvalue, but does not speed up convergence to the other eigenvalues. Shifting becomes useless once the smallest eigenvalue is fully determined, so the matrix must then be ''deflated'', which simply means removing its last row and column.

The issue with the unstable fixed point also needs to be addressed. The shifting heuristic is often designed to deal with this problem as well: practical shifts are often discontinuous and randomised. Wilkinson's shift, which is well suited to symmetric matrices like the ones we are visualising, is in particular discontinuous; its standard form is given below.
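For a symmetric tridiagonal matrix whose trailing <math>2 \times 2</math> principal submatrix is <math>\begin{pmatrix} a_{n-1} & b_{n-1} \\ b_{n-1} & a_n \end{pmatrix}</math>, Wilkinson's shift is the eigenvalue of that submatrix closer to <math>a_n</math>. Writing <math>\delta = (a_{n-1} - a_n)/2</math>, one standard closed form (see, e.g., Golub and Van Loan) is
<math display="block"> \mu = a_n - \frac{b_{n-1}^2}{\delta + \sgn(\delta)\sqrt{\delta^2 + b_{n-1}^2}}, </math>
where <math>\sgn(0)</math> is taken to be <math>1</math>. The <math>\sgn(\delta)</math> factor is the source of the discontinuity mentioned above: an arbitrarily small change in <math>\delta</math> near zero can switch the shift between the two eigenvalues of the submatrix.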
== The implicit QR algorithm ==
In modern computational practice, the QR algorithm is performed in an implicit version, which makes it easier to introduce multiple shifts.<ref name="golubvanloan" /> The matrix is first brought to upper Hessenberg form <math>A_0=QAQ^{\mathsf{T}}</math> as in the explicit version; then, at each step, the first column of <math>A_k</math> is transformed via a small-size Householder similarity transformation to the first column of <math>p(A_k)</math> {{Clarify|date=October 2022}} (or <math>p(A_k)e_1</math>), where <math>p(A_k)</math>, of degree <math>r</math>, is the polynomial that defines the shifting strategy (often <math>p(x)=(x-\lambda)(x-\bar\lambda)</math>, where <math>\lambda</math> and <math>\bar\lambda</math> are the two eigenvalues of the trailing <math>2 \times 2</math> principal submatrix of <math>A_k</math>, the so-called ''implicit double-shift''). Then successive Householder transformations of size <math>r+1</math> are performed in order to return the working matrix <math>A_k</math> to upper Hessenberg form. This operation is known as ''bulge chasing'', due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm. As in the first version, deflation is performed as soon as one of the sub-diagonal entries of <math>A_k</math> is sufficiently small.
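To see why the initial Householder transformation can be small, consider the implicit double-shift. For an upper Hessenberg <math>A_k</math>, write <math>p(x) = (x-\lambda)(x-\bar\lambda) = x^2 - sx + t</math>, where <math>s = \lambda + \bar\lambda</math> and <math>t = \lambda\bar\lambda</math> are the trace and determinant of the trailing <math>2 \times 2</math> principal submatrix (both real even when the shifts are complex conjugates, so all arithmetic stays real). A direct computation using the Hessenberg structure gives
<math display="block"> p(A_k)e_1 = \begin{pmatrix} a_{11}^2 + a_{12}a_{21} - s\,a_{11} + t \\ a_{21}(a_{11} + a_{22} - s) \\ a_{21}a_{32} \\ 0 \\ \vdots \\ 0 \end{pmatrix}, </math>
so only three entries are nonzero, and a Householder reflector of size <math>3</math> suffices to start the bulge chase.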
===Renaming proposal===
Since in the modern implicit version of the procedure no [[QR decomposition]]s are explicitly performed, some authors, for instance Watkins,<ref>{{cite book |last=Watkins |first=David S. |title=The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods |publisher=SIAM |location=Philadelphia, PA |year=2007 |isbn=978-0-89871-641-2 }}</ref> have suggested changing its name to ''Francis algorithm''. [[Gene H. Golub|Golub]] and [[Charles F. Van Loan|Van Loan]] use the term ''Francis QR step''.

== Interpretation and convergence ==
The QR algorithm can be seen as a more sophisticated variation of the basic [[Power iteration|"power" eigenvalue algorithm]]. Recall that the power algorithm repeatedly multiplies ''A'' times a single vector, normalizing after each iteration. The vector converges to an eigenvector of the largest eigenvalue. Instead, the QR algorithm works with a complete basis of vectors, using QR decomposition to renormalize (and orthogonalize). For a symmetric matrix ''A'', upon convergence, ''AQ'' = ''QΛ'', where ''Λ'' is the [[diagonal matrix]] of eigenvalues to which ''A'' converged, and where ''Q'' is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of ''Q'' are the eigenvectors.

== History ==
The QR algorithm was preceded by the ''LR algorithm'', which uses the [[LU decomposition]] instead of the QR decomposition. The QR algorithm is more stable, so the LR algorithm is rarely used nowadays. However, it represents an important step in the development of the QR algorithm.

The LR algorithm was developed in the early 1950s by [[Heinz Rutishauser]], who worked at that time as a research assistant of [[Eduard Stiefel]] at [[ETH Zurich]]. Stiefel suggested that Rutishauser use the sequence of moments ''y''<sub>0</sub><sup>T</sup> ''A''<sup>''k''</sup> ''x''<sub>0</sub>, ''k'' = 0, 1, ... (where ''x''<sub>0</sub> and ''y''<sub>0</sub> are arbitrary vectors) to find the eigenvalues of ''A''. Rutishauser took an algorithm of [[Alexander Aitken]] for this task and developed it into the ''quotient–difference algorithm'' or ''qd algorithm''. After arranging the computation in a suitable shape, he discovered that the qd algorithm is in fact the iteration ''A''<sub>''k''</sub> = ''L''<sub>''k''</sub>''U''<sub>''k''</sub> (LU decomposition), ''A''<sub>''k''+1</sub> = ''U''<sub>''k''</sub>''L''<sub>''k''</sub>, applied on a tridiagonal matrix, from which the LR algorithm follows.<ref>{{Citation | last1=Parlett | first1=Beresford N. | last2=Gutknecht | first2=Martin H. | title=From qd to LR, or, how were the qd and LR algorithms discovered? | doi=10.1093/imanum/drq003 | year=2011 | journal=IMA Journal of Numerical Analysis | issn=0272-4979 | volume=31 | issue=3 | pages=741–754 | hdl=20.500.11850/159536 | url=http://doc.rero.ch/record/299139/files/drq003.pdf }}</ref>

== Other variants ==
One variant of the QR algorithm, the ''Golub–Kahan–Reinsch algorithm'', starts by reducing a general matrix to a bidiagonal one.<ref>Bochkanov Sergey Anatolyevich. ALGLIB User Guide – General Matrix operations – Singular value decomposition. ALGLIB Project. 2010-12-11. URL: [http://www.alglib.net/matrixops/general/svd.php]. Accessed: 2010-12-11. (Archived by WebCite at https://www.webcitation.org/5utO4iSnR?url=http://www.alglib.net/matrixops/general/svd.php)</ref> This variant of the QR algorithm for the computation of [[singular values]] was first described by {{harvtxt|Golub|Kahan|1965}}. The [[LAPACK]] subroutine [http://www.netlib.org/lapack/double/dbdsqr.f DBDSQR] implements this [[iterative method]], with some modifications to cover the case where the singular values are very small {{harv|Demmel|Kahan|1990}}. Together with a first step using Householder reflections and, if appropriate, [[QR decomposition]], this forms the [http://www.netlib.org/lapack/double/dgesvd.f DGESVD] routine for the computation of the [[singular value decomposition]].

The QR algorithm can also be implemented in infinite dimensions with corresponding convergence results.<ref>{{cite journal |first1=Percy |last1=Deift |first2=Luenchau C. |last2=Li |first3=Carlos |last3=Tomei |title=Toda flows with infinitely many variables |journal=Journal of Functional Analysis |volume=64 |number=3 |pages=358–402 |year=1985 |doi=10.1016/0022-1236(85)90065-5 |doi-access=free }}</ref><ref>{{cite journal |first1=Matthew J. |last1=Colbrook |first2=Anders C. |last2=Hansen |title=On the infinite-dimensional QR algorithm |journal=Numerische Mathematik |volume=143 |number=1 |pages=17–83 |year=2019 |doi=10.1007/s00211-019-01047-5 |arxiv=2011.08172 |doi-access=free }}</ref>

== References ==
{{Reflist|30em}}

== Sources ==
* {{Cite journal | last1=Demmel | first1=James | author1-link=James Demmel | last2=Kahan | first2=William | author2-link=William Kahan | title=Accurate singular values of bidiagonal matrices | doi=10.1137/0911052 | year=1990 | journal=SIAM Journal on Scientific and Statistical Computing | volume=11 | issue=5 | pages=873–912 | citeseerx=10.1.1.48.3740 }}
* {{Cite journal | last1=Golub | first1=Gene H. | author1-link=Gene H. Golub | last2=Kahan | first2=William | author2-link=William Kahan | title=Calculating the singular values and pseudo-inverse of a matrix | jstor=2949777 | year=1965 | journal=Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis | volume=2 | issue=2 | pages=205–224 | doi=10.1137/0702016 | bibcode=1965SJNA....2..205G }}

== External links ==
* {{PlanetMath |urlname=eigenvalueproblem |title=Eigenvalue problem}}
* [http://www-users.math.umn.edu/~olver/aims_/qr.pdf Notes on orthogonal bases and the workings of the QR algorithm] by [[Peter J. Olver]]
* [https://web.archive.org/web/20081209042103/http://math.fullerton.edu/mathews/n2003/QRMethodMod.html Module for the QR Method]
* [https://github.com/nom-de-guerre/Matrices C++ Library]

{{Numerical linear algebra}}
{{DEFAULTSORT:Qr Algorithm}}
[[Category:Numerical linear algebra]]