===Decompositions===
A number of important [[matrix decomposition]]s {{harv|Golub|Van Loan|1996}} involve orthogonal matrices, including especially:

;[[QR decomposition|{{mvar|QR}} decomposition]]
: {{math|1=''M'' = ''QR''}}, {{mvar|Q}} orthogonal, {{mvar|R}} upper triangular
;[[Singular value decomposition]]
: {{math|1=''M'' = ''U''Σ''V''<sup>T</sup>}}, {{mvar|U}} and {{mvar|V}} orthogonal, {{math|Σ}} diagonal matrix
;[[Eigendecomposition of a matrix|Eigendecomposition of a symmetric matrix]] (decomposition according to the [[spectral theorem]])
: {{math|1=''S'' = ''Q''Λ''Q''<sup>T</sup>}}, {{mvar|S}} symmetric, {{mvar|Q}} orthogonal, {{math|Λ}} diagonal
;[[Polar decomposition]]
: {{math|1=''M'' = ''QS''}}, {{mvar|Q}} orthogonal, {{mvar|S}} symmetric positive-semidefinite

====Examples====
Consider an [[overdetermined system of linear equations]], as might occur with repeated measurements of a physical phenomenon to compensate for experimental errors. Write {{math|1=''A'''''x''' = '''b'''}}, where {{mvar|A}} is {{math|''m'' × ''n''}}, {{math|''m'' > ''n''}}. A {{mvar|QR}} decomposition reduces {{mvar|A}} to upper triangular {{mvar|R}}. For example, if {{mvar|A}} is {{nowrap|5 × 3}} then {{mvar|R}} has the form
<math display="block">R = \begin{bmatrix} \cdot & \cdot & \cdot \\ 0 & \cdot & \cdot \\ 0 & 0 & \cdot \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.</math>

The [[linear least squares (mathematics)|linear least squares]] problem is to find the {{math|'''x'''}} that minimizes {{math|{{norm|''A'''''x''' − '''b'''}}}}, which is equivalent to projecting {{math|'''b'''}} onto the subspace spanned by the columns of {{mvar|A}}. Assuming the columns of {{mvar|A}} (and hence {{mvar|R}}) are independent, the projection solution is found from {{math|1=''A''<sup>T</sup>''A'''''x''' = ''A''<sup>T</sup>'''b'''}}. Now {{math|''A''<sup>T</sup>''A''}} is square ({{math|''n'' × ''n''}}) and invertible, and also equal to {{math|''R''<sup>T</sup>''R''}}. But the lower rows of zeros in {{mvar|R}} are superfluous in the product, which is thus already in lower-triangular upper-triangular factored form, as in [[Gaussian elimination]] ([[Cholesky decomposition]]). Here orthogonality is important not only for reducing {{math|1=''A''<sup>T</sup>''A'' = (''R''<sup>T</sup>''Q''<sup>T</sup>)''QR''}} to {{math|''R''<sup>T</sup>''R''}}, but also for allowing solution without magnifying numerical problems.

In the case of a linear system which is underdetermined, or an otherwise non-[[invertible matrix]], singular value decomposition (SVD) is equally useful. With {{mvar|A}} factored as {{math|''U''Σ''V''<sup>T</sup>}}, a satisfactory solution uses the Moore–Penrose [[pseudoinverse]], {{math|''V''Σ<sup>+</sup>''U''<sup>T</sup>}}, where {{math|Σ<sup>+</sup>}} merely replaces each non-zero diagonal entry with its reciprocal. Set {{math|'''x'''}} to {{math|''V''Σ<sup>+</sup>''U''<sup>T</sup>'''b'''}}.

The case of a square invertible matrix also holds interest. Suppose, for example, that {{mvar|A}} is a {{nowrap|3 × 3}} rotation matrix which has been computed as the composition of numerous twists and turns. Floating point does not match the mathematical ideal of real numbers, so {{mvar|A}} has gradually lost its true orthogonality. A [[Gram–Schmidt process]] could [[orthogonalization|orthogonalize]] the columns, but it is not the most reliable, nor the most efficient, nor the most invariant method.
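The two solution routes just described can be sketched in a few lines of NumPy. This is a minimal illustration rather than a recommended implementation: the matrix {{mvar|A}} and vector {{math|'''b'''}} below are invented values, and production code would normally call a dedicated least-squares routine directly.

<syntaxhighlight lang="python">
import numpy as np

# A hypothetical overdetermined system: 5 equations, 3 unknowns (values invented for illustration).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Least squares via QR: with A = QR and Q having orthonormal columns,
# A^T A x = A^T b collapses to the small triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)              # reduced QR: Q is 5x3, R is 3x3 upper triangular
x_qr = np.linalg.solve(R, Q.T @ b)  # solve the 3x3 triangular system

# The same solution via the SVD pseudoinverse x = V Sigma^+ U^T b,
# which also covers rank-deficient or underdetermined cases.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_plus = np.where(s > 1e-12, 1.0 / s, 0.0)   # reciprocate only the non-zero singular values
x_svd = Vt.T @ (s_plus * (U.T @ b))

print(x_qr)
print(x_svd)   # agrees with x_qr when A has full column rank
</syntaxhighlight>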
The [[polar decomposition]] factors a matrix into a pair, one of which is the unique ''closest'' orthogonal matrix to the given matrix, or one of the closest if the given matrix is singular. (Closeness can be measured by any [[matrix norm]] invariant under an orthogonal change of basis, such as the spectral norm or the Frobenius norm.) For a near-orthogonal matrix, rapid convergence to the orthogonal factor can be achieved by a "[[Newton's method]]" approach due to {{harvtxt|Higham|1986}} ([[#CITEREFHigham1990|1990]]), repeatedly averaging the matrix with its inverse transpose. {{harvtxt|Dubrulle|1999}} has published an accelerated method with a convenient convergence test.

For example, consider a non-orthogonal matrix for which the simple averaging algorithm takes seven steps
<math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix} \rightarrow \begin{bmatrix}1.8125 & 0.0625\\3.4375 & 2.6875\end{bmatrix} \rightarrow \cdots \rightarrow \begin{bmatrix}0.8 & -0.6\\0.6 & 0.8\end{bmatrix}</math>
and which acceleration trims to two steps (with {{mvar|γ}} = 0.353553, 0.565685).
<math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix} \rightarrow \begin{bmatrix}1.41421 & -1.06066\\1.06066 & 1.41421\end{bmatrix} \rightarrow \begin{bmatrix}0.8 & -0.6\\0.6 & 0.8\end{bmatrix}</math>

Gram–Schmidt yields an inferior solution, shown by a Frobenius distance of 8.28659 instead of the minimum 8.12404.
<math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix} \rightarrow \begin{bmatrix}0.393919 & -0.919145\\0.919145 & 0.393919\end{bmatrix}</math>
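A minimal NumPy sketch of the simple (unaccelerated) averaging iteration follows; the helper name <code>nearest_orthogonal</code> and the tolerance are chosen here only for illustration. Applied to the {{nowrap|2 × 2}} example above, it reproduces the orthogonal polar factor and the minimal Frobenius distance of 8.12404.

<syntaxhighlight lang="python">
import numpy as np

# Higham-style iteration for the orthogonal polar factor: repeatedly average
# the matrix with its inverse transpose.  This is the plain averaging form,
# not Dubrulle's accelerated variant.
def nearest_orthogonal(M, tol=1e-12, max_iter=100):   # hypothetical helper, for illustration
    Q = M.astype(float)
    for _ in range(max_iter):
        Q_next = 0.5 * (Q + np.linalg.inv(Q).T)
        if np.linalg.norm(Q_next - Q, ord='fro') < tol:
            return Q_next
        Q = Q_next
    return Q

M = np.array([[3.0, 1.0],
              [7.0, 5.0]])
Q = nearest_orthogonal(M)
print(Q)                                   # approx. [[0.8, -0.6], [0.6, 0.8]], as in the example above
print(np.linalg.norm(M - Q, ord='fro'))    # approx. 8.12404, the minimal Frobenius distance
</syntaxhighlight>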