In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors.
One way to express this is <math display="block">Q^\mathrm{T} Q = Q Q^\mathrm{T} = I,</math> where <math>Q^\mathrm{T}</math> is the transpose of Q and I is the identity matrix.
This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse: <math display="block">Q^\mathrm{T}=Q^{-1},</math> where <math>Q^{-1}</math> is the inverse of Q.
An orthogonal matrix Q is necessarily invertible (with inverse <math>Q^{-1} = Q^\mathrm{T}</math>), unitary (<math>Q^{-1} = Q^*</math>), where <math>Q^*</math> is the Hermitian adjoint (conjugate transpose) of Q, and therefore normal (<math>Q^*Q = QQ^*</math>) over the real numbers. The determinant of any orthogonal matrix is either +1 or −1. As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection. In other words, it is a unitary transformation.
The set of n × n orthogonal matrices, under multiplication, forms the group O(n), known as the orthogonal group. The subgroup SO(n) consisting of orthogonal matrices with determinant +1 is called the special orthogonal group, and each of its elements is a special orthogonal matrix. As a linear transformation, every special orthogonal matrix acts as a rotation.
Overview
An orthogonal matrix is the real specialization of a unitary matrix, and thus always a normal matrix. Although we consider only real matrices here, the definition can be used for matrices with entries from any field. However, orthogonal matrices arise naturally from dot products, and for matrices of complex numbers that leads instead to the unitary requirement. Orthogonal matrices preserve the dot product,<ref>"Paul's online math notes", Paul Dawkins, Lamar University, 2008. Theorem 3(c)</ref> so, for vectors u and v in an n-dimensional real Euclidean space, <math display="block">{\mathbf u} \cdot {\mathbf v} = \left(Q {\mathbf u}\right) \cdot \left(Q {\mathbf v}\right) </math> where Q is an orthogonal matrix. To see the inner product connection, consider a vector v in an n-dimensional real Euclidean space. Written with respect to an orthonormal basis, the squared length of v is <math>{\mathbf v}^\mathrm{T}{\mathbf v}</math>. If a linear transformation, in matrix form Qv, preserves vector lengths, then <math display="block">{\mathbf v}^\mathrm{T}{\mathbf v} = (Q{\mathbf v})^\mathrm{T}(Q{\mathbf v}) = {\mathbf v}^\mathrm{T} Q^\mathrm{T} Q {\mathbf v} .</math>
Thus finite-dimensional linear isometries—rotations, reflections, and their combinations—produce orthogonal matrices. The converse is also true: orthogonal matrices imply orthogonal transformations. However, linear algebra includes orthogonal transformations between spaces which may be neither finite-dimensional nor of the same dimension, and these have no orthogonal matrix equivalent.
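As a quick numerical illustration of this preservation property (a minimal sketch assuming NumPy; the angle and vectors are arbitrary choices), one can check both the defining identity and the invariance of the dot product:
<syntaxhighlight lang="python">
import numpy as np

theta = 0.7  # arbitrary angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])

print(np.allclose(Q.T @ Q, np.eye(2)))        # True: Q^T Q = I
print(np.isclose(u @ v, (Q @ u) @ (Q @ v)))   # True: the dot product is preserved
</syntaxhighlight>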
Orthogonal matrices are important for a number of reasons, both theoretical and practical. The n × n orthogonal matrices form a group under matrix multiplication, the orthogonal group denoted by O(n), which—with its subgroups—is widely used in mathematics and the physical sciences. For example, the point group of a molecule is a subgroup of O(3). Because floating point versions of orthogonal matrices have advantageous properties, they are key to many algorithms in numerical linear algebra, such as QR decomposition. As another example, with appropriate normalization the discrete cosine transform (used in MP3 compression) is represented by an orthogonal matrix.
Examples
Below are a few examples of small orthogonal matrices and possible interpretations.
- <math>\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}</math> (identity transformation)
- <math>\begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix}</math> (rotation about the origin)
- <math>\begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}</math> (reflection across the x-axis)
- <math>\begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}</math> (permutation of coordinate axes)
Elementary constructions
Lower dimensions
The simplest orthogonal matrices are the 1 × 1 matrices [1] and [−1], which we can interpret as the identity and a reflection of the real line across the origin.
The 2 × 2 matrices have the form <math display="block">\begin{bmatrix} p & t\\ q & u \end{bmatrix},</math> which orthogonality requires satisfy the three equations <math display="block">\begin{align} 1 & = p^2+t^2, \\ 1 & = q^2+u^2, \\ 0 & = pq+tu. \end{align}</math>
In consideration of the first equation, without loss of generality let <math>p = \cos \theta</math>, <math>q = \sin \theta</math>; then either <math>t = -q</math>, <math>u = p</math> or <math>t = q</math>, <math>u = -p</math>. We can interpret the first case as a rotation by θ (where θ = 0 is the identity), and the second as a reflection across a line at an angle of θ/2.
<math display="block"> \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix}\text{ (rotation), }\qquad \begin{bmatrix} \cos \theta & \sin \theta \\ \sin \theta & -\cos \theta \\ \end{bmatrix}\text{ (reflection)} </math>
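The following sketch (assuming NumPy; the angle is an arbitrary choice) builds both forms and distinguishes them by their determinants:
<syntaxhighlight lang="python">
import numpy as np

theta = np.deg2rad(30)  # arbitrary angle
c, s = np.cos(theta), np.sin(theta)

rotation   = np.array([[c, -s], [s,  c]])
reflection = np.array([[c,  s], [s, -c]])

for name, A in [("rotation", rotation), ("reflection", reflection)]:
    assert np.allclose(A.T @ A, np.eye(2))   # both matrices are orthogonal
    print(name, round(np.linalg.det(A)))     # +1 for the rotation, -1 for the reflection
</syntaxhighlight>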
The special case of the reflection matrix with θ = 90° generates a reflection about the line at 45° given by y = x and therefore exchanges x and y; it is a permutation matrix, with a single 1 in each column and row (and otherwise 0): <math display="block">\begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}.</math>
The identity is also a permutation matrix.
A reflection is its own inverse, which implies that a reflection matrix is symmetric (equal to its transpose) as well as orthogonal. The product of two rotation matrices is a rotation matrix, and the product of two reflection matrices is also a rotation matrix.
Higher dimensions
Regardless of the dimension, it is always possible to classify orthogonal matrices as purely rotational or not, but for 3 × 3 matrices and larger the non-rotational matrices can be more complicated than reflections. For example, <math display="block"> \begin{bmatrix} -1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1 \end{bmatrix}\text{ and } \begin{bmatrix} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & -1 \end{bmatrix}</math>
represent an inversion through the origin and a rotoinversion about the z-axis, respectively.
Rotations become more complicated in higher dimensions; they can no longer be completely characterized by one angle, and may affect more than one planar subspace. It is common to describe a 3 × 3 rotation matrix in terms of an axis and angle, but this only works in three dimensions. Above three dimensions two or more angles are needed, each associated with a plane of rotation.
However, we have elementary building blocks for permutations, reflections, and rotations that apply in general.
Primitives
The most elementary permutation is a transposition, obtained from the identity matrix by exchanging two rows. Any n × n permutation matrix can be constructed as a product of no more than n − 1 transpositions.
A Householder reflection is constructed from a non-null vector v as <math display="block">Q = I - 2 \frac{{\mathbf v}{\mathbf v}^\mathrm{T}}{{\mathbf v}^\mathrm{T}{\mathbf v}} .</math>
Here the numerator is a symmetric matrix while the denominator is a number, the squared magnitude of v. This is a reflection in the hyperplane perpendicular to v (negating any vector component parallel to v). If v is a unit vector, then <math>Q = I - 2{\mathbf v}{\mathbf v}^\mathrm{T}</math> suffices. A Householder reflection is typically used to simultaneously zero the lower part of a column. Any orthogonal matrix of size n × n can be constructed as a product of at most n such reflections.
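A minimal sketch (assuming NumPy; the matrix A is an arbitrary example) of a Householder reflection built from the first column of a matrix so that the lower part of that column is zeroed:
<syntaxhighlight lang="python">
import numpy as np

def householder(v):
    """Return Q = I - 2 v v^T / (v^T v), the reflection across the hyperplane perpendicular to v."""
    v = v.reshape(-1, 1)
    return np.eye(len(v)) - 2.0 * (v @ v.T) / (v.T @ v)

A = np.array([[4.0, 1.0],
              [2.0, 3.0],
              [2.0, 1.0]])

x = A[:, 0].copy()
v = x.copy()
v[0] += np.sign(x[0]) * np.linalg.norm(x)   # choose the sign that avoids cancellation
Q = householder(v)

print(np.allclose(Q.T @ Q, np.eye(3)))      # True: Q is orthogonal (and symmetric)
print(np.round(Q @ A, 10))                  # the first column is now (-||x||, 0, 0)
</syntaxhighlight>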
A Givens rotation acts on a two-dimensional (planar) subspace spanned by two coordinate axes, rotating by a chosen angle. It is typically used to zero a single subdiagonal entry. Any rotation matrix of size n × n can be constructed as a product of at most n(n − 1)/2 such rotations. In the case of 3 × 3 matrices, three such rotations suffice; and by fixing the sequence we can thus describe all 3 × 3 rotation matrices (though not uniquely) in terms of the three angles used, often called Euler angles.
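A sketch (assuming NumPy; the matrix and the pair of rows are arbitrary choices) of a single Givens rotation that zeroes one subdiagonal entry:
<syntaxhighlight lang="python">
import numpy as np

def givens(a, b):
    """Return (c, s) with c^2 + s^2 = 1 such that the rotation maps (a, b) to (r, 0)."""
    r = np.hypot(a, b)
    return a / r, b / r

A = np.array([[6.0, 5.0],
              [4.0, 3.0],
              [0.0, 1.0]])

c, s = givens(A[0, 0], A[1, 0])
G = np.eye(3)
G[0, 0], G[0, 1] = c, s
G[1, 0], G[1, 1] = -s, c

print(np.allclose(G @ G.T, np.eye(3)))   # True: G is orthogonal
print(np.round(G @ A, 10))               # the (1, 0) entry has been zeroed
</syntaxhighlight>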
A Jacobi rotation has the same form as a Givens rotation, but is used to zero both off-diagonal entries of a 2 × 2 symmetric submatrix.
Properties
Matrix properties
A real square matrix is orthogonal if and only if its columns form an orthonormal basis of the Euclidean space <math>\mathbb{R}^n</math> with the ordinary Euclidean dot product, which is the case if and only if its rows form an orthonormal basis of <math>\mathbb{R}^n</math>. It might be tempting to suppose a matrix with orthogonal (not orthonormal) columns would be called an orthogonal matrix, but such matrices have no special interest and no special name; they only satisfy <math>M^\mathrm{T}M = D</math>, with D a diagonal matrix.
The determinant of any orthogonal matrix is +1 or −1. This follows from basic facts about determinants, as follows: <math display="block">1=\det(I)=\det\left(Q^\mathrm{T}Q\right)=\det\left(Q^\mathrm{T}\right)\det(Q)=\bigl(\det(Q)\bigr)^2 .</math>
The converse is not true; having a determinant of ±1 is no guarantee of orthogonality, even with orthogonal columns, as shown by the following counterexample. <math display="block">\begin{bmatrix} 2 & 0 \\ 0 & \frac{1}{2} \end{bmatrix}</math>
With permutation matrices the determinant matches the signature, being +1 or −1 as the parity of the permutation is even or odd, for the determinant is an alternating function of the rows.
Stronger than the determinant restriction is the fact that an orthogonal matrix can always be diagonalized over the complex numbers to exhibit a full set of eigenvalues, all of which must have (complex) modulus 1.
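For instance (a brief sketch assuming NumPy), the rotoinversion shown earlier has eigenvalues i, −i, and −1, all of modulus 1:
<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[0., -1.,  0.],
              [1.,  0.,  0.],
              [0.,  0., -1.]])         # the rotoinversion from the example above

eigvals = np.linalg.eigvals(Q)
print(eigvals)                          # approximately i, -i, -1
print(np.allclose(np.abs(eigvals), 1))  # True: every eigenvalue lies on the unit circle
</syntaxhighlight>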
Group properties
The inverse of every orthogonal matrix is again orthogonal, as is the matrix product of two orthogonal matrices. In fact, the set of all n × n orthogonal matrices satisfies all the axioms of a group. It is a compact Lie group of dimension n(n − 1)/2, called the orthogonal group and denoted by O(n).
The orthogonal matrices whose determinant is +1 form a path-connected normal subgroup of O(n) of index 2, the special orthogonal group SO(n) of rotations. The quotient group O(n)/SO(n) is isomorphic to O(1), with the projection map choosing [+1] or [−1] according to the determinant. Orthogonal matrices with determinant −1 do not include the identity, and so do not form a subgroup but only a coset; it is also (separately) connected. Thus each orthogonal group falls into two pieces; and because the projection map splits, O(n) is a semidirect product of SO(n) by O(1). In practical terms, a comparable statement is that any orthogonal matrix can be produced by taking a rotation matrix and possibly negating one of its columns, as we saw with 2 × 2 matrices. If n is odd, then the semidirect product is in fact a direct product, and any orthogonal matrix can be produced by taking a rotation matrix and possibly negating all of its columns. This follows from the property of determinants that negating a column negates the determinant, and thus negating an odd (but not even) number of columns negates the determinant.
Now consider (n + 1) × (n + 1) orthogonal matrices with bottom right entry equal to 1. The remainder of the last column (and last row) must be zeros, and the product of any two such matrices has the same form. The rest of the matrix is an n × n orthogonal matrix; thus O(n) is a subgroup of O(n + 1) (and of all higher groups).
<math display="block">\begin{bmatrix}
& & & 0\\ & \mathrm{O}(n) & & \vdots\\ & & & 0\\ 0 & \cdots & 0 & 1
\end{bmatrix}</math>
Since an elementary reflection in the form of a Householder matrix can reduce any orthogonal matrix to this constrained form, a series of such reflections can bring any orthogonal matrix to the identity; thus an orthogonal group is a reflection group. The last column can be fixed to any unit vector, and each choice gives a different copy of O(n) in O(n + 1); in this way O(n + 1) is a bundle over the unit sphere <math>S^n</math> with fiber O(n).
Similarly, SO(n) is a subgroup of SO(n + 1); and any special orthogonal matrix can be generated by Givens plane rotations using an analogous procedure. The bundle structure persists: <math>\mathrm{SO}(n) \hookrightarrow \mathrm{SO}(n+1) \to S^n</math>. A single rotation can produce a zero in the first row of the last column, and a series of n − 1 rotations will zero all but the last row of the last column of an n × n rotation matrix. Since the planes are fixed, each rotation has only one degree of freedom, its angle. By induction, SO(n) therefore has <math display="block">(n-1) + (n-2) + \cdots + 1 = \frac{n(n-1)}{2}</math> degrees of freedom, and so does O(n).
Permutation matrices are simpler still; they form, not a Lie group, but only a finite group, the order n! symmetric group <math>S_n</math>. By the same kind of argument, <math>S_n</math> is a subgroup of <math>S_{n+1}</math>. The even permutations produce the subgroup of permutation matrices of determinant +1, the order n!/2 alternating group.
Canonical form
More broadly, the effect of any orthogonal matrix separates into independent actions on orthogonal two-dimensional subspaces. That is, if Q is special orthogonal then one can always find an orthogonal matrix P, a (rotational) change of basis, that brings Q into block diagonal form:
<math display="block">P^\mathrm{T}QP = \begin{bmatrix} R_1 & & \\ & \ddots & \\ & & R_k \end{bmatrix}\ (n\text{ even}), \ P^\mathrm{T}QP = \begin{bmatrix} R_1 & & & \\ & \ddots & & \\ & & R_k & \\ & & & 1 \end{bmatrix}\ (n\text{ odd}).</math>
where the matrices <math>R_1, \ldots, R_k</math> are 2 × 2 rotation matrices, and with the remaining entries zero. Exceptionally, a rotation block may be diagonal, <math>\pm I</math>. Thus, negating one column if necessary, and noting that a 2 × 2 reflection diagonalizes to a +1 and −1, any orthogonal matrix can be brought to the form <math display="block">P^\mathrm{T}QP = \begin{bmatrix} \begin{matrix}R_1 & & \\ & \ddots & \\ & & R_k\end{matrix} & 0 \\ 0 & \begin{matrix}\pm 1 & & \\ & \ddots & \\ & & \pm 1\end{matrix} \\ \end{bmatrix}.</math>
The matrices <math>R_1, \ldots, R_k</math> give conjugate pairs of eigenvalues lying on the unit circle in the complex plane; so this decomposition confirms that all eigenvalues have absolute value 1. If n is odd, there is at least one real eigenvalue, +1 or −1; for a 3 × 3 rotation, the eigenvector associated with +1 is the rotation axis.
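A sketch of this canonical form (assuming NumPy and SciPy; the example matrix is the cyclic permutation of the axes, a rotation by 120° about the axis (1, 1, 1)): because an orthogonal matrix is normal, its real Schur form is block diagonal, with 2 × 2 rotation blocks playing the role of the <math>R_i</math>:
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import schur

Q = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])           # cyclic permutation of the coordinate axes

# Real Schur decomposition Q = P T P^T with P orthogonal; for an orthogonal
# (hence normal) Q, T is block diagonal: one 2x2 rotation block and a lone +1.
T, P = schur(Q, output='real')
print(np.round(T, 6))
print(np.allclose(P @ T @ P.T, Q))      # True
</syntaxhighlight>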
Lie algebra
Suppose the entries of Q are differentiable functions of t, and that t = 0 gives Q = I. Differentiating the orthogonality condition <math display="block">Q^\mathrm{T} Q = I </math> yields <math display="block">\dot{Q}^\mathrm{T} Q + Q^\mathrm{T} \dot{Q} = 0 .</math>
Evaluation at t = 0 (where Q = I) then implies <math display="block">\dot{Q}^\mathrm{T} = -\dot{Q} .</math>
In Lie group terms, this means that the Lie algebra of an orthogonal matrix group consists of skew-symmetric matrices. Going the other direction, the matrix exponential of any skew-symmetric matrix is an orthogonal matrix (in fact, special orthogonal).
For example, the three-dimensional object physics calls angular velocity is a differential rotation, thus a vector in the Lie algebra <math>\mathfrak{so}(3)</math> tangent to SO(3). Given <math>\boldsymbol{\omega} = (x\theta,\, y\theta,\, z\theta)</math>, with <math>\mathbf{v} = (x, y, z)</math> being a unit vector, the correct skew-symmetric matrix form of <math>\boldsymbol{\omega}</math> is <math display="block"> \Omega = \begin{bmatrix} 0 & -z\theta & y\theta \\ z\theta & 0 & -x\theta \\ -y\theta & x\theta & 0 \end{bmatrix} .</math>
The exponential of this is the orthogonal matrix for rotation around axis <math>\mathbf{v}</math> by angle θ; setting <math>c = \cos \frac{\theta}{2}</math>, <math>s = \sin \frac{\theta}{2}</math>, <math display="block">\exp(\Omega) = \begin{bmatrix} 1 - 2s^2 + 2x^2 s^2 & 2xy s^2 - 2z sc & 2xz s^2 + 2y sc\\ 2xy s^2 + 2z sc & 1 - 2s^2 + 2y^2 s^2 & 2yz s^2 - 2x sc\\ 2xz s^2 - 2y sc & 2yz s^2 + 2x sc & 1 - 2s^2 + 2z^2 s^2 \end{bmatrix}.</math>
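A sketch (assuming NumPy and SciPy; the axis and angle are arbitrary choices) confirming that the matrix exponential of Ω is a special orthogonal matrix with the expected rotation angle:
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

x, y, z = np.array([1.0, 2.0, 2.0]) / 3.0   # unit axis vector
theta = 0.9                                  # arbitrary rotation angle

Omega = theta * np.array([[ 0., -z,  y],
                          [ z,  0., -x],
                          [-y,  x,  0.]])

R = expm(Omega)                              # rotation about (x, y, z) by angle theta
print(np.allclose(R.T @ R, np.eye(3)))                 # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))               # True: R is special orthogonal
print(np.isclose(np.trace(R), 1 + 2 * np.cos(theta)))  # True: trace of a 3D rotation is 1 + 2cos(theta)
</syntaxhighlight>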
Numerical linear algebra
Benefits
Numerical analysis takes advantage of many of the properties of orthogonal matrices for numerical linear algebra, and they arise naturally. For example, it is often desirable to compute an orthonormal basis for a space, or an orthogonal change of bases; both take the form of orthogonal matrices. Having determinant ±1 and all eigenvalues of magnitude 1 is of great benefit for numeric stability. One implication is that the condition number is 1 (which is the minimum), so errors are not magnified when multiplying with an orthogonal matrix. Many algorithms use orthogonal matrices like Householder reflections and Givens rotations for this reason. It is also helpful that, not only is an orthogonal matrix invertible, but its inverse is available essentially free, by exchanging indices.
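For instance (a minimal sketch assuming NumPy; the angle is arbitrary), the 2-norm condition number of an orthogonal matrix is 1, and its inverse is simply its transpose:
<syntaxhighlight lang="python">
import numpy as np

theta = 1.2
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.isclose(np.linalg.cond(Q, 2), 1.0))   # condition number 1: errors are not magnified
print(np.allclose(np.linalg.inv(Q), Q.T))      # the inverse comes "for free" as the transpose
</syntaxhighlight>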
Permutations are essential to the success of many algorithms, including the workhorse Gaussian elimination with partial pivoting (where permutations do the pivoting). However, they rarely appear explicitly as matrices; their special form allows more efficient representation, such as a list of n indices.
Likewise, algorithms using Householder and Givens matrices typically use specialized methods of multiplication and storage. For example, a Givens rotation affects only two rows of a matrix it multiplies, changing a full multiplication of order <math>n^3</math> to a much more efficient order n. When uses of these reflections and rotations introduce zeros in a matrix, the space vacated is enough to store sufficient data to reproduce the transform, and to do so robustly. (Following Template:Harvtxt, we do not store a rotation angle, which is both expensive and badly behaved.)
Decompositions
A number of important matrix decompositions Template:Harv involve orthogonal matrices, including especially the following (see the sketch after the list):
- QR decomposition
- <math>M = QR</math>, Q orthogonal, R upper triangular
- Singular value decomposition
- <math>M = U\Sigma V^\mathrm{T}</math>, U and V orthogonal, Σ a diagonal matrix
- Eigendecomposition of a symmetric matrix (decomposition according to the spectral theorem)
- <math>S = Q\Lambda Q^\mathrm{T}</math>, S symmetric, Q orthogonal, Λ diagonal
- Polar decomposition
- <math>M = QS</math>, Q orthogonal, S symmetric positive-semidefinite
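The sketch below (assuming NumPy and SciPy; the matrices M and S are arbitrary examples) exercises each of the four decompositions:
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))      # arbitrary example matrix
S = M + M.T                          # arbitrary symmetric matrix

Q, R = np.linalg.qr(M)               # QR: Q orthogonal, R upper triangular
U, sigma, Vt = np.linalg.svd(M)      # SVD: U, V orthogonal, sigma the singular values
lam, V = np.linalg.eigh(S)           # spectral theorem: S = V diag(lam) V^T
W, P = polar(M)                      # polar: W orthogonal, P symmetric positive-semidefinite

print(np.allclose(Q @ R, M))
print(np.allclose(U @ np.diag(sigma) @ Vt, M))
print(np.allclose(V @ np.diag(lam) @ V.T, S))
print(np.allclose(W @ P, M))
</syntaxhighlight>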
Examples
Consider an overdetermined system of linear equations, as might occur with repeated measurements of a physical phenomenon to compensate for experimental errors. Write <math>A\mathbf{x} = \mathbf{b}</math>, where A is m × n with m > n. A QR decomposition reduces A to upper triangular R. For example, if A is 5 × 3 then R has the form <math display="block">R = \begin{bmatrix} \cdot & \cdot & \cdot \\ 0 & \cdot & \cdot \\ 0 & 0 & \cdot \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.</math>
The linear least squares problem is to find the <math>\mathbf{x}</math> that minimizes <math>\|A\mathbf{x} - \mathbf{b}\|</math>, which is equivalent to projecting <math>\mathbf{b}</math> to the subspace spanned by the columns of A. Assuming the columns of A (and hence R) are independent, the projection solution is found from <math>A^\mathrm{T}A\mathbf{x} = A^\mathrm{T}\mathbf{b}</math>. Now <math>A^\mathrm{T}A</math> is square (n × n) and invertible, and also equal to <math>R^\mathrm{T}R</math>. But the lower rows of zeros in R are superfluous in the product, which is thus already in lower-triangular upper-triangular factored form, as in Gaussian elimination (Cholesky decomposition). Here orthogonality is important not only for reducing <math>A^\mathrm{T}A = (R^\mathrm{T}Q^\mathrm{T})QR</math> to <math>R^\mathrm{T}R</math>, but also for allowing solution without magnifying numerical problems.
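A sketch of this route (assuming NumPy; A and b are arbitrary example data): reduce A with a QR decomposition, then solve the small triangular system <math>R\mathbf{x} = Q^\mathrm{T}\mathbf{b}</math>:
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.],
              [1., 4.],
              [1., 5.]])                    # 5 x 2, overdetermined
b = np.array([2.1, 3.9, 6.2, 7.8, 10.1])    # repeated (noisy) measurements

Q, R = np.linalg.qr(A)                      # "reduced" QR: Q is 5 x 2, R is 2 x 2
x = np.linalg.solve(R, Q.T @ b)             # solve the triangular system R x = Q^T b

print(x)                                                      # least-squares coefficients
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True: agrees with lstsq
</syntaxhighlight>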
In the case of a linear system which is underdetermined, or an otherwise non-invertible matrix, singular value decomposition (SVD) is equally useful. With A factored as <math>U\Sigma V^\mathrm{T}</math>, a satisfactory solution uses the Moore–Penrose pseudoinverse, <math>V\Sigma^+ U^\mathrm{T}</math>, where <math>\Sigma^+</math> merely replaces each non-zero diagonal entry with its reciprocal. Set <math>\mathbf{x}</math> to <math>V\Sigma^+ U^\mathrm{T}\mathbf{b}</math>.
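A sketch (assuming NumPy; the rank-deficient A and the vector b are arbitrary examples) of the pseudoinverse solution <math>\mathbf{x} = V\Sigma^+ U^\mathrm{T}\mathbf{b}</math>:
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])              # rank 1, so A^T A is not invertible
b = np.array([1., 2., 3.])

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
sigma_pinv = np.array([1.0 / s if s > 1e-12 else 0.0 for s in sigma])  # reciprocal of each non-zero singular value
x = Vt.T @ (sigma_pinv * (U.T @ b))                                    # x = V Sigma^+ U^T b

print(x)
print(np.allclose(x, np.linalg.pinv(A) @ b))   # True: matches NumPy's pseudoinverse
</syntaxhighlight>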
The case of a square invertible matrix also holds interest. Suppose, for example, that Q is a 3 × 3 rotation matrix which has been computed as the composition of numerous twists and turns. Floating point does not match the mathematical ideal of real numbers, so Q has gradually lost its true orthogonality. A Gram–Schmidt process could orthogonalize the columns, but it is not the most reliable, nor the most efficient, nor the most invariant method. The polar decomposition factors a matrix into a pair, one of which is the unique closest orthogonal matrix to the given matrix, or one of the closest if the given matrix is singular. (Closeness can be measured by any matrix norm invariant under an orthogonal change of basis, such as the spectral norm or the Frobenius norm.) For a near-orthogonal matrix, rapid convergence to the orthogonal factor can be achieved by a "Newton's method" approach due to Template:Harvtxt (1990), repeatedly averaging the matrix with its inverse transpose. Template:Harvtxt has published an accelerated method with a convenient convergence test.
For example, consider a non-orthogonal matrix for which the simple averaging algorithm takes seven steps <math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix} \rightarrow \begin{bmatrix}1.8125 & 0.0625\\3.4375 & 2.6875\end{bmatrix} \rightarrow \cdots \rightarrow \begin{bmatrix}0.8 & -0.6\\0.6 & 0.8\end{bmatrix}</math> and which acceleration trims to two steps (with the acceleration parameter taking the values 0.353553 and 0.565685).
<math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix} \rightarrow \begin{bmatrix}1.41421 & -1.06066\\1.06066 & 1.41421\end{bmatrix} \rightarrow \begin{bmatrix}0.8 & -0.6\\0.6 & 0.8\end{bmatrix}</math>
Gram–Schmidt yields an inferior solution, shown by a Frobenius distance of 8.28659 instead of the minimum 8.12404.
<math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix} \rightarrow \begin{bmatrix}0.393919 & -0.919145\\0.919145 & 0.393919\end{bmatrix}</math>
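A sketch (assuming NumPy) of the simple averaging iteration on the example above, repeatedly replacing Q by the average of Q and its inverse transpose until it settles on the nearest rotation:
<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[3., 1.],
              [7., 5.]])

for step in range(1, 20):
    Q_next = 0.5 * (Q + np.linalg.inv(Q.T))   # average Q with its inverse transpose
    if np.allclose(Q_next, Q, atol=1e-12):
        break
    Q = Q_next

print(step)              # converges in a handful of steps for this matrix
print(np.round(Q, 6))    # approximately [[0.8, -0.6], [0.6, 0.8]]
</syntaxhighlight>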
Randomization
Some numerical applications, such as Monte Carlo methods and exploration of high-dimensional data spaces, require generation of uniformly distributed random orthogonal matrices. In this context, "uniform" is defined in terms of Haar measure, which essentially requires that the distribution not change if multiplied by any freely chosen orthogonal matrix. Orthogonalizing matrices with independent uniformly distributed random entries does not result in uniformly distributed orthogonal matrices, but the QR decomposition of independent normally distributed random entries does, as long as the diagonal of R contains only positive entries Template:Harv. Template:Harvtxt replaced this with a more efficient idea that Template:Harvtxt later generalized as the "subgroup algorithm" (in which form it works just as well for permutations and rotations). To generate an (n + 1) × (n + 1) orthogonal matrix, take an n × n one and a uniformly distributed unit vector of dimension n + 1. Construct a Householder reflection from the vector, then apply it to the smaller matrix (embedded in the larger size with a 1 at the bottom right corner).
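A sketch (assuming NumPy) of the QR-based recipe: factor a matrix of independent standard normal entries, then flip signs so the diagonal of R is positive; the resulting Q is Haar-distributed:
<syntaxhighlight lang="python">
import numpy as np

def random_orthogonal(n, rng):
    """Draw an n x n orthogonal matrix uniformly (Haar measure)."""
    A = rng.standard_normal((n, n))     # independent N(0, 1) entries
    Q, R = np.linalg.qr(A)
    signs = np.sign(np.diag(R))         # force the diagonal of R to be positive
    return Q * signs                    # rescale the columns of Q accordingly

rng = np.random.default_rng(42)
Q = random_orthogonal(4, rng)
print(np.allclose(Q.T @ Q, np.eye(4)))  # True
</syntaxhighlight>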
Nearest orthogonal matrix
The problem of finding the orthogonal matrix Q nearest a given matrix M is related to the Orthogonal Procrustes problem. There are several different ways to get the unique solution, the simplest of which is taking the singular value decomposition of M and replacing the singular values with ones. Another method expresses Q explicitly but requires the use of a matrix square root:<ref>"Finding the Nearest Orthonormal Matrix", Berthold K.P. Horn, MIT.</ref> <math display="block">Q = M \left(M^\mathrm{T} M\right)^{-\frac 1 2}</math>
This may be combined with the Babylonian method for extracting the square root of a matrix to give a recurrence which converges to an orthogonal matrix quadratically: <math display="block">Q_{n + 1} = 2 M \left(Q_n^{-1} M + M^\mathrm{T} Q_n\right)^{-1}</math> where <math>Q_0 = M</math>.
These iterations are stable provided the condition number of M is less than three.<ref>"Newton's Method for the Matrix Square Root", Nicholas J. Higham, Mathematics of Computation, Volume 46, Number 174, 1986.</ref>
Using a first-order approximation of the inverse and the same initialization results in the modified iteration:
<math display="block">N_{n} = Q_n^\mathrm{T} Q_n</math> <math display="block">P_{n} = \frac 1 2 Q_n N_{n}</math> <math display="block">Q_{n + 1} = 2 Q_n + P_n N_n - 3 P_n</math>
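A sketch (assuming NumPy; M is the same example matrix used earlier) of the SVD route, which replaces the singular values of M with ones, checked against the explicit formula <math>Q = M \left(M^\mathrm{T} M\right)^{-\frac 1 2}</math>:
<syntaxhighlight lang="python">
import numpy as np

M = np.array([[3., 1.],
              [7., 5.]])

# Nearest orthogonal matrix via the SVD: replace the singular values with ones.
U, _, Vt = np.linalg.svd(M)
Q_svd = U @ Vt

# The explicit formula Q = M (M^T M)^(-1/2), with the inverse square root
# computed from an eigendecomposition of the symmetric matrix M^T M.
lam, V = np.linalg.eigh(M.T @ M)
Q_formula = M @ (V @ np.diag(lam ** -0.5) @ V.T)

print(np.round(Q_svd, 6))             # approximately [[0.8, -0.6], [0.6, 0.8]]
print(np.allclose(Q_svd, Q_formula))  # True: the two expressions agree
</syntaxhighlight>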
Spin and pin
A subtle technical problem afflicts some uses of orthogonal matrices. Not only are the group components with determinant +1 and −1 not connected to each other, even the +1 component, SO(n), is not simply connected (except for SO(1), which is trivial). Thus it is sometimes advantageous, or even necessary, to work with a covering group of SO(n), the spin group, Spin(n). Likewise, O(n) has covering groups, the pin groups, Pin(n). For n > 2, Spin(n) is simply connected and thus the universal covering group for SO(n). By far the most famous example of a spin group is Spin(3), which is nothing but SU(2), or the group of unit quaternions.
The Pin and Spin groups are found within Clifford algebras, which themselves can be built from orthogonal matrices.
Rectangular matrices
If Q is not a square matrix, then the conditions <math>Q^\mathrm{T}Q = I</math> and <math>QQ^\mathrm{T} = I</math> are not equivalent. The condition <math>Q^\mathrm{T}Q = I</math> says that the columns of Q are orthonormal. This can only happen if Q is an m × n matrix with n ≤ m (due to linear dependence). Similarly, <math>QQ^\mathrm{T} = I</math> says that the rows of Q are orthonormal, which requires m ≤ n.
There is no standard terminology for these matrices. They are variously called "semi-orthogonal matrices", "orthonormal matrices", "orthogonal matrices", and sometimes simply "matrices with orthonormal rows/columns".
For the case n ≤ m, matrices with orthonormal columns may be referred to as orthogonal k-frames and they are elements of the Stiefel manifold.
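A sketch (assuming NumPy; the tall matrix is an arbitrary example) of a matrix with orthonormal columns: <math>Q^\mathrm{T}Q</math> is the identity, while <math>QQ^\mathrm{T}</math> is only an orthogonal projection:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))           # tall: more rows than columns
Q, _ = np.linalg.qr(A)                    # 5 x 3 with orthonormal columns

print(np.allclose(Q.T @ Q, np.eye(3)))    # True: the columns are orthonormal
print(np.allclose(Q @ Q.T, np.eye(5)))    # False: the rows cannot also be orthonormal
P = Q @ Q.T
print(np.allclose(P @ P, P))              # True: Q Q^T projects onto the column space
</syntaxhighlight>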
See also
Notes
References
- Template:Citation
- Template:Citation
- Template:Citation
- Template:Citation
- Template:Citation [1]
- Template:Citation
- Template:Citation
- Template:Citation