== Transpose of a matrix ==
{{hatnote|This article assumes that matrices are taken over a commutative ring. These results may not hold in the non-commutative case.}}

=== Definition ===
The transpose of a matrix {{math|'''A'''}}, denoted by {{math|'''A'''<sup>T</sup>}},<ref name="Whitelaw1991">{{cite book|author=T.A. Whitelaw|title=Introduction to Linear Algebra, 2nd edition|url=https://books.google.com/books?id=6M_kDzA7-qIC&q=transpose|date=1 April 1991|publisher=CRC Press|isbn=978-0-7514-0159-2}}</ref> {{math|{{sup|⊤}}'''A'''}}, {{math|'''A'''{{sup|⊤}}}}, <math>A^{\intercal}</math>,<ref>{{Cite web|title=Transpose of a Matrix Product (ProofWiki)|url=https://proofwiki.org/wiki/Transpose_of_Matrix_Product|access-date=4 Feb 2021|website=ProofWiki}}</ref><ref>{{Cite web|title=What is the best symbol for vector/matrix transpose?|url=https://tex.stackexchange.com/questions/30619/what-is-the-best-symbol-for-vector-matrix-transpose|access-date=4 Feb 2021|website=[[Stack Exchange]]}}</ref> {{math|'''A′'''}},<ref>{{Cite web|last=Weisstein|first=Eric W.|title=Transpose|url=https://mathworld.wolfram.com/Transpose.html|access-date=2020-09-08|website=mathworld.wolfram.com|language=en}}</ref> {{math|'''A'''<sup>tr</sup>}}, {{math|<sup>t</sup>'''A'''}} or {{math|'''A'''<sup>t</sup>}}, may be constructed by any one of the following methods:
#[[Reflection (mathematics)|Reflect]] {{math|'''A'''}} over its [[main diagonal]] (which runs from top-left to bottom-right) to obtain {{math|'''A'''<sup>T</sup>}}
#Write the rows of {{math|'''A'''}} as the columns of {{math|'''A'''<sup>T</sup>}}
#Write the columns of {{math|'''A'''}} as the rows of {{math|'''A'''<sup>T</sup>}}

Formally, the {{mvar|i}}-th row, {{mvar|j}}-th column element of {{math|'''A'''<sup>T</sup>}} is the {{mvar|j}}-th row, {{mvar|i}}-th column element of {{math|'''A'''}}:
:<math>\left[\mathbf{A}^\operatorname{T}\right]_{ij} = \left[\mathbf{A}\right]_{ji}.</math>

If {{math|'''A'''}} is an {{math|{{nowrap|''m'' × ''n''}}}} matrix, then {{math|'''A'''<sup>T</sup>}} is an {{math|{{nowrap|''n'' × ''m''}}}} matrix.

In the case of square matrices, {{math|'''A'''<sup>T</sup>}} may also denote the {{math|T}}th power of the matrix {{math|'''A'''}}. To avoid possible confusion, many authors use left superscripts, that is, they denote the transpose as {{math|<sup>T</sup>'''A'''}}. An advantage of this notation is that no parentheses are needed when exponents are involved: as {{math|1=({{sup|T}}'''A'''){{sup|''n''}} = {{sup|T}}('''A'''{{sup|''n''}})}}, the notation {{math|{{sup|T}}'''A'''{{sup|''n''}}}} is not ambiguous. In this article, this confusion is avoided by never using the symbol {{math|T}} as a [[variable (mathematics)|variable]] name.
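The element-wise definition translates directly into code. The following minimal Python sketch (plain nested lists, no external libraries; the function name <code>transpose</code> is illustrative) builds {{math|'''A'''<sup>T</sup>}} by reading entry {{math|(''j'', ''i'')}} of {{math|'''A'''}} into entry {{math|(''i'', ''j'')}} of the result:

<syntaxhighlight lang="python">
def transpose(A):
    """Return the transpose of A, a matrix given as a list of rows."""
    m, n = len(A), len(A[0])      # A is m x n
    # [A^T]_{ji} = [A]_{ij}: row j of the result is column j of A.
    return [[A[i][j] for i in range(m)] for j in range(n)]

A = [[1, 2],
     [3, 4],
     [5, 6]]                      # a 3 x 2 matrix
print(transpose(A))               # [[1, 3, 5], [2, 4, 6]], a 2 x 3 matrix
</syntaxhighlight>

The printed result agrees with the third worked example below.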
==== Matrix definitions involving transposition ====
A square matrix whose transpose is equal to itself is called a ''[[symmetric matrix]]''; that is, {{math|'''A'''}} is symmetric if
:<math>\mathbf{A}^{\operatorname{T}} = \mathbf{A}.</math>

A square matrix whose transpose is equal to its negative is called a ''[[skew-symmetric matrix]]''; that is, {{math|'''A'''}} is skew-symmetric if
:<math>\mathbf{A}^{\operatorname{T}} = -\mathbf{A}.</math>

A square [[complex number|complex]] matrix whose transpose is equal to the matrix with every entry replaced by its [[complex conjugate]] (denoted here with an overline) is called a ''[[Hermitian matrix]]'' (equivalent to the matrix being equal to its [[conjugate transpose]]); that is, {{math|'''A'''}} is Hermitian if
:<math>\mathbf{A}^{\operatorname{T}} = \overline{\mathbf{A}}.</math>

A square [[complex number|complex]] matrix whose transpose is equal to the negation of its complex conjugate is called a ''[[skew-Hermitian matrix]]''; that is, {{math|'''A'''}} is skew-Hermitian if
:<math>\mathbf{A}^{\operatorname{T}} = -\overline{\mathbf{A}}.</math>

A square matrix whose transpose is equal to its [[Inverse matrix|inverse]] is called an ''[[orthogonal matrix]]''; that is, {{math|'''A'''}} is orthogonal if
:<math>\mathbf{A}^{\operatorname{T}} = \mathbf{A}^{-1}.</math>

A square complex matrix whose transpose is equal to its conjugate inverse is called a ''[[unitary matrix]]''; that is, {{math|'''A'''}} is unitary if
:<math>\mathbf{A}^{\operatorname{T}} = \overline{\mathbf{A}^{-1}}.</math>

=== Examples ===
*<math>\begin{bmatrix} 1 & 2 \end{bmatrix}^{\operatorname{T}} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}</math>
*<math>\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^{\operatorname{T}} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}</math>
*<math>\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^{\operatorname{T}} = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}</math>
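Each of the matrix classes above is easy to test numerically. A brief NumPy sketch (the helper names are illustrative; <code>np.allclose</code> is used to absorb floating-point round-off):

<syntaxhighlight lang="python">
import numpy as np

def is_symmetric(A):       # A^T == A
    return np.allclose(A.T, A)

def is_skew_symmetric(A):  # A^T == -A
    return np.allclose(A.T, -A)

def is_hermitian(A):       # A^T == conj(A), i.e. A equals its conjugate transpose
    return np.allclose(A.T, np.conj(A))

def is_orthogonal(A):      # A^T == A^{-1}, i.e. A^T A is the identity
    return np.allclose(A.T @ A, np.eye(A.shape[0]))

S = np.array([[1.0, 2.0], [2.0, 3.0]])
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
print(is_symmetric(S), is_orthogonal(R))  # True True
</syntaxhighlight>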
=== Properties ===
Let {{math|'''A'''}} and {{math|'''B'''}} be matrices and {{mvar|c}} be a [[Scalar (mathematics)|scalar]] (several of these identities are verified numerically in the sketch after this list).
*<math>\left(\mathbf{A}^\operatorname{T} \right)^\operatorname{T} = \mathbf{A}.</math>
*:The operation of taking the transpose is an [[Involution (mathematics)|involution]] (self-[[Inverse matrix|inverse]]).
*<math>\left(\mathbf{A} + \mathbf{B}\right)^\operatorname{T} = \mathbf{A}^\operatorname{T} + \mathbf{B}^\operatorname{T}.</math>
*:The transpose respects [[Matrix addition|addition]].
*<math>\left(c \mathbf{A}\right)^\operatorname{T} = c (\mathbf{A}^\operatorname{T}).</math>
*:The transpose of a scalar is the same scalar. Together with the preceding property, this implies that the transpose is a [[linear map]] from the [[Vector space|space]] of {{math|{{nowrap|''m'' × ''n''}}}} matrices to the space of {{math|{{nowrap|''n'' × ''m''}}}} matrices.
*<math>\left(\mathbf{A B}\right)^\operatorname{T} = \mathbf{B}^\operatorname{T} \mathbf{A}^\operatorname{T}.</math>
*:The order of the factors reverses. By induction, this result extends to the general case of multiple matrices, so
*::{{math|('''A'''<sub>1</sub>'''A'''<sub>2</sub>...'''A'''<sub>''k''−1</sub>'''A'''<sub>''k''</sub>)<sup>T</sup> {{=}} '''A'''<sub>''k''</sub><sup>T</sup>'''A'''<sub>''k''−1</sub><sup>T</sup>⋯'''A'''<sub>2</sub><sup>T</sup>'''A'''<sub>1</sub><sup>T</sup>}}.
*<math>\det \left(\mathbf{A}^\operatorname{T}\right) = \det(\mathbf{A}).</math>
*:The [[determinant]] of a square matrix is the same as the determinant of its transpose.
*The [[dot product]] of two column vectors {{math|'''a'''}} and {{math|'''b'''}} can be computed as the single entry of the matrix product <math display=block>\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^{\operatorname{T}} \mathbf{b}.</math>
*If {{math|'''A'''}} has only real entries, then {{math|'''A'''<sup>T</sup>'''A'''}} is a [[positive-semidefinite matrix]].
*<math>\left(\mathbf{A}^\operatorname{T} \right)^{-1} = \left(\mathbf{A}^{-1} \right)^\operatorname{T}.</math>
*:The transpose of an invertible matrix is also invertible, and its inverse is the transpose of the inverse of the original matrix.<br>The notation {{math|'''A'''<sup>−T</sup>}} is sometimes used to represent either of these equivalent expressions.
*If {{math|'''A'''}} is a square matrix, then its [[Eigenvalue, eigenvector and eigenspace|eigenvalues]] are equal to the eigenvalues of its transpose, since they share the same [[characteristic polynomial]].
*<math>\left(\mathbf{A}\mathbf{a}\right) \cdot \mathbf{b} = \mathbf{a} \cdot \left(\mathbf{A}^\operatorname{T}\mathbf{b}\right)</math> for two column vectors <math>\mathbf{a}, \mathbf{b}</math> and the standard [[dot product]].
*Over any field <math>k</math>, a square matrix <math>\mathbf{A}</math> is [[matrix similarity|similar]] to <math>\mathbf{A}^\operatorname{T}</math>.
*:This implies that <math>\mathbf{A}</math> and <math>\mathbf{A}^\operatorname{T}</math> have the same [[invariant factors]], which implies they share the same minimal polynomial, characteristic polynomial, and eigenvalues, among other properties.
*:A proof of this property uses the following two observations.
*:*Let <math>\mathbf{A}</math> and <math>\mathbf{B}</math> be <math>n\times n</math> matrices over some base field <math>k</math> and let <math>L</math> be a [[field extension]] of <math>k</math>. If <math>\mathbf{A}</math> and <math>\mathbf{B}</math> are similar as matrices over <math>L</math>, then they are similar over <math>k</math>. In particular this applies when <math>L</math> is the [[algebraic closure]] of <math>k</math>.
*:*If <math>\mathbf{A}</math> is a matrix over an algebraically closed field in [[Jordan normal form]] with respect to some basis, then <math>\mathbf{A}</math> is similar to <math>\mathbf{A}^\operatorname{T}</math>. This further reduces to proving the same fact when <math>\mathbf{A}</math> is a single Jordan block, which is a straightforward exercise.
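As a concrete check, the following NumPy sketch verifies several of the identities above on random matrices (the shapes and the seed are arbitrary; a random Gaussian square matrix is invertible with probability one):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
M = rng.standard_normal((4, 2))
C = rng.standard_normal((3, 3))
c = 2.5

assert np.allclose(A.T.T, A)                     # (A^T)^T = A (involution)
assert np.allclose((A + B).T, A.T + B.T)         # transpose respects addition
assert np.allclose((c * A).T, c * A.T)           # ... and scalar multiplication
assert np.allclose((A @ M).T, M.T @ A.T)         # (AB)^T = B^T A^T (order reverses)
assert np.isclose(np.linalg.det(C.T), np.linalg.det(C))     # det(C^T) = det(C)
assert np.allclose(np.linalg.inv(C.T), np.linalg.inv(C).T)  # (C^T)^{-1} = (C^{-1})^T
</syntaxhighlight>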
=== Products ===
If {{math|'''A'''}} is an {{math|{{nowrap|''m'' × ''n''}}}} matrix and {{math|'''A'''<sup>T</sup>}} is its transpose, then the result of [[matrix multiplication]] with these two matrices gives two square matrices: {{math|'''A A'''<sup>T</sup>}} is {{math|{{nowrap|''m'' × ''m''}}}} and {{math|'''A'''<sup>T</sup> '''A'''}} is {{math|{{nowrap|''n'' × ''n''}}}}. Furthermore, these products are [[symmetric matrices]]. Indeed, the matrix product {{math|'''A A'''<sup>T</sup>}} has entries that are the [[inner product]] of a row of {{math|'''A'''}} with a column of {{math|'''A'''<sup>T</sup>}}. But the columns of {{math|'''A'''<sup>T</sup>}} are the rows of {{math|'''A'''}}, so each entry corresponds to the inner product of two rows of {{math|'''A'''}}. If {{math|''p''<sub>''ij''</sub>}} is such an entry of the product, it is obtained from rows {{mvar|i}} and {{mvar|j}} in {{math|'''A'''}}. The entry {{math|''p''<sub>''ji''</sub>}} is also obtained from these rows, thus {{math|''p''<sub>''ij''</sub> {{=}} ''p''<sub>''ji''</sub>}}, and the product matrix ({{math|''p''<sub>''ij''</sub>}}) is symmetric.

Similarly, the product {{math|'''A'''<sup>T</sup> '''A'''}} is a symmetric matrix. A quick proof of the symmetry of {{math|'''A A'''<sup>T</sup>}} results from the fact that it is its own transpose:
:<math>\left(\mathbf{A} \mathbf{A}^\operatorname{T}\right)^\operatorname{T} = \left(\mathbf{A}^\operatorname{T}\right)^\operatorname{T} \mathbf{A}^\operatorname{T} = \mathbf{A} \mathbf{A}^\operatorname{T}.</math><ref>[[Gilbert Strang]] (2006) ''Linear Algebra and its Applications'' 4th edition, page 51, Thomson [[Brooks/Cole]] {{ISBN|0-03-010567-6}}</ref>

=== Implementation of matrix transposition on computers ===
{{See also|In-place matrix transposition}}
[[File:Row_and_column_major_order.svg|thumb|upright|Illustration of [[row- and column-major order]]]]
On a [[computer]], one can often avoid explicitly transposing a matrix in [[Random access memory|memory]] by simply accessing the same data in a different order. For example, [[software libraries]] for [[linear algebra]], such as [[BLAS]], typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.

However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in [[row- and column-major order|row-major order]], the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a [[fast Fourier transform]] algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing [[memory locality]]. A tiled sketch of this idea is given at the end of this section.

Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an {{math|{{nowrap|''n'' × ''m''}}}} matrix [[in-place]], with [[Big O notation|O(1)]] additional storage, or at most with storage much less than {{math|''mn''}}. For {{math|''n'' ≠ ''m''}}, this involves a complicated [[permutation]] of the data elements that is non-trivial to implement in-place. Therefore, efficient [[in-place matrix transposition]] has been the subject of numerous research publications in [[computer science]], starting in the late 1950s, and several algorithms have been developed.
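As an illustration of the locality argument (an out-of-place tiled transpose, not the harder in-place problem), the following Python/NumPy sketch copies the matrix one tile at a time, so that each burst of reads and writes stays within a small, cache-friendly region; the block size is an arbitrary tuning parameter. Note that NumPy's own <code>A.T</code> merely returns a re-indexed view of the same data, which corresponds to the "access the same data in a different order" strategy described above.

<syntaxhighlight lang="python">
import numpy as np

def blocked_transpose(A, block=64):
    """Out-of-place transpose, processed in block x block tiles.

    Tiling keeps each burst of reads from A and writes to B inside a
    small region of memory, improving cache locality on large matrices.
    """
    m, n = A.shape
    B = np.empty((n, m), dtype=A.dtype)
    for i in range(0, m, block):
        for j in range(0, n, block):
            # Slicing past the end is safe: NumPy clips the ranges.
            B[j:j + block, i:i + block] = A[i:i + block, j:j + block].T
    return B

A = np.arange(12.0).reshape(3, 4)
assert (blocked_transpose(A, block=2) == A.T).all()
</syntaxhighlight>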