{{Short description|Matrix with the same number of rows and columns}}
[[File:Arbitrary square matrix.gif|thumb|A square matrix of order 4. The entries <math>a_{ii}</math> form the [[main diagonal]] of a square matrix. For instance, the main diagonal of the 4×4 matrix above contains the elements {{math|1=''a''<sub>11</sub> = 9}}, {{math|1=''a''<sub>22</sub> = 11}}, {{math|1=''a''<sub>33</sub> = 4}}, {{math|1=''a''<sub>44</sub> = 10}}.]]
In [[mathematics]], a '''square matrix''' is a [[Matrix (mathematics)|matrix]] with the same number of rows and columns. An ''n''-by-''n'' matrix is known as a square matrix of order {{nowrap|<math>n</math>.}} Any two square matrices of the same order can be added and multiplied.

Square matrices are often used to represent simple [[linear transformation]]s, such as [[Shear mapping|shearing]] or [[Rotation (mathematics)|rotation]]. For example, if <math>R</math> is a square matrix representing a rotation ([[rotation matrix]]) and <math>\mathbf{v}</math> is a [[column vector]] describing the [[Position (vector)|position]] of a point in space, the product <math>R\mathbf{v}</math> yields another column vector describing the position of that point after that rotation. If <math>\mathbf{v}</math> is a [[row vector]], the same transformation can be obtained using {{nowrap|<math>\mathbf{v} R^{\mathsf T}</math>,}} where <math>R^{\mathsf T}</math> is the [[transpose]] of {{nowrap|<math>R</math>.}}

==Main diagonal==
{{Main|Main diagonal}}
The entries <math>a_{ii}</math> ({{math|1=''i'' = 1, ..., ''n''}}) form the [[main diagonal]] of a square matrix. They lie on the imaginary line that runs from the top left corner to the bottom right corner of the matrix. For instance, the main diagonal of the 4×4 matrix above contains the elements {{math|1=''a''<sub>11</sub> = 9}}, {{math|1=''a''<sub>22</sub> = 11}}, {{math|1=''a''<sub>33</sub> = 4}}, {{math|1=''a''<sub>44</sub> = 10}}.

The diagonal of a square matrix from the top right to the bottom left corner is called the ''antidiagonal'' or ''counterdiagonal''.

==Special kinds==
{| class="wikitable" style="float:right; margin:0ex 0ex 2ex 2ex;"
|-
! Name !! Example with ''n'' = 3
|-
| [[Diagonal matrix]] || style="text-align:center;" | <math>\begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}</math>
|-
| [[Lower triangular matrix]] || style="text-align:center;" | <math>\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}</math>
|-
| [[Upper triangular matrix]] || style="text-align:center;" | <math>\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}</math>
|}

===Diagonal or triangular matrix===
If all entries outside the main diagonal are zero, <math>A</math> is called a [[diagonal matrix]]. If all entries below (resp. above) the main diagonal are zero, <math>A</math> is called an upper (resp. lower) [[triangular matrix]].

===Identity matrix===
The [[identity matrix]] <math>I_n</math> of size <math>n</math> is the <math>n \times n</math> matrix in which all the elements on the [[main diagonal]] are equal to 1 and all other elements are equal to 0, e.g.
<math display="block">
I_1 = \begin{bmatrix} 1 \end{bmatrix},\ 
I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\ 
\ldots,\ 
I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.
</math>
It is a square matrix of order {{nowrap|<math>n</math>,}} and also a special kind of [[diagonal matrix]]. The term ''identity matrix'' refers to the property of matrix multiplication that
<math display="block">I_m A = A I_n = A</math>
for any <math>m \times n</math> matrix {{nowrap|<math>A</math>.}}
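For example, for a 2×3 matrix,
<math display="block">I_2 \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} I_3.</math>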
===Invertible matrix and its inverse===
A square matrix <math>A</math> is called ''[[invertible matrix|invertible]]'' or ''non-singular'' if there exists a matrix <math>B</math> such that<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition I.2.28 }}</ref><ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition I.5.13 }}</ref>
<math display="block">AB = BA = I_n.</math>
If <math>B</math> exists, it is unique and is called the ''[[inverse matrix]]'' of {{nowrap|<math>A</math>,}} denoted {{nowrap|<math>A^{-1}</math>.}}
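For example, a 2×2 matrix
<math display="block">A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}</math>
is invertible if and only if {{nowrap|{{math|''ad'' − ''bc'' ≠ 0}}}} (this quantity is the [[determinant]] of {{mvar|A}}, discussed below), in which case
<math display="block">A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix},</math>
as can be checked by computing {{nowrap|1={{math|''AA''<sup>−1</sup> = ''A''<sup>−1</sup>''A'' = ''I''<sub>2</sub>}}.}}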
===Symmetric or skew-symmetric matrix===
A square matrix <math>A</math> that is equal to its transpose, i.e., {{nowrap|<math>A^{\mathsf T} = A</math>,}} is a [[symmetric matrix]]. If instead {{nowrap|<math>A^{\mathsf T} = -A</math>,}} then <math>A</math> is called a [[skew-symmetric matrix]].

For a complex square matrix {{nowrap|<math>A</math>,}} often the appropriate analogue of the transpose is the [[conjugate transpose]] {{nowrap|<math>A^*</math>,}} defined as the transpose of the [[complex conjugate]] of {{nowrap|<math>A</math>.}} A complex square matrix <math>A</math> satisfying <math>A^* = A</math> is called a [[Hermitian matrix]]. If instead {{nowrap|<math>A^* = -A</math>,}} then <math>A</math> is called a [[skew-Hermitian matrix]].

By the [[spectral theorem]], real symmetric (or complex Hermitian) matrices have an orthogonal (or unitary) [[eigenbasis]]; i.e., every vector is expressible as a [[linear combination]] of eigenvectors. In both cases, all eigenvalues are real.<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Theorem 2.5.6 }}</ref>

===Definite matrix===
{| class="wikitable" style="float:right; text-align:center; margin:0ex 0ex 2ex 2ex;"
|-
! [[Positive definite matrix|Positive definite]] !! [[Indefinite matrix|Indefinite]]
|-
| <math>\begin{bmatrix} 1/4 & 0 \\ 0 & 1 \end{bmatrix}</math>
| <math>\begin{bmatrix} 1/4 & 0 \\ 0 & -1/4 \end{bmatrix}</math>
|-
| {{math|1=''Q''(''x'',''y'') = 1/4 ''x''<sup>2</sup> + ''y''<sup>2</sup>}}
| {{math|1=''Q''(''x'',''y'') = 1/4 ''x''<sup>2</sup> − 1/4 ''y''<sup>2</sup>}}
|-
| [[File:Ellipse in coordinate system with semi-axes labelled.svg|150px]] <br>Points such that {{math|1=''Q''(''x'', ''y'') = 1}} <br> ([[Ellipse]]).
| [[File:Hyperbola2 SVG.svg|100x100px]] <br> Points such that {{math|1=''Q''(''x'', ''y'') = 1}} <br> ([[Hyperbola]]).
|}
A symmetric {{math|''n''×''n''}}-matrix {{mvar|A}} is called ''[[positive-definite matrix|positive-definite]]'' (respectively negative-definite; indefinite) if for all nonzero vectors <math>\mathbf{x} \in \mathbb{R}^n</math> the associated [[quadratic form]] given by
<math display="block" id="quadratic_forms">Q(\mathbf{x}) = \mathbf{x}^\mathsf{T} A \mathbf{x}</math>
takes only positive values (respectively only negative values; both some negative and some positive values).<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Chapter 7 }}</ref> If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.

A symmetric matrix is positive-definite if and only if all its eigenvalues are positive.<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Theorem 7.2.1 }}</ref> The table at the right shows two possibilities for 2×2 matrices.

Allowing as input two different vectors instead yields the [[bilinear form]] associated to {{mvar|A}}:<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Example 4.0.6, p. 169 }}</ref>
<math display="block">B_A(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\mathsf{T} A \mathbf{y}.</math>

===Orthogonal matrix===
An ''[[orthogonal matrix]]'' is a square matrix with [[real number|real]] entries whose columns and rows are [[orthogonal]] [[unit vector]]s (i.e., [[orthonormality|orthonormal]] vectors). Equivalently, a matrix ''A'' is orthogonal if its [[transpose]] is equal to its [[inverse matrix|inverse]]:
<math display="block">A^\textsf{T} = A^{-1},</math>
which entails
<math display="block">A^\textsf{T} A = A A^\textsf{T} = I,</math>
where ''I'' is the [[identity matrix]].

An orthogonal matrix {{mvar|A}} is necessarily [[Invertible matrix|invertible]] (with inverse {{math|1=''A''<sup>−1</sup> = ''A''<sup>T</sup>}}), [[Unitary matrix|unitary]] ({{math|1=''A''<sup>−1</sup> = ''A''*}}), and [[Normal matrix|normal]] ({{math|1=''A''*''A'' = ''AA''*}}). The [[determinant]] of any orthogonal matrix is either +1 or −1. The [[special orthogonal group]] <math>\operatorname{SO}(n)</math> consists of the {{math|''n'' × ''n''}} orthogonal matrices with determinant +1. The [[complex number|complex]] analogue of an orthogonal matrix is a [[unitary matrix]].
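For example, the [[rotation matrix]]
<math display="block">R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}</math>
is orthogonal for every angle {{mvar|θ}}: its columns are orthonormal, and <math>R_\theta^\mathsf{T} R_\theta = I_2</math> follows from the identity {{nowrap|1={{math|cos<sup>2</sup>''θ'' + sin<sup>2</sup>''θ'' = 1}}.}} Since {{nowrap|1={{math|det ''R<sub>θ</sub>'' = cos<sup>2</sup>''θ'' + sin<sup>2</sup>''θ'' = 1}},}} it belongs to <math>\operatorname{SO}(2)</math>.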
===Normal matrix===
A real or complex square matrix <math>A</math> is called ''[[normal matrix|normal]]'' if {{nowrap|<math>A^* A = AA^*</math>.}} If a real square matrix is symmetric, skew-symmetric, or orthogonal, then it is normal. If a complex square matrix is Hermitian, skew-Hermitian, or unitary, then it is normal. Normal matrices are of interest mainly because they include the types of matrices just listed and form the broadest class of matrices for which the [[spectral theorem]] holds.<ref>Artin, ''Algebra'', 2nd edition, Pearson, 2018, section 8.6.</ref>

==Operations==
===Trace===
The [[trace of a matrix|trace]] tr(''A'') of a square matrix ''A'' is the sum of its diagonal entries. While matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors:
<math display="block">\operatorname{tr}(AB) = \operatorname{tr}(BA).</math>
This is immediate from the definition of matrix multiplication:
<math display="block">\operatorname{tr}(AB) = \sum_{i=1}^m \sum_{j=1}^n A_{ij} B_{ji} = \operatorname{tr}(BA).</math>
Also, the trace of a matrix is equal to that of its transpose, i.e.,
<math display="block">\operatorname{tr}(A) = \operatorname{tr}(A^{\mathsf T}).</math>

===Determinant===
{{Main|Determinant}}
[[File:Determinant example.svg|thumb|300px|right|A linear transformation on <math>\mathbb{R}^2</math> given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the [[orientation (mathematics)|orientation]], since it turns the counterclockwise orientation of the vectors to a clockwise one.]]
The ''determinant'' <math>\det(A)</math> or <math>|A|</math> of a square matrix <math>A</math> is a number encoding certain properties of the matrix. A matrix is invertible [[if and only if]] its determinant is nonzero. Its [[absolute value]] equals the area (in <math>\mathbb{R}^2</math>) or volume (in <math>\mathbb{R}^3</math>) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.

The determinant of 2×2 matrices is given by
<math display="block">\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.</math>
The determinant of 3×3 matrices involves 6 terms ([[rule of Sarrus]]). The lengthier [[Leibniz formula for determinants|Leibniz formula]] generalizes these two formulae to all dimensions.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.2.1 }}</ref>

The determinant of a product of square matrices equals the product of their determinants:<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Theorem III.2.12 }}</ref>
<math display="block">\det(AB) = \det(A) \cdot \det(B).</math>
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns multiplies the determinant by −1.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Corollary III.2.16 }}</ref> Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the [[Laplace expansion]] expresses the determinant in terms of [[minor (linear algebra)|minors]], i.e., determinants of smaller matrices.<ref>{{Harvard citations |last1=Mirsky |year=1990 |nb=yes |loc=Theorem 1.4.1 }}</ref> This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1×1 matrix, which is its unique entry, or even the determinant of a 0×0 matrix, which is 1), which can be seen to be equivalent to the Leibniz formula.
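For example, expanding along the first row,
<math display="block">\det \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix}
= 1 \cdot \det \begin{bmatrix} 5 & 6 \\ 8 & 10 \end{bmatrix}
- 2 \cdot \det \begin{bmatrix} 4 & 6 \\ 7 & 10 \end{bmatrix}
+ 3 \cdot \det \begin{bmatrix} 4 & 5 \\ 7 & 8 \end{bmatrix}
= 1 \cdot 2 - 2 \cdot (-2) + 3 \cdot (-3) = -3.</math>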
Determinants can be used to solve [[linear system]]s using [[Cramer's rule]], where each of the system's variables is given by a quotient of the determinants of two related square matrices.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Theorem III.3.18 }}</ref>

===Eigenvalues and eigenvectors===
{{Main|Eigenvalue, eigenvector and eigenspace|l1=Eigenvalues and eigenvectors}}
A number {{mvar|λ}} and a non-zero vector <math>\mathbf{v}</math> satisfying
<math display="block">A \mathbf{v} = \lambda \mathbf{v}</math>
are called an ''eigenvalue'' and an ''eigenvector'' of {{nowrap|<math>A</math>,}} respectively.<ref>''Eigen'' means "own" in [[German language|German]] and in [[Dutch language|Dutch]].</ref><ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.4.1 }}</ref> The number {{mvar|λ}} is an eigenvalue of an {{math|''n''×''n''}}-matrix {{mvar|A}} if and only if {{math|''A'' − ''λI''<sub>''n''</sub>}} is not invertible, which is [[logical equivalence|equivalent]] to<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.4.9 }}</ref>
<math display="block">\det(A - \lambda I) = 0.</math>
The polynomial {{math|''p''<sub>''A''</sub>}} in an [[indeterminate (variable)|indeterminate]] {{math|''X''}} given by evaluation of the determinant {{math|det(''XI''<sub>''n''</sub> − ''A'')}} is called the [[characteristic polynomial]] of {{mvar|A}}. It is a [[monic polynomial]] of [[degree of a polynomial|degree]] ''n''. Therefore the polynomial equation {{math|1=''p''<sub>''A''</sub>(''λ'') = 0}} has at most ''n'' different solutions, i.e., eigenvalues of the matrix.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Corollary III.4.10 }}</ref> They may be complex even if the entries of {{mvar|A}} are real. According to the [[Cayley–Hamilton theorem]], {{math|1=''p''<sub>''A''</sub>(''A'') = 0}}, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the [[zero matrix]].
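For example, the symmetric matrix
<math display="block">A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}</math>
has characteristic polynomial <math>\det(\lambda I - A) = (\lambda - 2)^2 - 1 = (\lambda - 1)(\lambda - 3)</math>, so its eigenvalues are 1 and 3, with corresponding eigenvectors <math>(1, -1)^{\mathsf T}</math> and <math>(1, 1)^{\mathsf T}</math>. Consistent with the Cayley–Hamilton theorem, <math>A^2 - 4A + 3I_2 = 0</math>.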
==See also==
*[[Cartan matrix]]
*[[Cayley–Hamilton theorem]]

==Notes==
{{Reflist|colwidth=30em}}

==References==
* {{Citation |last1=Brown |first1=William C. |title=Matrices and vector spaces |publisher=[[Marcel Dekker]] |location=New York, NY |isbn=978-0-8247-8419-5 |year=1991 |url-access=registration |url=https://archive.org/details/matricesvectorsp0000brow }}
* {{Citation |last1=Horn |first1=Roger A. |author1-link=Roger Horn |last2=Johnson |first2=Charles R. |author2-link=Charles Royal Johnson |title=Matrix Analysis |publisher=Cambridge University Press |isbn=978-0-521-38632-6 |year=1985 }}
* {{Citation |last1=Mirsky |first1=Leonid |author-link=Leon Mirsky |title=An Introduction to Linear Algebra |url=https://books.google.com/books?id=ULMmheb26ZcC&dq=linear+algebra+determinant&pg=PA1 |publisher=Courier Dover Publications |isbn=978-0-486-66434-7 |year=1990 }}

==External links==
*{{Commonscat-inline}}

{{linear algebra}}

[[Category:Matrices (mathematics)|*]]