Square matrix



[Figure: A square matrix of order 4. The entries <math>a_{ii}</math> form the main diagonal of a square matrix; for the 4×4 matrix shown, the main diagonal contains the elements <math>a_{11}</math>, <math>a_{22}</math>, <math>a_{33}</math>, <math>a_{44}</math>.]

In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order <math>n</math>. Any two square matrices of the same order can be added and multiplied.

Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if <math>R</math> is a square matrix representing a rotation (rotation matrix) and <math>\mathbf{v}</math> is a column vector describing the position of a point in space, the product <math>R\mathbf{v}</math> yields another column vector describing the position of that point after that rotation. If <math>\mathbf{v}</math> is a row vector, the same transformation can be obtained using <math>\mathbf{v}R^{\mathsf T}</math>, where <math>R^{\mathsf T}</math> is the transpose of <math>R</math>.
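
A minimal numerical sketch of this rotation example, assuming the NumPy library (an illustration, not part of the article):

<syntaxhighlight lang="python">
import numpy as np

# 2x2 rotation matrix for a 90-degree counterclockwise rotation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])  # position of a point
print(R @ v)              # rotated position, approximately [0, 1]
print(v @ R.T)            # same point transformed as a row vector
</syntaxhighlight>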

Main diagonal

The entries <math>a_{ii}</math> (<math>i = 1, \ldots, n</math>) form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix. For instance, the main diagonal of the 4×4 matrix above contains the elements <math>a_{11}</math>, <math>a_{22}</math>, <math>a_{33}</math>, <math>a_{44}</math>.

The diagonal of a square matrix from the top right to the bottom left corner is called the antidiagonal or counterdiagonal.
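
For instance, a short NumPy sketch (the 4×4 matrix here is an arbitrary illustration, not the one from the figure):

<syntaxhighlight lang="python">
import numpy as np

A = np.arange(16).reshape(4, 4)  # a 4x4 matrix with entries 0..15
print(np.diag(A))                # main diagonal: [ 0  5 10 15]
print(np.diag(np.fliplr(A)))     # antidiagonal:  [ 3  6  9 12]
</syntaxhighlight>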

Special kinds

Examples with <math>n = 3</math>:
Diagonal matrix <math>
     \begin{bmatrix}
          a_{11} & 0      & 0 \\
          0      & a_{22} & 0 \\
          0      & 0      & a_{33}
     \end{bmatrix}
 </math>
Lower triangular matrix <math>
     \begin{bmatrix}
          a_{11} & 0      & 0 \\
          a_{21} & a_{22} & 0 \\
          a_{31} & a_{32} & a_{33}
     \end{bmatrix}
 </math>
Upper triangular matrix <math>
     \begin{bmatrix}
          a_{11} & a_{12} & a_{13} \\
          0      & a_{22} & a_{23} \\
          0      & 0      & a_{33}
     \end{bmatrix}
 </math>

Diagonal or triangular matrix

If all entries outside the main diagonal are zero, <math>A</math> is called a diagonal matrix. If all entries below (resp. above) the main diagonal are zero, <math>A</math> is called an upper (resp. lower) triangular matrix.
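
These kinds can be illustrated with a small NumPy sketch (an assumed library, used only for demonstration) that extracts the diagonal, lower triangular, and upper triangular parts of an arbitrary matrix:

<syntaxhighlight lang="python">
import numpy as np

A = np.arange(1, 10).reshape(3, 3)  # [[1 2 3], [4 5 6], [7 8 9]]
print(np.diag(np.diag(A)))  # diagonal part: keeps a_ii, zeros elsewhere
print(np.tril(A))           # lower triangular part: zeros above the diagonal
print(np.triu(A))           # upper triangular part: zeros below the diagonal
</syntaxhighlight>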

Identity matrix

The identity matrix <math>I_n</math> of size <math>n</math> is the <math>n \times n</math> matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g. <math display="block">I_1 = \begin{bmatrix} 1 \end{bmatrix},\ I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\ \ldots,\ I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.</math> It is a square matrix of order <math>n</math>, and also a special kind of diagonal matrix. The term identity matrix refers to the property of matrix multiplication that <math display="block">I_m A = A I_n = A</math> for any <math>m \times n</math> matrix <math>A</math>.
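
A minimal sketch verifying this property numerically, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

A = np.random.rand(2, 3)       # an arbitrary 2x3 matrix
I2, I3 = np.eye(2), np.eye(3)  # identity matrices I_2 and I_3
print(np.allclose(I2 @ A, A))  # I_m A = A -> True
print(np.allclose(A @ I3, A))  # A I_n = A -> True
</syntaxhighlight>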

Invertible matrix and its inverse

A square matrix <math>A</math> is called invertible or non-singular if there exists a matrix <math>B</math> such that <math display="block">AB = BA = I_n.</math> If <math>B</math> exists, it is unique and is called the inverse matrix of <math>A</math>, denoted <math>A^{-1}</math>.
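
A short numerical sketch, assuming NumPy; the example matrix is arbitrary:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # invertible, since det(A) = 1 is nonzero
B = np.linalg.inv(A)        # the inverse matrix A^{-1}
print(np.allclose(A @ B, np.eye(2)))  # AB = I_n -> True
print(np.allclose(B @ A, np.eye(2)))  # BA = I_n -> True
</syntaxhighlight>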

Symmetric or skew-symmetric matrix

A square matrix <math>A</math> that is equal to its transpose, i.e., <math>A^{\mathsf T} = A</math>, is a symmetric matrix. If instead <math>A^{\mathsf T} = -A</math>, then <math>A</math> is called a skew-symmetric matrix.

For a complex square matrix <math>A</math>, often the appropriate analogue of the transpose is the conjugate transpose <math>A^*</math>, defined as the transpose of the complex conjugate of <math>A</math>. A complex square matrix <math>A</math> satisfying <math>A^*=A</math> is called a Hermitian matrix. If instead <math>A^* = -A</math>, then <math>A</math> is called a skew-Hermitian matrix.

By the spectral theorem, real symmetric (or complex Hermitian) matrices have an orthogonal (or unitary) eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.
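
A sketch of the spectral theorem for a small symmetric matrix, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # symmetric: A equals its transpose
w, V = np.linalg.eigh(A)    # eigh is intended for symmetric/Hermitian input
print(w)                    # all eigenvalues are real
print(np.allclose(V.T @ V, np.eye(2)))  # the eigenbasis is orthonormal -> True
</syntaxhighlight>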

Definite matrix

Positive definite: <math>\begin{bmatrix} 1/4 & 0 \\ 0 & 1 \end{bmatrix}</math>, with quadratic form <math>Q(x, y) = \tfrac{1}{4}x^2 + y^2</math>; the points satisfying <math>Q(x, y) = 1</math> form an ellipse.
Indefinite: <math>\begin{bmatrix} 1/4 & 0 \\ 0 & -1/4 \end{bmatrix}</math>, with quadratic form <math>Q(x, y) = \tfrac{1}{4}x^2 - \tfrac{1}{4}y^2</math>; the points satisfying <math>Q(x, y) = 1</math> form a hyperbola.

A symmetric <math>n \times n</math> matrix is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors <math>x \in \mathbb{R}^n</math> the associated quadratic form given by <math display="block">Q(\mathbf{x}) = \mathbf{x}^\mathsf{T} A \mathbf{x}</math> takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.

A symmetric matrix is positive-definite if and only if all its eigenvalues are positive. The table above shows two possibilities for 2×2 matrices.

Allowing as input two different vectors instead yields the bilinear form associated to <math>A</math>: <math display="block">B_A(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\mathsf{T} A \mathbf{y}.</math>
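
The eigenvalue criterion above can be sketched in NumPy (an assumed library; the classification below addresses the symmetric case):

<syntaxhighlight lang="python">
import numpy as np

def definiteness(A):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    if np.all(w > 0):
        return "positive-definite"
    if np.all(w < 0):
        return "negative-definite"
    if np.all(w >= 0):
        return "positive-semidefinite"
    if np.all(w <= 0):
        return "negative-semidefinite"
    return "indefinite"

print(definiteness(np.diag([0.25, 1.0])))    # positive-definite
print(definiteness(np.diag([0.25, -0.25])))  # indefinite
</syntaxhighlight>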

Orthogonal matrix

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix <math>A</math> is orthogonal if its transpose is equal to its inverse: <math display="block">A^\textsf{T} = A^{-1}, </math> which entails <math display="block">A^\textsf{T} A = A A^\textsf{T} = I, </math> where <math>I</math> is the identity matrix.

An orthogonal matrix <math>A</math> is necessarily invertible (with inverse <math>A^{-1} = A^{\mathsf T}</math>), unitary (<math>A^{-1} = A^*</math>), and normal (<math>A^*A = AA^*</math>). The determinant of any orthogonal matrix is either +1 or −1. The special orthogonal group <math>\operatorname{SO}(n)</math> consists of the <math>n \times n</math> orthogonal matrices with determinant +1.
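
A sketch checking these properties for a rotation matrix, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

theta = np.pi / 3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a rotation matrix is orthogonal

print(np.allclose(Q.T @ Q, np.eye(2)))     # Q^T Q = I -> True
print(np.allclose(Q.T, np.linalg.inv(Q)))  # transpose equals inverse -> True
print(np.linalg.det(Q))                    # +1, so Q lies in SO(2)
</syntaxhighlight>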

The complex analogue of an orthogonal matrix is a unitary matrix.

Normal matrix

A real or complex square matrix <math>A</math> is called normal if <math>A^*A = AA^*</math>. If a real square matrix is symmetric, skew-symmetric, or orthogonal, then it is normal. If a complex square matrix is Hermitian, skew-Hermitian, or unitary, then it is normal. Normal matrices are of interest mainly because they include the types of matrices just listed and form the broadest class of matrices for which the spectral theorem holds.<ref>Artin, Algebra, 2nd edition, Pearson, 2018, section 8.6.</ref>
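
A minimal sketch contrasting a normal matrix with a non-normal one, assuming NumPy; for real matrices the conjugate transpose reduces to the transpose:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])           # skew-symmetric, hence normal
print(np.allclose(A @ A.T, A.T @ A))  # A A^* = A^* A -> True

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # a shear matrix, which is not normal
print(np.allclose(B @ B.T, B.T @ B))  # -> False
</syntaxhighlight>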

Operations

Trace

The trace, tr(A), of a square matrix <math>A</math> is the sum of its diagonal entries. While matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors: <math display="block">\operatorname{tr}(AB) = \operatorname{tr}(BA).</math> This is immediate from the definition of matrix multiplication: <math display="block">\operatorname{tr}(AB) = \sum_{i=1}^m \sum_{j=1}^n A_{ij} B_{ji} = \operatorname{tr}(BA).</math> Also, the trace of a matrix is equal to that of its transpose, i.e., <math display="block">\operatorname{tr}(A) = \operatorname{tr}(A^{\mathrm T}).</math>
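
A numerical sketch of these trace identities, assuming NumPy; note that <math>AB</math> and <math>BA</math> may have different sizes yet equal traces:

<syntaxhighlight lang="python">
import numpy as np

A = np.random.rand(2, 3)  # arbitrary 2x3 matrix
B = np.random.rand(3, 2)  # arbitrary 3x2 matrix
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))      # tr(AB) = tr(BA) -> True
print(np.isclose(np.trace(B @ A), np.trace((B @ A).T)))  # tr(C) = tr(C^T) -> True
</syntaxhighlight>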

Determinant


[Figure: A linear transformation on <math>\mathbb{R}^2</math> given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.]

The determinant <math>\det(A)</math> or <math>|A|</math> of a square matrix <math>A</math> is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in <math>\mathbb{R}^2</math>) or volume (in <math>\mathbb{R}^3</math>) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.

The determinant of 2×2 matrices is given by <math display="block">\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.</math> The determinant of 3×3 matrices involves 6 terms (rule of Sarrus). The lengthier Leibniz formula generalizes these two formulae to all dimensions.

The determinant of a product of square matrices equals the product of their determinants: <math display="block">\det(AB) = \det(A) \cdot \det(B).</math> Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices. This expansion can be used for a recursive definition of determinants (taking as the base case the determinant of a 1×1 matrix, which is its unique entry, or even the determinant of a 0×0 matrix, which is 1), which can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the quotient of the determinants of two related square matrices equals the value of each of the system's variables.
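
A short sketch of the 2×2 formula and the product rule, assuming NumPy:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.linalg.det(A))  # ad - bc = 1*4 - 2*3 = -2 (up to rounding)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # det(AB) = det(A)det(B) -> True
</syntaxhighlight>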

Eigenvalues and eigenvectors

A number <math>\lambda</math> and a non-zero vector <math>\mathbf{v}</math> satisfying <math display="block">A \mathbf{v} = \lambda \mathbf{v}</math> are called an eigenvalue and an eigenvector of <math>A</math>, respectively.<ref>Eigen means "own" in German and in Dutch.</ref> The number <math>\lambda</math> is an eigenvalue of an <math>n \times n</math> matrix <math>A</math> if and only if <math>A - \lambda I_n</math> is not invertible, which is equivalent to <math display="block">\det(A-\lambda I) = 0.</math> The polynomial <math>p_A</math> in an indeterminate <math>X</math> given by evaluation of the determinant <math>\det(X I_n - A)</math> is called the characteristic polynomial of <math>A</math>. It is a monic polynomial of degree <math>n</math>. Therefore the polynomial equation <math>p_A(\lambda) = 0</math> has at most <math>n</math> different solutions, i.e., eigenvalues of the matrix. They may be complex even if the entries of <math>A</math> are real. According to the Cayley–Hamilton theorem, <math>p_A(A) = 0</math>; that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
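
A sketch verifying an eigenpair and the Cayley–Hamilton theorem for a 2×2 matrix, assuming NumPy; in the 2×2 case the characteristic polynomial is <math>p_A(X) = X^2 - \operatorname{tr}(A)X + \det(A)</math>:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eig(A)  # eigenvalues w, eigenvectors as columns of V
print(np.allclose(A @ V[:, 0], w[0] * V[:, 0]))  # A v = lambda v -> True

# Cayley-Hamilton in the 2x2 case: A^2 - tr(A) A + det(A) I = 0
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(p_of_A, np.zeros((2, 2))))     # the zero matrix -> True
</syntaxhighlight>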

See also

Notes

Template:Reflist
