== Properties ==

=== Characterization of the determinant ===
The determinant can be characterized by the following three key properties. To state these, it is convenient to regard an <math>n \times n</math> matrix ''A'' as being composed of its <math>n</math> columns, so denoted as
:<math>A = \big ( a_1, \dots, a_n \big ),</math>
where the [[column vector]] <math>a_i</math> (for each ''i'') is composed of the entries of the matrix in the ''i''-th column.
# <math>\det\left(I\right) = 1</math>, where <math>I</math> is an [[identity matrix]].
# The determinant is ''[[multilinear map|multilinear]]'': if the ''j''-th column of a matrix <math>A</math> is written as a [[linear combination]] <math>a_j = r \cdot v + w</math> of two [[column vector]]s ''v'' and ''w'' and a number ''r'', then the determinant of ''A'' is expressible as a similar linear combination:
#: <math>\begin{align}|A| &= \big| a_1, \dots, a_{j-1}, r \cdot v + w, a_{j+1}, \dots, a_n \big| \\ &= r \cdot | a_1, \dots, v, \dots, a_n | + | a_1, \dots, w, \dots, a_n | \end{align}</math>
# The determinant is ''[[alternating form|alternating]]'': whenever two columns of a matrix are identical, its determinant is 0:
#: <math>| a_1, \dots, v, \dots, v, \dots, a_n| = 0.</math>
If the determinant is defined using the Leibniz formula as above, these three properties can be proved by direct inspection of that formula. Some authors also approach the determinant directly using these three properties: it can be shown that there is exactly one function that assigns to any <math>n \times n</math> matrix ''A'' a number that satisfies these three properties.<ref>[[Serge Lang]], ''Linear Algebra'', 2nd Edition, Addison-Wesley, 1971, pp. 173, 191.</ref> This also shows that this more abstract approach to the determinant yields the same definition as the one using the Leibniz formula.

To see this, it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a [[standard basis]] vector. These determinants are either 0 (by property 3) or else ±1 (by property 1 and the column-interchange rule below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.{{citation needed|date=May 2021}}
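The Leibniz formula just mentioned can be made concrete in a few lines of code. The following is a minimal Python sketch (the names <code>parity</code> and <code>det_leibniz</code> are ad hoc, not from any library); it sums one signed product per permutation, so it takes <math>n!</math> steps and is for illustration only. The assertions at the end check properties 1 and 3 above on small examples.

<syntaxhighlight lang="python">
# Minimal sketch of the Leibniz formula: det(A) as a signed sum over
# all permutations. Pure Python; impractical beyond small n (n! terms).
from itertools import permutations

def parity(sigma):
    """Sign of a permutation given as a tuple of 0-based indices."""
    sign = 1
    seen = set()
    for start in range(len(sigma)):
        if start in seen:
            continue
        # follow the cycle containing `start`; a cycle of even length
        # has sign -1, since its sign is (-1)^(length - 1)
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = sigma[j]
            length += 1
        if length % 2 == 0:
            sign = -sign
    return sign

def det_leibniz(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = parity(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

assert det_leibniz([[1, 0], [0, 1]]) == 1   # property 1: det(I) = 1
assert det_leibniz([[2, 2], [3, 3]]) == 0   # property 3: equal columns
</syntaxhighlight>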
=== Immediate consequences ===
These rules have several further consequences:
* The determinant is a [[homogeneous function]], i.e., <math display="block">\det(cA) = c^n\det(A)</math> (for an <math>n \times n</math> matrix <math>A</math>).
* Interchanging any pair of columns of a matrix multiplies its determinant by −1. This follows from the determinant being multilinear and alternating (properties 2 and 3 above): <math display="block">|a_1, \dots, a_j, \dots, a_i, \dots, a_n| = - |a_1, \dots, a_i, \dots, a_j, \dots, a_n|.</math> This formula can be applied iteratively when several columns are swapped. For example, <math display="block">|a_3, a_1, a_2, a_4, \dots, a_n| = - |a_1, a_3, a_2, a_4, \dots, a_n| = |a_1, a_2, a_3, a_4, \dots, a_n|.</math> Yet more generally, any permutation of the columns multiplies the determinant by the [[parity of a permutation|sign]] of the permutation.
* If some column can be expressed as a linear combination of the ''other'' columns (i.e. the columns of the matrix form a [[Linearly independent|linearly dependent]] set), the determinant is 0. As a special case, this includes: if some column is such that all its entries are zero, then the determinant of that matrix is 0.
* Adding a scalar multiple of one column to ''another'' column does not change the value of the determinant. This is a consequence of multilinearity and of the determinant being alternating: by multilinearity the determinant changes by a multiple of the determinant of a matrix with two equal columns, and that determinant is 0, since the determinant is alternating.
* If <math>A</math> is a [[triangular matrix]], i.e. <math>a_{ij}=0</math> whenever <math>i>j</math> or, alternatively, whenever <math>i<j</math>, then its determinant equals the product of the diagonal entries: <math display="block">\det(A) = a_{11} a_{22} \cdots a_{nn} = \prod_{i=1}^n a_{ii}.</math> Indeed, such a matrix can be reduced, by appropriately adding multiples of the columns with fewer nonzero entries to those with more entries, to a [[diagonal matrix]] (without changing the determinant). For such a matrix, using the linearity in each column reduces it to the identity matrix, in which case the stated formula holds by the very first characterizing property of determinants. Alternatively, this formula can also be deduced from the Leibniz formula, since the only permutation <math>\sigma</math> which gives a non-zero contribution is the identity permutation.

==== Example ====
These characterizing properties and their consequences listed above are theoretically significant, but can also be used to compute determinants of concrete matrices. In fact, [[Gaussian elimination]] can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way. The following concrete example illustrates the computation of the determinant of the matrix <math>A</math> using that method:
:<math>A = \begin{bmatrix} -2 & -1 & 2 \\ 2 & 1 & 4 \\ -3 & 3 & -1 \end{bmatrix}.</math>

{| class="wikitable"
|+ Computation of the determinant of matrix <math>A</math>
|-
| Matrix || <math>B = \begin{bmatrix} -3 & -1 & 2 \\ 3 & 1 & 4 \\ 0 & 3 & -1 \end{bmatrix}</math> || <math>C = \begin{bmatrix} -3 & 5 & 2 \\ 3 & 13 & 4 \\ 0 & 0 & -1 \end{bmatrix}</math> || <math>D = \begin{bmatrix} 5 & -3 & 2 \\ 13 & 3 & 4 \\ 0 & 0 & -1 \end{bmatrix}</math> || <math>E = \begin{bmatrix} 18 & -3 & 2 \\ 0 & 3 & 4 \\ 0 & 0 & -1 \end{bmatrix}</math>
|-
| Obtained by || adding the second column to the first || adding 3 times the third column to the second || swapping the first two columns || adding <math>-\frac{13}{3}</math> times the second column to the first
|-
| Determinant || <math>|A| = |B|</math> || <math>|B| = |C|</math> || <math>|D| = -|C|</math> || <math>|E| = |D|</math>
|}

Combining these equalities gives <math>|A| = -|E| = -(18 \cdot 3 \cdot (-1)) = 54.</math>

=== Transpose ===
The determinant of the [[transpose]] of <math>A</math> equals the determinant of ''A'':
:<math>\det\left(A^\textsf{T}\right) = \det(A).</math>
This can be proven by inspecting the Leibniz formula.<ref>{{harvnb|Lang|1987|loc=§VI.7, Theorem 7.5}}</ref> This implies that in all the properties mentioned above, the word "column" can be replaced by "row" throughout. For example, viewing an {{math|''n'' × ''n''}} matrix as being composed of ''n'' rows, the determinant is an ''n''-linear function.
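The procedure in the example above can be stated as an algorithm. The example works with columns; since <math>\det(A^\textsf{T}) = \det(A)</math>, the same rules apply to rows, which is how the following Python sketch proceeds (the name <code>det_gauss</code> is ad hoc). It uses exact arithmetic from the standard <code>fractions</code> module, flips the sign at each row swap, and finally multiplies the diagonal of the resulting triangular matrix.

<syntaxhighlight lang="python">
# Sketch of the Gaussian-elimination method: reduce to triangular form,
# tracking only the operations that change the determinant (a row swap
# flips the sign; adding a multiple of one row to another changes nothing).
from fractions import Fraction

def det_gauss(rows):
    A = [[Fraction(x) for x in row] for row in rows]
    n = len(A)
    sign = 1
    for k in range(n):
        # find a pivot in column k at or below row k
        pivot = next((r for r in range(k, n) if A[r][k] != 0), None)
        if pivot is None:
            return Fraction(0)   # columns linearly dependent: det = 0
        if pivot != k:
            A[k], A[pivot] = A[pivot], A[k]
            sign = -sign         # a swap multiplies the determinant by -1
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            A[r] = [a - m * b for a, b in zip(A[r], A[k])]
    prod = Fraction(1)
    for i in range(n):
        prod *= A[i][i]          # triangular: product of diagonal entries
    return sign * prod

# The worked example above: det(A) = 54.
assert det_gauss([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]) == 54
</syntaxhighlight>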
=== Multiplicativity and matrix groups ===
The determinant is a ''multiplicative map'', i.e., for square matrices <math>A</math> and <math>B</math> of equal size, the determinant of a [[matrix product]] equals the product of their determinants:
:<math>\det(AB) = \det (A) \det (B)</math>
This key fact can be proven by observing that, for a fixed matrix <math>B</math>, both sides of the equation are alternating and multilinear as a function of the columns of <math>A</math>. Moreover, they both take the value <math>\det B</math> when <math>A</math> is the identity matrix. The above-mentioned unique characterization of alternating multilinear maps therefore proves this claim.<ref>Alternatively, {{harvnb|Bourbaki|1998|loc=§III.8, Proposition 1}} proves this result using the [[functoriality]] of the exterior power.</ref>

A matrix <math>A</math> with entries in a [[field (mathematics)|field]] is [[invertible matrix|invertible]] precisely if its determinant is nonzero. This follows from the multiplicativity of the determinant and the formula for the inverse involving the adjugate matrix mentioned below. In this event, the determinant of the inverse matrix is given by
:<math>\det\left(A^{-1}\right) = \frac{1}{\det(A)} = [\det(A)]^{-1}.</math>
In particular, products and inverses of matrices with non-zero determinant (respectively, determinant one) still have this property. Thus, the set of such matrices (of fixed size <math>n</math> over a field <math>K</math>) forms a group known as the [[general linear group]] <math>\operatorname{GL}_n(K)</math> (respectively, a [[subgroup]] called the [[special linear group]] <math>\operatorname{SL}_n(K) \subset \operatorname{GL}_n(K)</math>). More generally, the word "special" indicates the subgroup of another [[matrix group]] consisting of the matrices of determinant one. Examples include the [[special orthogonal group]] (which if ''n'' is 2 or 3 consists of all [[rotation matrix|rotation matrices]]) and the [[special unitary group]].

Because the determinant respects multiplication and inverses, it is in fact a [[group homomorphism]] from <math>\operatorname{GL}_n(K)</math> into the multiplicative group <math>K^\times</math> of nonzero elements of <math>K</math>. This homomorphism is surjective and its kernel is <math>\operatorname{SL}_n(K)</math> (the matrices with determinant one). Hence, by the [[first isomorphism theorem]], this shows that <math>\operatorname{SL}_n(K)</math> is a [[normal subgroup]] of <math>\operatorname{GL}_n(K)</math>, and that the [[quotient group]] <math>\operatorname{GL}_n(K)/\operatorname{SL}_n(K)</math> is isomorphic to <math>K^\times</math>.

The [[Cauchy–Binet formula]] is a generalization of that product formula for ''rectangular'' matrices. This formula can also be recast as a multiplicative formula for [[compound matrix|compound matrices]] whose entries are the determinants of all square submatrices of a given matrix.<ref>{{harvnb|Horn|Johnson|2018|loc=§0.8.7}}</ref><ref>{{harvnb|Kung|Rota|Yan|2009|p=306}}</ref>
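A quick numerical sanity check of the product rule <math>\det(AB) = \det(A)\det(B)</math> and of <math>\det(A^{-1}) = \det(A)^{-1}</math> stated at the start of this section, assuming the [[NumPy]] library is available:

<syntaxhighlight lang="python">
# Numerical check of multiplicativity on random 4x4 matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
</syntaxhighlight>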
=== Laplace expansion ===
[[Laplace expansion]] expresses the determinant of a matrix <math>A</math> [[Recursion|recursively]] in terms of determinants of smaller matrices, known as its [[minor (matrix)|minors]]. The minor <math>M_{i,j}</math> is defined to be the determinant of the <math>(n-1) \times (n-1)</math> matrix that results from <math>A</math> by removing the <math>i</math>-th row and the <math>j</math>-th column. The expression <math>(-1)^{i+j}M_{i,j}</math> is known as a [[cofactor (linear algebra)|cofactor]]. For every <math>i</math>, one has the equality
:<math>\det(A) = \sum_{j=1}^n (-1)^{i+j} a_{i,j} M_{i,j},</math>
which is called the ''Laplace expansion along the {{mvar|i}}-th row''. For example, the Laplace expansion along the first row (<math>i=1</math>) gives the following formula:
:<math>\begin{vmatrix}a&b&c\\ d&e&f\\ g&h&i\end{vmatrix} = a\begin{vmatrix}e&f\\ h&i\end{vmatrix} - b\begin{vmatrix}d&f\\ g&i\end{vmatrix} + c\begin{vmatrix}d&e\\ g&h\end{vmatrix}</math>
Unwinding the determinants of these <math>2 \times 2</math> matrices gives back the Leibniz formula mentioned above. Similarly, the ''Laplace expansion along the <math>j</math>-th column'' is the equality
:<math>\det(A)= \sum_{i=1}^n (-1)^{i+j} a_{i,j} M_{i,j}.</math>
Laplace expansion can be used iteratively for computing determinants, but this approach is inefficient for large matrices. However, it is useful for computing the determinants of highly symmetric matrices such as the [[Vandermonde matrix]]
<math display="block">\begin{vmatrix} 1 & 1 & 1 & \cdots & 1 \\ x_1 & x_2 & x_3 & \cdots & x_n \\ x_1^2 & x_2^2 & x_3^2 & \cdots & x_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_1^{n-1} & x_2^{n-1} & x_3^{n-1} & \cdots & x_n^{n-1} \end{vmatrix} = \prod_{1 \leq i < j \leq n} \left(x_j - x_i\right).</math>
The ''n''-term Laplace expansion along a row or column can be [[Laplace expansion#Laplace expansion of a determinant by complementary minors|generalized]] to write an {{math|''n'' × ''n''}} determinant as a sum of <math>\tbinom nk</math> [[Binomial coefficient|terms]], each the product of the determinant of a {{math|''k'' × ''k''}} [[Minor (linear algebra)|submatrix]] and the determinant of the complementary {{math|(''n'' − ''k'') × (''n'' − ''k'')}} submatrix.

=== Adjugate matrix ===
The [[adjugate matrix]] <math>\operatorname{adj}(A)</math> is the transpose of the matrix of the cofactors, that is,
:<math>(\operatorname{adj}(A))_{i,j} = (-1)^{i+j} M_{j,i}.</math>
For every matrix, one has<ref>{{harvnb|Horn|Johnson|2018|loc=§0.8.2}}.</ref>
:<math>(\det A) I = A\operatorname{adj}A = (\operatorname{adj}A)\,A.</math>
Thus the adjugate matrix can be used for expressing the inverse of a [[nonsingular matrix]]:
:<math>A^{-1} = \frac 1{\det A}\operatorname{adj}A.</math>
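Both the row expansion and the adjugate translate directly into code. The following pure-Python sketch (the helper names <code>minor</code>, <code>det_laplace</code> and <code>adjugate</code> are ad hoc) expands along the first row and verifies <math>A \operatorname{adj}(A) = (\det A)\, I</math> on the example matrix from the Gaussian-elimination section; its cost grows factorially with <math>n</math>, so it is for illustration only.

<syntaxhighlight lang="python">
# Recursive Laplace expansion along the first row, plus the adjugate
# identity A adj(A) = det(A) I built from the same cofactors.

def minor(A, i, j):
    """Matrix A with row i and column j removed."""
    return [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]

def det_laplace(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    # expansion along the first row (i = 0): sum of (-1)^j a_{0,j} M_{0,j}
    return sum((-1) ** j * A[0][j] * det_laplace(minor(A, 0, j))
               for j in range(n))

def adjugate(A):
    n = len(A)
    # transpose of the cofactor matrix: adj(A)[i][j] = (-1)^(i+j) M_{j,i}
    return [[(-1) ** (i + j) * det_laplace(minor(A, j, i))
             for j in range(n)]
            for i in range(n)]

A = [[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]
d = det_laplace(A)                       # 54, as in the example above
adj = adjugate(A)
prod = [[sum(A[i][k] * adj[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[d if i == j else 0 for j in range(3)] for i in range(3)]
</syntaxhighlight>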
=== Block matrices ===
The formula for the determinant of a <math>2 \times 2</math> matrix above continues to hold, under appropriate further assumptions, for a [[block matrix]], i.e., a matrix composed of four submatrices <math>A, B, C, D</math> of dimension <math>m \times m</math>, <math>m \times n</math>, <math>n \times m</math> and <math>n \times n</math>, respectively. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the [[Schur complement]], is
:<math>\det\begin{pmatrix}A& 0\\ C& D\end{pmatrix} = \det(A) \det(D) = \det\begin{pmatrix}A& B\\ 0& D\end{pmatrix}.</math>
If <math>A</math> is [[Invertible matrix|invertible]], then it follows with results from the section on multiplicativity that
:<math>\begin{align} \det\begin{pmatrix}A& B\\ C& D\end{pmatrix} & = \det(A)\det\begin{pmatrix}A& B\\ C& D\end{pmatrix} \underbrace{\det\begin{pmatrix}A^{-1}& -A^{-1} B\\ 0& I_n\end{pmatrix}}_{=\,\det(A^{-1})\,=\,(\det A)^{-1}}\\ & = \det(A) \det\begin{pmatrix}I_m& 0\\ C A^{-1}& D-C A^{-1} B\end{pmatrix}\\ & = \det(A) \det(D - C A^{-1} B), \end{align}</math>
which simplifies to <math>\det(A)\,(D - C A^{-1} B)</math> when <math>D</math> is a <math>1 \times 1</math> matrix.

A similar result holds when <math>D</math> is invertible, namely
:<math>\begin{align} \det\begin{pmatrix}A& B\\ C& D\end{pmatrix} & = \det(D)\det\begin{pmatrix}A& B\\ C& D\end{pmatrix} \underbrace{\det\begin{pmatrix}I_m& 0\\ -D^{-1} C& D^{-1}\end{pmatrix}}_{=\,\det(D^{-1})\,=\,(\det D)^{-1}}\\ & = \det(D) \det\begin{pmatrix}A - B D^{-1} C& B D^{-1}\\ 0& I_n\end{pmatrix}\\ & = \det(D) \det(A - B D^{-1} C). \end{align}</math>
Both results can be combined to derive [[Sylvester's determinant theorem]], which is also stated below.

If the blocks are square matrices of the ''same'' size, further formulas hold. For example, if <math>C</math> and <math>D</math> [[commutativity|commute]] (i.e., <math>CD=DC</math>), then<ref>{{Cite journal|first=J. R.|last=Silvester|title=Determinants of Block Matrices|journal=Math. Gaz.|volume=84|issue=501|year=2000|pages=460–467|jstor=3620776|url=https://hal.archives-ouvertes.fr/hal-01509379/document|doi=10.2307/3620776|s2cid=41879675}}</ref>
:<math>\det\begin{pmatrix}A& B\\ C& D\end{pmatrix} = \det(AD - BC).</math>
This formula has been generalized to matrices composed of more than <math>2 \times 2</math> blocks, again under appropriate commutativity conditions among the individual blocks.<ref>{{cite journal|last1=Sothanaphan|first1=Nat|title=Determinants of block matrices with noncommuting blocks|journal=Linear Algebra and Its Applications|date=January 2017|volume=512|pages=202–218|doi=10.1016/j.laa.2016.10.004|arxiv=1805.06027|s2cid=119272194}}</ref>

For <math>A = D</math> and <math>B = C</math>, the following formula holds (even if <math>A</math> and <math>B</math> do not commute):
:<math>\det\begin{pmatrix}A & B\\ B & A\end{pmatrix} = \det\begin{pmatrix}A+B & B\\ B+A & A\end{pmatrix} = \det\begin{pmatrix}A+B & B\\ 0 & A-B\end{pmatrix} = \det(A+B) \det(A-B).</math>
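A numerical check of the Schur-complement formula <math>\det\begin{pmatrix}A& B\\ C& D\end{pmatrix} = \det(A)\det(D - CA^{-1}B)</math> on randomly chosen blocks, again assuming NumPy:

<syntaxhighlight lang="python">
# Numerical check of the block-determinant formula (A invertible).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2))

M = np.block([[A, B], [C, D]])               # the 5x5 block matrix
schur = D - C @ np.linalg.inv(A) @ B         # the Schur complement of A
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(schur))
</syntaxhighlight>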
=== Sylvester's determinant theorem ===
[[Sylvester's determinant theorem]] states that for ''A'', an {{math|''m'' × ''n''}} matrix, and ''B'', an {{math|''n'' × ''m''}} matrix (so that ''A'' and ''B'' have dimensions allowing them to be multiplied in either order forming a square matrix):
:<math>\det\left(I_\mathit{m} + AB\right) = \det\left(I_\mathit{n} + BA\right),</math>
where ''I''<sub>''m''</sub> and ''I''<sub>''n''</sub> are the {{math|''m'' × ''m''}} and {{math|''n'' × ''n''}} identity matrices, respectively.

From this general result several consequences follow.
{{ordered list | list-style-type=lower-alpha
| For the case of a column vector ''c'' and a row vector ''r'', each with ''m'' components, the formula allows quick calculation of the determinant of a matrix that differs from the identity matrix by a matrix of rank 1:
:<math>\det\left(I_\mathit{m} + cr\right) = 1 + rc.</math>
| More generally,<ref>Proofs can be found in http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/proof003.html</ref> for any invertible {{math|''m'' × ''m''}} matrix ''X'',
:<math>\det(X + AB) = \det(X) \det\left(I_\mathit{n} + BX^{-1}A\right).</math>
| For a column and row vector as above:
:<math>\det(X + cr) = \det(X) \det\left(1 + rX^{-1}c\right) = \det(X) + r\,\operatorname{adj}(X)\,c.</math>
| For square matrices <math>A</math> and <math>B</math> of the same size, the matrices <math>AB</math> and <math>BA</math> have the same characteristic polynomials (hence the same eigenvalues).
}}

A generalization is
:<math>\det\left(Z + AWB\right) = \det\left(Z\right) \det\left(W\right) \det\left(W^{-1} + B Z^{-1} A\right)</math>
(see [[Matrix determinant lemma]]), where ''Z'' is an {{math|''m'' × ''m''}} invertible matrix and ''W'' is an {{math|''n'' × ''n''}} invertible matrix.

=== Sum ===
The determinant of the sum <math>A+B</math> of two square matrices of the same size is not in general expressible in terms of the determinants of ''A'' and of ''B''. However, for [[positive-definite matrix|positive semidefinite matrices]] <math>A</math>, <math>B</math> and <math>C</math> of equal size,
<math display=block>\det(A + B + C) + \det(C) \geq \det(A + C) + \det(B + C),</math>
with the corollary<ref>{{cite arXiv|last1=Lin|first1=Minghua|last2=Sra|first2=Suvrit|title=Completely strong superadditivity of generalized matrix functions|eprint=1410.1958|class=math.FA|year=2014}}</ref><ref>{{cite journal|last1=Paksoy|last2=Turkmen|last3=Zhang|title=Inequalities of Generalized Matrix Functions via Tensor Products|journal=Electronic Journal of Linear Algebra|year=2014|volume=27|pages=332–341|doi=10.13001/1081-3810.1622|url=https://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1062&context=math_facarticles|doi-access=free}}</ref>
<math display=block>\det(A + B) \geq \det(A) + \det(B).</math>
The [[Brunn–Minkowski theorem]] implies that the {{mvar|n}}th root of the determinant is a [[concave function]], when restricted to [[Hermitian matrix|Hermitian]] positive-definite <math>n\times n</math> matrices.<ref>{{cite web|url=https://mathoverflow.net/questions/42594/concavity-of-det1-n-over-hpd-n|title=Concavity of det<sup><sup>1</sup>/<sub>''n''</sub></sup> over HPD<sub>''n''</sub>.|date=Oct 18, 2010|last1=Serre|first1=Denis|website=MathOverflow}}</ref> Therefore, if {{mvar|A}} and {{mvar|B}} are Hermitian positive-definite <math>n\times n</math> matrices, one has
<math display=block>\sqrt[n]{\det(A+B)}\geq\sqrt[n]{\det(A)}+\sqrt[n]{\det(B)},</math>
since the {{mvar|n}}th root of the determinant is a [[homogeneous function]] of degree 1.

==== Sum identity for 2×2 matrices ====
For the special case of <math>2\times 2</math> matrices with complex entries, the determinant of the sum can be written in terms of determinants and traces in the following identity:
:<math>\det(A+B) = \det(A) + \det(B) + \text{tr}(A)\text{tr}(B) - \text{tr}(AB).</math>
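Sylvester's identity and the <math>2 \times 2</math> sum identity above can both be checked numerically on random matrices, again assuming NumPy:

<syntaxhighlight lang="python">
# Numerical checks of det(I_m + AB) = det(I_n + BA) and of the
# 2x2 sum identity det(A+B) = det A + det B + tr(A)tr(B) - tr(AB).
import numpy as np

rng = np.random.default_rng(2)

# Sylvester's determinant theorem with m = 5, n = 2.
A = rng.standard_normal((5, 2))
B = rng.standard_normal((2, 5))
assert np.isclose(np.linalg.det(np.eye(5) + A @ B),
                  np.linalg.det(np.eye(2) + B @ A))

# Sum identity for 2x2 matrices.
P = rng.standard_normal((2, 2))
Q = rng.standard_normal((2, 2))
lhs = np.linalg.det(P + Q)
rhs = (np.linalg.det(P) + np.linalg.det(Q)
       + np.trace(P) * np.trace(Q) - np.trace(P @ Q))
assert np.isclose(lhs, rhs)
</syntaxhighlight>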