Adjugate matrix
{{Short description|For a square matrix, the transpose of the cofactor matrix}} In [[linear algebra]], the '''adjugate''' or '''classical adjoint''' of a [[square matrix]] {{math|'''A'''}}, {{math|adj('''A''')}}, is the [[transpose]] of its [[cofactor matrix]].<ref>{{cite book |first=F. R. |last=Gantmacher |author-link=Felix Gantmacher |title=The Theory of Matrices |volume=1 |publisher=Chelsea |location=New York |year=1960 |isbn=0-8218-1376-5 |pages=76–89 |url=https://books.google.com/books?id=ePFtMw9v92sC&pg=PA76 }}</ref><ref>{{cite book |last=Strang |first=Gilbert |title=Linear Algebra and its Applications |publisher=Harcourt Brace Jovanovich |year=1988 |isbn=0-15-551005-3 |edition=3rd |pages=[https://archive.org/details/linearalgebraits00stra/page/231 231–232] |chapter=Section 4.4: Applications of determinants |author-link=Gilbert Strang |chapter-url=https://archive.org/details/linearalgebraits00stra/page/231 |chapter-url-access=registration}}</ref> It is occasionally known as '''adjunct matrix''',<ref>{{cite journal|author1=Claeyssen, J.C.R.|year=1990|title=On predicting the response of non-conservative linear vibrating systems by using dynamical matrix solutions|journal=Journal of Sound and Vibration|volume=140|issue=1|pages=73–84|doi=10.1016/0022-460X(90)90907-H|bibcode=1990JSV...140...73C }}</ref><ref>{{cite journal|author1=Chen, W.|author2=Chen, W.|author3=Chen, Y.J.|year=2004|title=A characteristic matrix approach for analyzing resonant ring lattice devices|journal=IEEE Photonics Technology Letters|volume=16|issue=2|pages=458–460|doi=10.1109/LPT.2003.823104|bibcode=2004IPTL...16..458C }}</ref> or "adjoint",<ref>{{cite book|first=Alston S.|last=Householder|title=The Theory of Matrices in Numerical Analysis |publisher=Dover Books on Mathematics|year=2006|author-link=Alston Scott Householder | isbn=0-486-44972-6 |pages=166–168 }}</ref> though that normally refers to a different concept, the [[Hermitian adjoint|adjoint operator]] which for a matrix is the 
[[conjugate transpose]]. The product of a matrix with its adjugate gives a [[diagonal matrix]] (entries not on the main diagonal are zero) whose diagonal entries are the [[determinant]] of the original matrix: :<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \det(\mathbf{A}) \mathbf{I},</math> where {{math|'''I'''}} is the [[identity matrix]] of the same size as {{math|'''A'''}}. Consequently, the multiplicative inverse of an [[invertible matrix]] can be found by dividing its adjugate by its determinant. == Definition == The '''adjugate''' of {{math|'''A'''}} is the [[transpose]] of the [[cofactor matrix]] {{math|'''C'''}} of {{math|'''A'''}}, :<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T}.</math> In more detail, suppose {{math|''R''}} is a ([[Unital algebra|unital]]) [[commutative ring]] and {{math|'''A'''}} is an {{math|''n'' × ''n''}} matrix with entries from {{math|''R''}}. The {{math|(''i'', ''j'')}}-''[[minor (linear algebra)|minor]]'' of {{math|'''A'''}}, denoted {{math|'''M'''<sub>''ij''</sub>}}, is the [[determinant]] of the {{math|(''n'' − 1) × (''n'' − 1)}} matrix that results from deleting row {{mvar|i}} and column {{mvar|j}} of {{math|'''A'''}}. 
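These definitions translate directly into code. Below is a minimal pure-Python sketch using 0-indexed rows and columns (the helper names are illustrative, not a standard library API); it computes minors and cofactors by Laplace expansion of the determinant.

```python
def delete_row_col(m, i, j):
    """Submatrix of m with row i and column j removed (0-indexed)."""
    return [[m[r][c] for c in range(len(m)) if c != j]
            for r in range(len(m)) if r != i]

def det(m):
    """Determinant via Laplace expansion along the first row.
    By convention the determinant of a 0 x 0 matrix is 1."""
    if not m:
        return 1
    return sum((-1) ** j * m[0][j] * det(delete_row_col(m, 0, j))
               for j in range(len(m)))

def minor(m, i, j):
    """The (i, j)-minor M_ij of m: det of the submatrix with
    row i and column j deleted (0-indexed i, j)."""
    return det(delete_row_col(m, i, j))

def cofactor(m, i, j):
    """The (i, j) cofactor: (-1)^(i+j) times the (i, j)-minor."""
    return (-1) ** (i + j) * minor(m, i, j)
```

For instance, on the 3 × 3 numeric matrix used later in the article, `det` returns −6 and the (3,2) cofactor (0-indexed (2,1)) is −1.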
The [[Cofactor (linear algebra)#Inverse of a matrix|cofactor matrix]] of {{math|'''A'''}} is the {{math|''n'' × ''n''}} matrix {{math|'''C'''}} whose {{math|(''i'', ''j'')}} entry is the {{math|(''i'', ''j'')}} ''[[cofactor (linear algebra)|cofactor]]'' of {{math|'''A'''}}, which is the {{math|(''i'', ''j'')}}-minor times a sign factor: :<math>\mathbf{C} = \left((-1)^{i+j} \mathbf{M}_{ij}\right)_{1 \le i, j \le n}.</math> The adjugate of {{math|'''A'''}} is the transpose of {{math|'''C'''}}, that is, the {{math|''n'' × ''n''}} matrix whose {{math|(''i'', ''j'')}} entry is the {{math|(''j'',''i'')}} cofactor of {{math|'''A'''}}, :<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \left((-1)^{i+j} \mathbf{M}_{ji}\right)_{1 \le i, j \le n}.</math> === Important consequence === The adjugate is defined so that the product of {{math|'''A'''}} with its adjugate yields a [[diagonal matrix]] whose diagonal entries are the determinant {{math|det('''A''')}}. That is, :<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A}) \mathbf{A} = \det(\mathbf{A}) \mathbf{I},</math> where {{math|'''I'''}} is the {{math|''n'' × ''n''}} [[identity matrix]]. This is a consequence of the [[Laplace expansion]] of the determinant. The above formula implies one of the fundamental results in matrix algebra, that {{math|'''A'''}} is [[invertible matrix|invertible]] [[if and only if]] {{math|det('''A''')}} is an [[unit (ring theory)|invertible element]] of {{math|''R''}}. When this holds, the equation above yields :<math>\begin{align} \operatorname{adj}(\mathbf{A}) &= \det(\mathbf{A}) \mathbf{A}^{-1}, \\ \mathbf{A}^{-1} &= \det(\mathbf{A})^{-1} \operatorname{adj}(\mathbf{A}). \end{align}</math> == Examples == === 1 × 1 generic matrix === Since the determinant of a 0 × 0 matrix is 1, the adjugate of any 1 × 1 matrix ([[complex number|complex]] scalar) is <math>\mathbf{I} = \begin{bmatrix} 1 \end{bmatrix}</math>. 
Observe that <math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A})\mathbf{A} = (\det \mathbf{A}) \mathbf {I}.</math> === 2 × 2 generic matrix === The adjugate of the 2 × 2 matrix :<math>\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}</math> is :<math>\operatorname{adj}(\mathbf{A}) = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.</math> By direct computation, :<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \begin{bmatrix} ad - bc & 0 \\ 0 & ad - bc \end{bmatrix} = (\det \mathbf{A})\mathbf{I}.</math> In this case, it is also true that {{math|det}}({{math|adj}}('''A''')) = {{math|det}}('''A''') and hence that {{math|adj}}({{math|adj}}('''A''')) = '''A'''. <!-- PLEASE DO NOT "CORRECT" WHAT IS NOT BROKEN. CHECK THE INVERSE FIRST. --> === 3 × 3 generic matrix === Consider a 3 × 3 matrix :<math>\mathbf{A} = \begin{bmatrix} a_{1} & a_{2} & a_{3} \\ b_{1} & b_{2} & b_{3} \\ c_{1} & c_{2} & c_{3} \end{bmatrix}.</math> Its cofactor matrix is :<math>\mathbf{C} = \begin{bmatrix} +\begin{vmatrix} b_{2} & b_{3} \\ c_{2} & c_{3} \end{vmatrix} & -\begin{vmatrix} b_{1} & b_{3} \\ c_{1} & c_{3} \end{vmatrix} & +\begin{vmatrix} b_{1} & b_{2} \\ c_{1} & c_{2} \end{vmatrix} \\ \\ -\begin{vmatrix} a_{2} & a_{3} \\ c_{2} & c_{3} \end{vmatrix} & +\begin{vmatrix} a_{1} & a_{3} \\ c_{1} & c_{3} \end{vmatrix} & -\begin{vmatrix} a_{1} & a_{2} \\ c_{1} & c_{2} \end{vmatrix} \\ \\ +\begin{vmatrix} a_{2} & a_{3} \\ b_{2} & b_{3} \end{vmatrix} & -\begin{vmatrix} a_{1} & a_{3} \\ b_{1} & b_{3} \end{vmatrix} & +\begin{vmatrix} a_{1} & a_{2} \\ b_{1} & b_{2} \end{vmatrix} \end{bmatrix},</math> where :<math>\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \det\!\begin{bmatrix} a & b \\ c & d \end{bmatrix} .</math> Its adjugate is the transpose of its cofactor matrix, :<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \begin{bmatrix} +\begin{vmatrix} b_{2} & b_{3} \\ c_{2} & c_{3} \end{vmatrix} & -\begin{vmatrix} a_{2} & a_{3} \\ c_{2} & c_{3} 
\end{vmatrix} & +\begin{vmatrix} a_{2} & a_{3} \\ b_{2} & b_{3} \end{vmatrix} \\ & & \\ -\begin{vmatrix} b_{1} & b_{3} \\ c_{1} & c_{3} \end{vmatrix} & +\begin{vmatrix} a_{1} & a_{3} \\ c_{1} & c_{3} \end{vmatrix} & -\begin{vmatrix} a_{1} & a_{3} \\ b_{1} & b_{3} \end{vmatrix} \\ & & \\ +\begin{vmatrix} b_{1} & b_{2} \\ c_{1} & c_{2} \end{vmatrix} & -\begin{vmatrix} a_{1} & a_{2} \\ c_{1} & c_{2} \end{vmatrix} & +\begin{vmatrix} a_{1} & a_{2} \\ b_{1} & b_{2} \end{vmatrix} \end{bmatrix}.</math> === 3 × 3 numeric matrix === As a specific example, we have :<math>\operatorname{adj}\!\begin{bmatrix} -3 & 2 & -5 \\ -1 & 0 & -2 \\ 3 & -4 & 1 \end{bmatrix} = \begin{bmatrix} -8 & 18 & -4 \\ -5 & 12 & -1 \\ 4 & -6 & 2 \end{bmatrix}.</math> It is easy to check the adjugate is the [[inverse matrix|inverse]] times the determinant, {{math|−6}}. The {{math|−1}} in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of '''A'''. This cofactor is computed using the [[submatrix]] obtained by deleting the third row and second column of the original matrix '''A''', :<math>\begin{bmatrix} -3 & -5 \\ -1 & -2 \end{bmatrix}.</math> The (3,2) cofactor is a sign times the determinant of this submatrix: :<math>(-1)^{3+2}\operatorname{det}\!\begin{bmatrix}-3&-5\\-1&-2\end{bmatrix} = -(-3 \cdot -2 - -5 \cdot -1) = -1,</math> and this is the (2,3) entry of the adjugate. == Properties == For any {{math|''n'' × ''n''}} matrix {{math|'''A'''}}, elementary computations show that adjugates have the following properties: * <math>\operatorname{adj}(\mathbf{I}) = \mathbf{I}</math>, where <math>\mathbf{I}</math> is the [[identity matrix]]. * <math>\operatorname{adj}(\mathbf{0}) = \mathbf{0}</math>, where <math>\mathbf{0}</math> is the [[zero matrix]], except that if <math>n=1</math> then <math>\operatorname{adj}(\mathbf{0}) = \mathbf{I}</math>. 
* <math>\operatorname{adj}(c \mathbf{A}) = c^{n - 1}\operatorname{adj}(\mathbf{A})</math> for any scalar {{mvar|c}}. * <math>\operatorname{adj}(\mathbf{A}^\mathsf{T}) = \operatorname{adj}(\mathbf{A})^\mathsf{T}</math>. * <math>\det(\operatorname{adj}(\mathbf{A})) = (\det \mathbf{A})^{n-1}</math>. * If {{math|'''A'''}} is invertible, then <math>\operatorname{adj}(\mathbf{A}) = (\det \mathbf{A}) \mathbf{A}^{-1}</math>. It follows that: ** {{math|adj('''A''')}} is invertible with inverse {{math|(det '''A''')<sup>−1</sup>'''A'''}}. ** {{math|1=adj('''A'''<sup>−1</sup>) = adj('''A''')<sup>−1</sup>}}. * {{math|adj('''A''')}} is entrywise [[polynomial]] in {{math|'''A'''}}. In particular, over the [[real number|real]] or complex numbers, the adjugate is a [[smooth function]] of the entries of {{math|'''A'''}}. Over the complex numbers, * <math>\operatorname{adj}(\overline\mathbf{A}) = \overline{\operatorname{adj}(\mathbf{A})}</math>, where the bar denotes [[complex conjugation]]. * <math>\operatorname{adj}(\mathbf{A}^*) = \operatorname{adj}(\mathbf{A})^*</math>, where the asterisk denotes [[conjugate transpose]]. Suppose that {{math|'''B'''}} is another {{math|''n'' × ''n''}} matrix. Then :<math>\operatorname{adj}(\mathbf{AB}) = \operatorname{adj}(\mathbf{B})\operatorname{adj}(\mathbf{A}).</math> This can be [[mathematical proof|proved]] in three ways. One way, valid for any commutative ring, is a direct computation using the [[Cauchy–Binet formula]]. 
The second way, valid for the real or complex numbers, is to first observe that for invertible matrices {{math|'''A'''}} and {{math|'''B'''}}, :<math>\operatorname{adj}(\mathbf{B})\operatorname{adj}(\mathbf{A}) = (\det \mathbf{B})\mathbf{B}^{-1}(\det \mathbf{A})\mathbf{A}^{-1} = (\det \mathbf{AB})(\mathbf{AB})^{-1} = \operatorname{adj}(\mathbf{AB}).</math> Because every non-invertible matrix is the limit of invertible matrices, [[continuous function|continuity]] of the adjugate then implies that the formula remains true when one of {{math|'''A'''}} or {{math|'''B'''}} is not invertible. A [[corollary]] of the previous formula is that, for any non-negative [[integer]] {{mvar|k}}, :<math>\operatorname{adj}(\mathbf{A}^k) = \operatorname{adj}(\mathbf{A})^k.</math> If {{math|'''A'''}} is invertible, then the above formula also holds for negative {{mvar|k}}. From the identity :<math>(\mathbf{A} + \mathbf{B})\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{B} = \det(\mathbf{A} + \mathbf{B})\mathbf{B} = \mathbf{B}\operatorname{adj}(\mathbf{A} + \mathbf{B})(\mathbf{A} + \mathbf{B}),</math> we deduce :<math>\mathbf{A}\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{B} = \mathbf{B}\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{A}.</math> Suppose that {{math|'''A'''}} [[commuting matrices|commutes]] with {{math|'''B'''}}. Multiplying the identity {{math|1='''AB''' = '''BA'''}} on the left and right by {{math|adj('''A''')}} proves that :<math>\det(\mathbf{A})\operatorname{adj}(\mathbf{A})\mathbf{B} = \det(\mathbf{A})\mathbf{B}\operatorname{adj}(\mathbf{A}).</math> If {{math|'''A'''}} is invertible, this implies that {{math|adj('''A''')}} also commutes with {{math|'''B'''}}. Over the real or complex numbers, continuity implies that {{math|adj('''A''')}} commutes with {{math|'''B'''}} even when {{math|'''A'''}} is not invertible. 
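The identities above, multiplicativity of the adjugate and the commuting property, are easy to spot-check numerically. A self-contained pure-Python sketch with integer matrices (helper names `det`, `adjugate`, and `matmul` are our own, not a standard API):

```python
def delete_row_col(m, i, j):
    """Submatrix of m with row i and column j removed (0-indexed)."""
    return [[m[r][c] for c in range(len(m)) if c != j]
            for r in range(len(m)) if r != i]

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if not m:
        return 1
    return sum((-1) ** j * m[0][j] * det(delete_row_col(m, 0, j))
               for j in range(len(m)))

def adjugate(m):
    """adj(A): the (i, j) entry is the (j, i) cofactor of A."""
    n = len(m)
    return [[(-1) ** (i + j) * det(delete_row_col(m, j, i))
             for j in range(n)] for i in range(n)]

def matmul(a, b):
    """Product of two square matrices of the same size."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
# Fundamental identity: A adj(A) = det(A) I.
assert matmul(A, adjugate(A)) == [[det(A), 0], [0, det(A)]]
# Multiplicativity: adj(AB) = adj(B) adj(A).
assert adjugate(matmul(A, B)) == matmul(adjugate(B), adjugate(A))
# Commuting property with a singular matrix: S is singular,
# and T = 2I + S is a polynomial in S, so it commutes with S.
S = [[1, 1], [1, 1]]
T = [[3, 1], [1, 3]]
assert matmul(S, T) == matmul(T, S)
assert matmul(adjugate(S), T) == matmul(T, adjugate(S))
```

Exact integer arithmetic is used throughout, so these checks are valid over the ring of integers, not just up to floating-point error.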
Finally, there is a third, more general proof, which requires only that an ''n'' × ''n'' matrix has entries over a [[field (mathematics)|field]] with at least 2''n'' + 1 elements (e.g. a 5 × 5 matrix over the integers [[modular arithmetic|modulo]] 11). {{math|det('''A'''+''t'''''I''')}} is a polynomial in ''t'' with [[degree of a polynomial|degree]] at most ''n'', so it has at most ''n'' [[root of a polynomial|roots]]. Note that the ''ij''th entry of {{math|adj(('''A'''+''t'''''I''')('''B'''))}} is a polynomial in ''t'' of degree at most ''n'', and likewise for {{math|adj('''A'''+''t'''''I''')adj('''B''')}}. These two polynomials at the ''ij''th entry agree on at least ''n'' + 1 points, since there are at least ''n'' + 1 elements of the field where {{math|'''A'''+''t'''''I'''}} is invertible, and the identity has been proven for invertible matrices. Polynomials of degree at most ''n'' that agree on ''n'' + 1 points must be identical: subtracting one from the other gives a polynomial of degree at most ''n'' with ''n'' + 1 roots, which must therefore be identically zero. As the two polynomials are identical, they take the same value for every value of ''t''; in particular, they take the same value at ''t'' = 0.

Using the above properties and other elementary computations, it is straightforward to show that if {{math|'''A'''}} has one of the following properties, then {{math|adj('''A''')}} does as well:
* [[Upper triangular matrix|upper triangular]],
* [[Lower triangular matrix|lower triangular]],
* [[Diagonal matrix|diagonal]],
* [[Orthogonal matrix|orthogonal]],
* [[Unitary matrix|unitary]],
* [[Symmetric matrix|symmetric]],
* [[Hermitian matrix|Hermitian]],
* [[Normal matrix|normal]].

If {{math|'''A'''}} is [[Skew-symmetric matrix|skew-symmetric]], then {{math|adj('''A''')}} is skew-symmetric for even ''n'' and symmetric for odd ''n''.
Similarly, if {{math|'''A'''}} is [[Skew-Hermitian matrix|skew-Hermitian]], then {{math|adj('''A''')}} is skew-Hermitian for even ''n'' and Hermitian for odd ''n''. If {{math|'''A'''}} is invertible, then, as noted above, there is a formula for {{math|adj('''A''')}} in terms of the determinant and inverse of {{math|'''A'''}}. When {{math|'''A'''}} is not invertible, the adjugate satisfies different but closely related formulas. * If {{math|1=rk('''A''') ≤ ''n'' − 2}}, then {{math|1=adj('''A''') = '''0'''}}. * If {{math|1=rk('''A''') = ''n'' − 1}}, then {{math|1=rk(adj('''A''')) = 1}}. (Some minor is non-zero, so {{math|adj('''A''')}} is non-zero and hence has [[rank (linear algebra)|rank]] at least one; the identity {{math|1=adj('''A''')'''A''' = '''0'''}} implies that the [[dimension (vector space)|dimension]] of the [[nullspace]] of {{math|adj('''A''')}} is at least {{math|''n'' − 1}}, so its rank is at most one.) It follows that {{math|1=adj('''A''') = ''α'''''xy'''<sup>T</sup>}}, where {{math|''α''}} is a scalar and {{math|'''x'''}} and {{math|'''y'''}} are vectors such that {{math|1='''Ax''' = '''0'''}} and {{math|1='''A'''<sup>T</sup> '''y''' = '''0'''}}. === Column substitution and Cramer's rule === {{see also|Cramer's rule}} Partition {{math|'''A'''}} into [[column vector]]s: :<math>\mathbf{A} = \begin{bmatrix}\mathbf{a}_1 & \cdots & \mathbf{a}_n\end{bmatrix}.</math> Let {{math|'''b'''}} be a column vector of size {{math|''n''}}. Fix {{math|1 ≤ ''i'' ≤ ''n''}} and consider the matrix formed by replacing column {{math|''i''}} of {{math|'''A'''}} by {{math|'''b'''}}: :<math>(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\ \stackrel{\text{def}}{=}\ \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \cdots & \mathbf{a}_n \end{bmatrix}.</math> Laplace expand the determinant of this matrix along column {{mvar|i}}. The result is entry {{mvar|i}} of the product {{math|adj('''A''')'''b'''}}. 
Collecting these determinants for the different possible {{mvar|i}} yields an equality of column vectors :<math>\left(\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\right)_{i=1}^n = \operatorname{adj}(\mathbf{A})\mathbf{b}.</math> This formula has the following concrete consequence. Consider the [[linear system of equations]] :<math>\mathbf{A}\mathbf{x} = \mathbf{b}.</math> Assume that {{math|'''A'''}} is [[singular matrix|non-singular]]. Multiplying this system on the left by {{math|adj('''A''')}} and dividing by the determinant yields :<math>\mathbf{x} = \frac{\operatorname{adj}(\mathbf{A})\mathbf{b}}{\det \mathbf{A}}.</math> Applying the previous formula to this situation yields '''Cramer's rule''', :<math>x_i = \frac{\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})}{\det \mathbf{A}},</math> where {{math|''x''<sub>''i''</sub>}} is the {{mvar|i}}th entry of {{math|'''x'''}}. === Characteristic polynomial === Let the [[characteristic polynomial]] of {{math|'''A'''}} be :<math>p(s) = \det(s\mathbf{I} - \mathbf{A}) = \sum_{i=0}^n p_i s^i \in R[s].</math> The first [[divided difference]] of {{math|''p''}} is a [[symmetric polynomial]] of degree {{math|''n'' − 1}}, :<math>\Delta p(s, t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1} s^j t^k \in R[s, t].</math> Multiply {{math|''s'''''I''' − '''A'''}} by its adjugate. Since {{math|1=''p''('''A''') = '''0'''}} by the [[Cayley–Hamilton theorem]], some elementary manipulations reveal :<math>\operatorname{adj}(s\mathbf{I} - \mathbf{A}) = \Delta p(s\mathbf{I}, \mathbf{A}).</math> In particular, the [[resolvent formalism|resolvent]] of {{math|'''A'''}} is defined to be :<math>R(z; \mathbf{A}) = (z\mathbf{I} - \mathbf{A})^{-1},</math> and by the above formula, this is equal to :<math>R(z; \mathbf{A}) = \frac{\Delta p(z\mathbf{I}, \mathbf{A})}{p(z)}.</math> === Jacobi's formula === {{main|Jacobi's formula}} The adjugate also appears in [[Jacobi's formula]] for the [[derivative]] of the determinant. 
If {{math|'''A'''(''t'')}} is [[continuously differentiable]], then :<math>\frac{d(\det \mathbf{A})}{dt}(t) = \operatorname{tr}\left(\operatorname{adj}(\mathbf{A}(t)) \mathbf{A}'(t)\right).</math> It follows that the [[total derivative]] of the determinant is the transpose of the adjugate: :<math>d(\det \mathbf{A})_{\mathbf{A}_0} = \operatorname{adj}(\mathbf{A}_0)^{\mathsf{T}}.</math> === Cayley–Hamilton formula === {{main|Cayley–Hamilton theorem}} Let {{math|''p''<sub>'''A'''</sub>(''t'')}} be the characteristic polynomial of {{math|'''A'''}}. The [[Cayley–Hamilton theorem]] states that :<math>p_{\mathbf{A}}(\mathbf{A}) = \mathbf{0}.</math> Separating the constant term and multiplying the equation by {{math|adj('''A''')}} gives an expression for the adjugate that depends only on {{math|'''A'''}} and the coefficients of {{math|''p''<sub>'''A'''</sub>(''t'')}}. These coefficients can be explicitly represented in terms of [[trace (linear algebra)|traces]] of powers of {{math|'''A'''}} using complete exponential [[Bell polynomials]]. 
The resulting formula is
:<math>\operatorname{adj}(\mathbf{A}) = \sum_{s=0}^{n-1} \mathbf{A}^{s} \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{\ell=1}^{n-1} \frac{(-1)^{k_\ell+1}}{\ell^{k_\ell}k_{\ell}!}\operatorname{tr}(\mathbf{A}^\ell)^{k_\ell},</math>
where {{mvar|n}} is the dimension of {{math|'''A'''}}, and the sum is taken over {{mvar|s}} and all sequences of {{math|''k''<sub>''ℓ''</sub> ≥ 0}} satisfying the linear [[Diophantine equation]]
:<math>s+\sum_{\ell=1}^{n-1}\ell k_\ell = n - 1.</math>
For the 2 × 2 case, this gives
:<math>\operatorname{adj}(\mathbf{A})=\mathbf{I}_2(\operatorname{tr}\mathbf{A}) - \mathbf{A}.</math>
For the 3 × 3 case, this gives
:<math>\operatorname{adj}(\mathbf{A})=\frac{1}{2}\mathbf{I}_3\!\left( (\operatorname{tr}\mathbf{A})^2-\operatorname{tr}\mathbf{A}^2\right) - \mathbf{A}(\operatorname{tr}\mathbf{A}) + \mathbf{A}^2 .</math>
For the 4 × 4 case, this gives
:<math>\operatorname{adj}(\mathbf{A})= \frac{1}{6}\mathbf{I}_4\!\left( (\operatorname{tr}\mathbf{A})^3 - 3\operatorname{tr}\mathbf{A}\operatorname{tr}\mathbf{A}^2 + 2\operatorname{tr}\mathbf{A}^{3} \right) - \frac{1}{2}\mathbf{A}\!\left( (\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right) + \mathbf{A}^2(\operatorname{tr}\mathbf{A}) - \mathbf{A}^3.</math>
The same formula follows directly from the terminating step of the [[Faddeev–LeVerrier algorithm]], which efficiently determines the [[characteristic polynomial]] of {{math|'''A'''}}. In general, the adjugate of a matrix of arbitrary dimension {{mvar|N}} can be computed using the Einstein summation convention:
:<math>(\operatorname{adj}(\mathbf{A}))_{i_N}^{j_N} = \frac{1}{(N-1)!} \epsilon_{i_1 i_2 \ldots i_N} \epsilon^{j_1 j_2 \ldots j_N} A_{j_1}^{i_1} A_{j_2}^{i_2} \ldots A_{j_{N-1}}^{i_{N-1}}.</math>

== Relation to exterior algebras ==
The adjugate can be viewed in abstract terms using [[exterior algebra]]s. Let {{math|''V''}} be an {{math|''n''}}-dimensional [[vector space]].
The [[exterior product]] defines a bilinear pairing <math display=block>V \times \wedge^{n-1} V \to \wedge^n V.</math> Abstractly, <math>\wedge^n V</math> is [[isomorphic]] to {{math|'''R'''}}, and under any such isomorphism the exterior product is a [[perfect pairing]]. That is, it yields an isomorphism <math display=block>\phi \colon V\ \xrightarrow{\cong}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V).</math> This isomorphism sends each {{math|'''v''' ∈ ''V''}} to the map <math>\phi_{\mathbf{v}}</math> defined by <math display=block>\phi_\mathbf{v}(\alpha) = \mathbf{v} \wedge \alpha.</math> Suppose that {{math|''T'' : ''V'' → ''V''}} is a [[linear transformation]]. [[Pullback]] by the {{math|(''n'' − 1)}}th exterior power of {{math|''T''}} induces a morphism of {{math|Hom}} spaces. The '''adjugate''' of {{math|''T''}} is the composite <math display=block>V\ \xrightarrow{\phi}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)\ \xrightarrow{(\wedge^{n-1} T)^*}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)\ \xrightarrow{\phi^{-1}}\ V.</math> If {{math|1=''V'' = '''R'''<sup>''n''</sup>}} is endowed with its [[canonical basis]] {{math|'''e'''<sub>1</sub>, ..., '''e'''<sub>''n''</sub>}}, and if the matrix of {{math|''T''}} in this [[basis (linear algebra)|basis]] is {{math|'''A'''}}, then the adjugate of {{math|''T''}} is the adjugate of {{math|'''A'''}}. To see why, give <math>\wedge^{n-1} \mathbf{R}^n</math> the basis <math display=block>\{\mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_k \wedge \dots \wedge \mathbf{e}_n\}_{k=1}^n.</math> Fix a basis vector {{math|'''e'''<sub>''i''</sub>}} of {{math|'''R'''<sup>''n''</sup>}}. 
The image of {{math|'''e'''<sub>''i''</sub>}} under <math>\phi</math> is determined by where it sends basis vectors: <math display=block>\phi_{\mathbf{e}_i}(\mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_k \wedge \dots \wedge \mathbf{e}_n) = \begin{cases} (-1)^{i-1} \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n, &\text{if}\ k = i, \\ 0 &\text{otherwise.} \end{cases}</math> On basis vectors, the {{math|(''n'' − 1)}}st exterior power of {{math|''T''}} is <math display=block>\mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_j \wedge \dots \wedge \mathbf{e}_n \mapsto \sum_{k=1}^n (\det A_{jk}) \mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_k \wedge \dots \wedge \mathbf{e}_n.</math> Each of these terms maps to zero under <math>\phi_{\mathbf{e}_i}</math> except the {{math|1=''k'' = ''i''}} term. Therefore, the pullback of <math>\phi_{\mathbf{e}_i}</math> is the linear transformation for which <math display=block>\mathbf{e}_1 \wedge \dots \wedge \hat\mathbf{e}_j \wedge \dots \wedge \mathbf{e}_n \mapsto (-1)^{i-1} (\det A_{ji}) \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n.</math> That is, it equals <math display=block>\sum_{j=1}^n (-1)^{i+j} (\det A_{ji})\phi_{\mathbf{e}_j}.</math> Applying the inverse of <math>\phi</math> shows that the adjugate of {{math|''T''}} is the linear transformation for which <math display=block>\mathbf{e}_i \mapsto \sum_{j=1}^n (-1)^{i+j}(\det A_{ji})\mathbf{e}_j.</math> Consequently, its matrix representation is the adjugate of {{math|'''A'''}}. If {{math|''V''}} is endowed with an [[inner product]] and a volume form, then the map {{math|''φ''}} can be decomposed further. In this case, {{math|''φ''}} can be understood as the composite of the [[Hodge star operator]] and dualization. 
Specifically, if {{math|ω}} is the volume form, then it, together with the inner product, determines an isomorphism
<math display=block>\omega^\vee \colon \wedge^n V \to \mathbf{R}.</math>
This induces an isomorphism
<math display=block>\operatorname{Hom}(\wedge^{n-1} \mathbf{R}^n, \wedge^n \mathbf{R}^n) \cong \wedge^{n-1} (\mathbf{R}^n)^\vee.</math>
A vector {{math|'''v'''}} in {{math|'''R'''<sup>''n''</sup>}} corresponds to the linear functional
<math display=block>(\alpha \mapsto \omega^\vee(\mathbf{v} \wedge \alpha)) \in \wedge^{n-1} (\mathbf{R}^n)^\vee.</math>
By the definition of the Hodge star operator, this linear functional is dual to {{math|*'''v'''}}. That is, {{math|ω<sup>∨</sup>∘ φ}} equals {{math|'''v''' ↦ *'''v'''<sup>∨</sup>}}.

== Higher adjugates ==
Let {{math|'''A'''}} be an {{math|''n'' × ''n''}} matrix, and fix {{math|''r'' ≥ 0}}. The '''{{math|''r''}}th higher adjugate''' of {{math|'''A'''}} is an <math display="inline">\binom{n}{r} \!\times\! \binom{n}{r}</math> matrix, denoted {{math|adj<sub>''r''</sub> '''A'''}}, whose entries are indexed by size {{math|''r''}} [[subset]]s {{math|''I''}} and {{math|''J''}} of {{math|{1, ..., ''n''<nowiki>}</nowiki>}}.{{Citation needed|date=November 2023}} Let {{math|''I''{{i sup|c}}}} and {{math|''J''{{i sup|c}}}} denote the [[complement (set theory)|complements]] of {{math|''I''}} and {{math|''J''}}, respectively. Also let <math>\mathbf{A}_{I^c, J^c}</math> denote the submatrix of {{math|'''A'''}} containing those rows and columns whose indices are in {{math|''I''{{i sup|c}}}} and {{math|''J''{{i sup|c}}}}, respectively. Then the {{math|(''I'', ''J'')}} entry of {{math|adj<sub>''r''</sub> '''A'''}} is
:<math>(-1)^{\sigma(I) + \sigma(J)}\det \mathbf{A}_{J^c, I^c},</math>
where {{math|σ(''I'')}} and {{math|σ(''J'')}} are the sums of the elements of {{math|''I''}} and {{math|''J''}}, respectively.
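This definition can be transcribed directly into code. A pure-Python sketch, using 1-based index sets as in the text; the lexicographic ordering of the subsets and the helper names are our own choices, not part of the definition:

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row.
    By convention the determinant of a 0 x 0 matrix is 1."""
    if not m:
        return 1
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def higher_adjugate(m, r):
    """adj_r(A): rows and columns indexed by the size-r subsets of
    {1, ..., n} in lexicographic order; the (I, J) entry is
    (-1)^(sigma(I) + sigma(J)) * det(A_{J^c, I^c})."""
    n = len(m)
    subsets = list(combinations(range(1, n + 1), r))  # 1-based index sets

    def entry(I, J):
        sign = (-1) ** (sum(I) + sum(J))
        ic = [i - 1 for i in range(1, n + 1) if i not in I]  # I^c, 0-based
        jc = [j - 1 for j in range(1, n + 1) if j not in J]  # J^c, 0-based
        # A_{J^c, I^c}: rows from J^c, columns from I^c.
        return sign * det([[m[row][col] for col in ic] for row in jc])

    return [[entry(I, J) for J in subsets] for I in subsets]
```

On a 2 × 2 example this recovers the basic properties listed below: `higher_adjugate(A, 0)` is the 1 × 1 matrix `[[det(A)]]`, `higher_adjugate(A, 1)` is the usual adjugate, and `higher_adjugate(A, 2)` is `[[1]]`.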
Basic properties of higher adjugates include {{Citation needed|date=November 2023}}: * {{math|1=adj<sub>0</sub>('''A''') = det '''A'''}}. * {{math|1=adj<sub>1</sub>('''A''') = adj '''A'''}}. * {{math|1=adj<sub>''n''</sub>('''A''') = 1}}. * {{math|1=adj<sub>''r''</sub>('''BA''') = adj<sub>''r''</sub>('''A''') adj<sub>''r''</sub>('''B''')}}. * <math>\operatorname{adj}_r(\mathbf{A})C_r(\mathbf{A}) = C_r(\mathbf{A})\operatorname{adj}_r(\mathbf{A}) = (\det \mathbf{A})I_{\binom{n}{r}}</math>, where {{math|''C''<sub>''r''</sub>('''A''')}} denotes the {{math|''r''}}th [[compound matrix]]. Higher adjugates may be defined in abstract algebraic terms in a similar fashion to the usual adjugate, substituting <math>\wedge^r V</math> and <math>\wedge^{n-r} V</math> for <math>V</math> and <math>\wedge^{n-1} V</math>, respectively. == Iterated adjugates == [[Iterated function|Iteratively]] taking the adjugate of an invertible matrix '''A''' {{mvar|k}} times yields :<math>\overbrace{\operatorname{adj}\dotsm\operatorname{adj}}^k(\mathbf{A})=\det(\mathbf{A})^{\frac{(n-1)^k-(-1)^k}n}\mathbf{A}^{(-1)^k},</math> :<math>\det(\overbrace{\operatorname{adj}\dotsm\operatorname{adj}}^k(\mathbf{A}))=\det(\mathbf{A})^{(n-1)^k}.</math> For example, :<math>\operatorname{adj}(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A})^{n - 2} \mathbf{A}.</math> :<math>\det(\operatorname{adj}(\operatorname{adj}(\mathbf{A}))) = \det(\mathbf{A})^{(n - 1)^2}.</math> == See also == * [[Cayley–Hamilton theorem]] * [[Cramer's rule]] * [[Trace diagram]] * [[Jacobi's formula]] * [[Faddeev–LeVerrier algorithm]] * [[Compound matrix]] == References == {{Reflist}} == Bibliography == * Roger A. Horn and Charles R. Johnson (2013), ''Matrix Analysis'', Second Edition. Cambridge University Press, {{ISBN|978-0-521-54823-6}} * Roger A. Horn and Charles R. Johnson (1991), ''Topics in Matrix Analysis''. 
Cambridge University Press, {{ISBN|978-0-521-46713-1}}

== External links ==
* [http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/property.html#adjoint Matrix Reference Manual]
* [http://www.elektro-energetika.cz/calculations/matreg.php?language=english Online matrix calculator (determinant, trace, inverse, adjoint, transpose)], which computes the adjugate of matrices up to order 8
* {{cite web|url=http://www.wolframalpha.com/input/?i=adjugate+of+{+{+a%2C+b%2C+c+}%2C+{+d%2C+e%2C+f+}%2C+{+g%2C+h%2C+i+}+}|title=<nowiki>Adjugate of { { a, b, c }, { d, e, f }, { g, h, i } }</nowiki>|work=[[Wolfram Alpha]]}}

{{Matrix classes}}

[[Category:Matrix theory]]
[[Category:Linear algebra]]