=== A synthesis of the first two proofs ===
{{see also|Faddeev–LeVerrier algorithm}}

In the first proof, one was able to determine the coefficients {{math|''B''<sub>''i''</sub>}} of {{math|''B''}} based on the right-hand fundamental relation for the adjugate only. In fact, the first {{math|''n''}} equations derived can be interpreted as determining the quotient {{math|''B''}} of the [[Euclidean division]] of the polynomial {{math|''p''(''t'')''I<sub>n</sub>''}} on the left by the [[monic polynomial]] {{math|''I<sub>n</sub>t'' − ''A''}}, while the final equation expresses the fact that the remainder is zero. This division is performed in the ring of polynomials with matrix coefficients. Indeed, even over a non-commutative ring, Euclidean division by a monic polynomial {{math|''P''}} is defined, and always produces a unique quotient and remainder with the same [[degree of a polynomial|degree]] condition as in the commutative case, provided one specifies on which side {{math|''P''}} is to be a factor (here, on the left). To see that quotient and remainder are unique (which is the important part of the statement here), it suffices to write <math>PQ+r = PQ'+r'</math> as <math>P(Q-Q') = r'-r</math> and observe that since {{math|''P''}} is monic, {{math|''P''(''Q''−''Q''′)}} cannot have degree less than that of {{math|''P''}} unless {{math|''Q'' {{=}} ''Q''′}}; but the difference of remainders {{math|''r''′ − ''r''}} does have degree less than that of {{math|''P''}}, so necessarily {{math|''Q'' {{=}} ''Q''′}} and hence {{math|''r'' {{=}} ''r''′}}.

But the dividend {{math|''p''(''t'')''I<sub>n</sub>''}} and divisor {{math|''I<sub>n</sub>t'' − ''A''}} used here both lie in the subring {{math|(''R''[''A''])[''t'']}}, where {{math|''R''[''A'']}} is the subring of the matrix ring {{math|''M''(''n'', ''R'')}} generated by {{math|''A''}}: the {{math|''R''}}-linear [[linear span|span]] of all powers of {{math|''A''}}. Therefore, the Euclidean division can in fact be performed within that ''commutative'' polynomial ring, and of course it then gives the same quotient {{math|''B''}} and remainder 0 as in the larger ring; in particular this shows that {{math|''B''}} in fact lies in {{math|(''R''[''A''])[''t'']}}. But, in this commutative setting, it is valid to set {{math|''t''}} to {{math|''A''}} in the equation
<math display="block">p(t)I_n=(tI_n-A)B;</math>
in other words, to apply the evaluation map
<math display="block">\operatorname{ev}_A:(R[A])[t]\to R[A],</math>
which is a ring homomorphism, giving
<math display="block">p(A)=0\cdot\operatorname{ev}_A(B)=0</math>
just like in the second proof, as desired.

In addition to proving the theorem, the above argument tells us that the coefficients {{math|''B<sub>i</sub>''}} of {{math|''B''}} are polynomials in {{math|''A''}}, while from the second proof we only knew that they lie in the centralizer {{math|''Z''}} of {{math|''A''}}; in general {{math|''Z''}} is a larger subring than {{math|''R''[''A'']}}, and not necessarily commutative. In particular the constant term {{math|1=''B''<sub>0</sub> = adj(−''A'')}} lies in {{math|''R''[''A'']}}. Since {{math|''A''}} is an arbitrary square matrix, this proves that {{math|adj(''A'')}} can always be expressed as a polynomial in {{math|''A''}} (with coefficients that depend on {{math|''A''}}).
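As a concrete illustration of this argument, take {{math|1=''n'' = 2}}, so that {{math|1=''p''(''t'') = ''t''<sup>2</sup> + ''c''<sub>1</sub>''t'' + ''c''<sub>0</sub>}} with {{math|1=''c''<sub>1</sub> = −tr(''A'')}} and {{math|1=''c''<sub>0</sub> = det(''A'')}}. Since {{math|1=adj(''M'') = tr(''M'')''I''<sub>2</sub> − ''M''}} for any {{math|2 × 2}} matrix {{math|''M''}}, the quotient is {{math|1=''B'' = adj(''tI''<sub>2</sub> − ''A'') = ''tI''<sub>2</sub> + ''A'' + ''c''<sub>1</sub>''I''<sub>2</sub>}}, and the exact division reads
<math display="block">p(t)I_2=(tI_2-A)\left(tI_2+A+c_1I_2\right),</math>
an identity between polynomials with coefficients in the commutative ring {{math|''R''[''A'']}}. Applying the ring homomorphism {{math|ev<sub>''A''</sub>}} to both sides sends the factor {{math|''tI''<sub>2</sub> − ''A''}} to {{math|1=''A'' − ''A'' = 0}}, and therefore gives {{math|1=''p''(''A'') = 0}}.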
In fact, the equations found in the first proof allow successively expressing <math>B_{n-1}, \ldots, B_1, B_0</math> as polynomials in {{math|''A''}}, which leads to the identity
{{Equation box 1 |indent =:: |equation =<math>\operatorname{adj}(-A)=\sum_{i=1}^nc_iA^{i-1},</math> |cellpadding= 6 |border |border colour = #0070BF |bgcolor=#FAFFFB}}
valid for all {{math|''n'' × ''n''}} matrices, where
<math display="block">p(t)=t^n+c_{n-1}t^{n-1}+\cdots+c_1t+c_0</math>
is the characteristic polynomial of {{mvar|A}} (with the convention {{math|1=''c<sub>n</sub>'' = 1}}). Note that this identity also implies the statement of the Cayley–Hamilton theorem: one may move {{math|adj(−''A'')}} to the right-hand side, multiply the resulting equation (on the left or on the right) by {{math|''A''}}, and use the fact that
<math display="block">-A\cdot \operatorname{adj}(-A) = \operatorname{adj}(-A)\cdot (-A) = \det(-A) I_n = c_0I_n.</math>
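For instance, for a {{math|2 × 2}} matrix <math>A=\begin{pmatrix}a&b\\c&d\end{pmatrix}</math> one has {{math|1=''c''<sub>2</sub> = 1}}, {{math|1=''c''<sub>1</sub> = −(''a'' + ''d'')}} and {{math|1=''c''<sub>0</sub> = ''ad'' − ''bc''}}, and the identity above becomes
<math display="block">\operatorname{adj}(-A)=\begin{pmatrix}-d&b\\c&-a\end{pmatrix}=A-(a+d)I_2=c_1I_2+c_2A.</math>
Multiplying by {{math|−''A''}} and using {{math|1=−''A'' ⋅ adj(−''A'') = det(−''A'')''I''<sub>2</sub> = ''c''<sub>0</sub>''I''<sub>2</sub>}} then yields {{math|1=''A''<sup>2</sup> − (''a'' + ''d'')''A'' + (''ad'' − ''bc'')''I''<sub>2</sub> = 0}}, which is the Cayley–Hamilton theorem in this case.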