===Inverse of a matrix===
{{main|Invertible matrix}}
One can write down the inverse of an [[invertible matrix]] by computing its cofactors and using [[Cramer's rule]], as follows. The matrix formed by all of the cofactors of a square matrix {{math|'''A'''}} is called the '''cofactor matrix''' (also called the '''matrix of cofactors''' or, sometimes, ''comatrix''):
<math display=block>\mathbf C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}</math>
Then the inverse of {{math|'''A'''}} is the transpose of the cofactor matrix times the reciprocal of the determinant of {{math|'''A'''}}:
<math display=block>\mathbf A^{-1} = \frac{1}{\operatorname{det}(\mathbf A)} \mathbf C^\mathsf{T}.</math>
The transpose of the cofactor matrix is called the [[adjugate]] matrix (also called the ''classical adjoint'') of {{math|'''A'''}}.

The above formula can be generalized as follows: let
<math display=block>\begin{align} I &= 1 \le i_1 < i_2 < \ldots < i_k \le n, \\[2pt] J &= 1 \le j_1 < j_2 < \ldots < j_k \le n, \end{align}</math>
be ordered sequences (in natural order) of indices (here {{math|'''A'''}} is an {{math|''n'' × ''n''}} matrix). Then<ref name="Prasolov1994">{{cite book|author=Viktor Vasilʹevich Prasolov|title=Problems and Theorems in Linear Algebra|url=https://books.google.com/books?id=b4yKAwAAQBAJ&pg=PR15|date=13 June 1994|publisher=American Mathematical Soc.|isbn=978-0-8218-0236-6|pages=15–}}</ref>
<math display=block>[\mathbf A^{-1}]_{I,J} = \pm\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},</math>
where {{math|''I''′, ''J''′}} denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to {{math|''I'', ''J''}}, so that every index {{math|1, ..., ''n''}} appears exactly once in either {{mvar|I}} or {{math|''I''′}}, but not in both (and similarly for {{mvar|J}} and {{math|''J''′}}), and {{math|['''A''']<sub>''I'', ''J''</sub>}} denotes the determinant of the submatrix of {{math|'''A'''}} formed by choosing the rows of the index set {{mvar|I}} and the columns of the index set {{mvar|J}}. Also, <math>[\mathbf A]_{I,J} = \det \bigl( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \bigr).</math>

A simple proof can be given using the wedge product. Indeed,
<math display=block>\bigl[ \mathbf A^{-1} \bigr]_{I,J} (e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}},</math>
where <math>e_1, \ldots, e_n</math> are the basis vectors. Acting by {{math|'''A'''}} on both sides, one gets
<math display=block>\begin{align} &\ \bigl[\mathbf A^{-1} \bigr]_{I,J} \det \mathbf A \, (e_1\wedge\ldots \wedge e_n) \\[2pt] =&\ \pm (e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}}) \\[2pt] =&\ \pm [\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n). \end{align}</math>
The sign can be worked out to be
<math display=block>(-1)^{\sum_{s=1}^{k} i_s \,-\, \sum_{s=1}^{k} j_s},</math>
so the sign is determined by the sums of the elements of {{mvar|I}} and {{mvar|J}}.
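For example (a direct specialization, stated here only for illustration), taking {{math|1=''k'' = 1}}, {{math|1=''I'' = (''i'')}} and {{math|1=''J'' = (''j'')}} recovers the adjugate formula above: the complementary sequences select the submatrix of {{math|'''A'''}} obtained by deleting row {{mvar|j}} and column {{mvar|i}}, whose determinant is the minor {{math|''M''<sub>''ji''</sub>}}, and the sign is {{math|1=(−1)<sup>''i'' − ''j''</sup> = (−1)<sup>''i'' + ''j''</sup>}}, so that
<math display=block>[\mathbf A^{-1}]_{ij} = \frac{(-1)^{i+j} M_{ji}}{\det \mathbf A} = \frac{C_{ji}}{\det \mathbf A},</math>
which is precisely the {{math|(''i'', ''j'')}} entry of {{math|'''C'''<sup>T</sup>/det('''A''')}}.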