=== Analytic solution ===
{{Main article|Cramer's rule}}
Writing the transpose of the [[matrix of cofactors]], known as an [[adjugate matrix]], may also be an efficient way to calculate the inverse of ''small'' matrices, but the [[Recursion|recursive]] method is inefficient for large matrices. To determine the inverse, we calculate a matrix of cofactors:
: <math>\mathbf{A}^{-1} = {1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\mathbf{C}^\mathrm{T} = {1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}
\begin{pmatrix}
\mathbf{C}_{11} & \mathbf{C}_{21} & \cdots & \mathbf{C}_{n1} \\
\mathbf{C}_{12} & \mathbf{C}_{22} & \cdots & \mathbf{C}_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{C}_{1n} & \mathbf{C}_{2n} & \cdots & \mathbf{C}_{nn} \\
\end{pmatrix}
</math>
so that
: <math>\left(\mathbf{A}^{-1}\right)_{ij} = {1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\left(\mathbf{C}^{\mathrm{T}}\right)_{ij} = {1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\left(\mathbf{C}_{ji}\right)</math>
where {{math|{{abs|'''A'''}}}} is the [[determinant]] of {{math|'''A'''}}, {{math|'''C'''}} is the matrix of cofactors, and {{math|'''C'''<sup>T</sup>}} represents the matrix [[transpose]].

==== Inversion of 2 × 2 matrices ====
The ''cofactor equation'' listed above yields the following result for {{nowrap|2 × 2}} matrices. Inversion of these matrices can be done as follows:<ref>{{cite book |title=Introduction to linear algebra |edition=3rd |first1=Gilbert |last1=Strang |publisher=SIAM |year=2003 |isbn=978-0-9614088-9-3 |page=71 |url=https://books.google.com/books?id=Gv4pCVyoUVYC }}, [https://books.google.com/books?id=Gv4pCVyoUVYC&pg=PA71 Chapter 2, page 71]</ref>
: <math>\mathbf{A}^{-1} = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix}^{-1} =
\frac{1}{\det \mathbf{A}} \begin{bmatrix} \,\,\,d & \!\!-b \\ -c & \,a \\ \end{bmatrix} =
\frac{1}{ad - bc} \begin{bmatrix} \,\,\,d & \!\!-b \\ -c & \,a \\ \end{bmatrix}.</math>
This is possible because {{math|1/(''ad'' − ''bc'')}} is the [[reciprocal (mathematics)|reciprocal]] of the determinant of the matrix in question, and the same strategy could be used for other matrix sizes.

The Cayley–Hamilton method gives
: <math>\mathbf{A}^{-1} = \frac{1}{\det \mathbf{A}} \left[ \left( \operatorname{tr}\mathbf{A} \right) \mathbf{I} - \mathbf{A} \right].</math>
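For a concrete illustration (the entries below are chosen here arbitrarily and are not taken from the cited source), setting {{math|1=''a'' = 1}}, {{math|1=''b'' = 2}}, {{math|1=''c'' = 3}}, {{math|1=''d'' = 4}} gives {{math|1=''ad'' − ''bc'' = −2}} and
: <math>\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^{-1} = \frac{1}{-2} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ \tfrac{3}{2} & -\tfrac{1}{2} \end{bmatrix},</math>
which can be checked by multiplying the result by the original matrix to recover the identity. The cofactor formula and the Cayley–Hamilton expression produce the same intermediate matrix here, since for {{nowrap|2 × 2}} matrices the adjugate equals {{math|(tr '''A''')'''I''' − '''A'''}}.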
==== Inversion of 3 × 3 matrices ====
A [[computationally efficient]] {{nowrap|3 × 3}} matrix inversion is given by
: <math>\mathbf{A}^{-1} = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i\\ \end{bmatrix}^{-1} =
\frac{1}{\det(\mathbf{A})} \begin{bmatrix} \, A & \, B & \,C \\ \, D & \, E & \, F \\ \, G & \, H & \, I\\ \end{bmatrix}^\mathrm{T} =
\frac{1}{\det(\mathbf{A})} \begin{bmatrix} \, A & \, D & \,G \\ \, B & \, E & \,H \\ \, C & \,F & \, I\\ \end{bmatrix}</math>
(where the [[Scalar (mathematics)|scalar]] {{mvar|A}} is not to be confused with the matrix {{math|'''A'''}}). If the determinant is non-zero, the matrix is invertible, with the entries of the intermediary matrix on the right side above given by
: <math>\begin{alignat}{6}
A &={}& (ei - fh), &\quad& D &={}& -(bi - ch), &\quad& G &={}& (bf - ce), \\
B &={}& -(di - fg), &\quad& E &={}& (ai - cg), &\quad& H &={}& -(af - cd), \\
C &={}& (dh - eg), &\quad& F &={}& -(ah - bg), &\quad& I &={}& (ae - bd). \\
\end{alignat}</math>
The determinant of {{math|'''A'''}} can be computed by applying the [[rule of Sarrus]] as follows:
: <math>\det(\mathbf{A}) = aA + bB + cC.</math>

The Cayley–Hamilton decomposition gives
: <math>\mathbf{A}^{-1} = \frac{1}{\det (\mathbf{A})}\left( \tfrac{1}{2}\left[ (\operatorname{tr}\mathbf{A})^{2} - \operatorname{tr}(\mathbf{A}^{2})\right] \mathbf{I} - \mathbf{A}\operatorname{tr}\mathbf{A} + \mathbf{A}^{2}\right).</math>

{{anchor|Inversion of 3×3 matrices based on vector products}}
The general {{nowrap|3 × 3}} inverse can be expressed concisely in terms of the [[cross product]] and [[triple product]]. If a matrix <math>\mathbf{A} = \begin{bmatrix} \mathbf{x}_0 & \mathbf{x}_1 & \mathbf{x}_2\end{bmatrix}</math> (consisting of three column vectors, <math>\mathbf{x}_0</math>, <math>\mathbf{x}_1</math>, and <math>\mathbf{x}_2</math>) is invertible, its inverse is given by
: <math>\mathbf{A}^{-1} = \frac{1}{\det(\mathbf A)}\begin{bmatrix} {(\mathbf{x}_1\times\mathbf{x}_2)}^\mathrm{T} \\ {(\mathbf{x}_2\times\mathbf{x}_0)}^\mathrm{T} \\ {(\mathbf{x}_0\times\mathbf{x}_1)}^\mathrm{T} \end{bmatrix}.</math>
The determinant of {{math|'''A'''}}, {{math|det('''A''')}}, is equal to the triple product of {{math|'''x'''{{sub|0}}}}, {{math|'''x'''{{sub|1}}}}, and {{math|'''x'''{{sub|2}}}}, the volume of the [[parallelepiped]] formed by the rows or columns:
: <math>\det(\mathbf{A}) = \mathbf{x}_0\cdot(\mathbf{x}_1\times\mathbf{x}_2).</math>

The correctness of the formula can be checked by using cross- and triple-product properties and by noting that for groups, left and right inverses always coincide. Intuitively, because of the cross products, each row of {{math|'''A'''{{sup|−1}}}} is orthogonal to the two non-corresponding columns of {{math|'''A'''}} (causing the off-diagonal terms of <math>\mathbf{I} = \mathbf{A}^{-1}\mathbf{A}</math> to be zero). Dividing by
: <math>\det(\mathbf{A}) = \mathbf{x}_0\cdot(\mathbf{x}_1\times\mathbf{x}_2)</math>
causes the diagonal entries of {{math|1='''I''' = '''A'''{{sup|−1}}'''A'''}} to be unity. For example, the first diagonal is:
: <math>1 = \frac{1}{\mathbf{x_0}\cdot(\mathbf{x}_1\times\mathbf{x}_2)} \mathbf{x_0}\cdot(\mathbf{x}_1\times\mathbf{x}_2).</math>

==== Inversion of 4 × 4 matrices ====
With increasing dimension, expressions for the inverse of {{math|'''A'''}} get complicated. For {{math|1=''n'' = 4}}, the Cayley–Hamilton method leads to an expression that is still tractable:
:<math>\begin{align}
\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})}\Bigl( &\tfrac{1}{6}\bigl( (\operatorname{tr}\mathbf{A})^{3} - 3\operatorname{tr}\mathbf{A}\operatorname{tr}(\mathbf{A}^{2}) + 2\operatorname{tr}(\mathbf{A}^{3})\bigr) \mathbf{I} \\[-3mu]
&\ \ \ - \tfrac{1}{2}\mathbf{A}\bigl((\operatorname{tr}\mathbf{A})^{2} - \operatorname{tr}(\mathbf{A}^{2})\bigr) + \mathbf{A}^{2}\operatorname{tr}\mathbf{A} - \mathbf{A}^{3} \Bigr).
\end{align}</math>
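The {{nowrap|4 × 4}} trace formula above can also be evaluated directly in code. The following is a minimal numerical sketch (not drawn from the cited sources; the function name, the test matrix, and the use of the NumPy library are choices made here purely for illustration):

<syntaxhighlight lang="python">
import numpy as np

def inverse4_cayley_hamilton(A):
    """Invert a 4 x 4 matrix using the Cayley-Hamilton trace formula above."""
    A = np.asarray(A, dtype=float)
    A2 = A @ A            # A squared
    A3 = A2 @ A           # A cubed
    t1, t2, t3 = np.trace(A), np.trace(A2), np.trace(A3)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("matrix is singular (zero determinant)")
    coeff_I = (t1**3 - 3.0 * t1 * t2 + 2.0 * t3) / 6.0   # scalar multiplying I
    return (coeff_I * np.eye(4)
            - 0.5 * (t1**2 - t2) * A
            + t1 * A2
            - A3) / det_A

# Quick check against NumPy's general-purpose inverse on an arbitrary invertible matrix:
M = np.array([[4.0, 1.0, 0.0, 2.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0],
              [2.0, 0.0, 1.0, 5.0]])
assert np.allclose(inverse4_cayley_hamilton(M), np.linalg.inv(M))
</syntaxhighlight>

As with the cofactor approach, this is mainly of interest for small, fixed-size matrices; for larger matrices, decomposition-based methods are normally used instead.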