==Applications of minors and cofactors==

===Cofactor expansion of the determinant===
{{main|Laplace expansion}}

The cofactors feature prominently in [[Laplace expansion|Laplace's formula]] for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an {{math|''n'' × ''n''}} matrix {{math|1='''A''' = (''a{{sub|ij}}'')}}, the determinant of {{math|'''A'''}}, denoted {{math|det('''A''')}}, can be written as the sum of the cofactors of any row or column of the matrix, each multiplied by the entry that generated it. In other words, defining <math>C_{ij} = (-1)^{i+j} M_{ij},</math> the cofactor expansion along the {{mvar|j}}-th column gives:
<math display=block>\begin{align} \det(\mathbf A) &= a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j} + \cdots + a_{nj}C_{nj} \\[2pt] &= \sum_{i=1}^{n} a_{ij} C_{ij} \\[2pt] &= \sum_{i=1}^{n} a_{ij}(-1)^{i+j} M_{ij} \end{align}</math>

The cofactor expansion along the {{mvar|i}}-th row gives:
<math display=block>\begin{align} \det(\mathbf A) &= a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3} + \cdots + a_{in}C_{in} \\[2pt] &= \sum_{j=1}^{n} a_{ij} C_{ij} \\[2pt] &= \sum_{j=1}^{n} a_{ij} (-1)^{i+j} M_{ij} \end{align}</math>

===Inverse of a matrix===
{{main|Invertible matrix}}

One can write down the inverse of an [[invertible matrix]] by computing its cofactors, as in [[Cramer's rule]], as follows. The matrix formed by all of the cofactors of a square matrix {{math|'''A'''}} is called the '''cofactor matrix''' (also called the '''matrix of cofactors''' or, sometimes, the ''comatrix''):
<math display=block>\mathbf C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}</math>

Then the inverse of {{math|'''A'''}} is the transpose of the cofactor matrix times the reciprocal of the determinant of {{math|'''A'''}}:
<math display=block>\mathbf A^{-1} = \frac{1}{\operatorname{det}(\mathbf A)} \mathbf C^\mathsf{T}.</math>

The transpose of the cofactor matrix is called the [[adjugate]] matrix (also called the ''classical adjoint'') of {{math|'''A'''}}.
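To make the two formulas concrete, here is a minimal Python sketch of both: the determinant by cofactor expansion along the first row, and the inverse via the adjugate. It uses no external libraries, and the function names (<code>det</code>, <code>minor</code>, <code>inverse</code>) are illustrative rather than any standard API.

<syntaxhighlight lang="python">
def det(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    # det(A) = sum_j a_{0j} * (-1)^j * M_{0j}  (0-based indices)
    return sum((-1) ** j * A[0][j] * minor(A, 0, j) for j in range(len(A)))

def minor(A, i, j):
    """First minor M_ij: determinant of A with row i and column j deleted."""
    return det([row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i])

def inverse(A):
    """Inverse via the adjugate: A^{-1} = (1/det(A)) * C^T."""
    n, d = len(A), det(A)
    C = [[(-1) ** (i + j) * minor(A, i, j) for j in range(n)] for i in range(n)]
    return [[C[j][i] / d for j in range(n)] for i in range(n)]  # transpose of C, over det(A)

A = [[2, 0, 1], [1, 3, 2], [1, 1, 2]]
print(det(A))      # 6
print(inverse(A))  # (1/6) * [[4, 1, -3], [0, 3, -3], [-2, -2, 6]], as floats
</syntaxhighlight>

Cofactor expansion evaluates {{math|''n''!}} signed products, so it is practical only for small or specially structured matrices; its value here is that it mirrors the formulas above exactly.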
The above formula can be generalized as follows: Let
<math display=block>\begin{align} I &= 1 \le i_1 < i_2 < \ldots < i_k \le n, \\[2pt] J &= 1 \le j_1 < j_2 < \ldots < j_k \le n, \end{align}</math>
be ordered sequences (in natural order) of indices (here {{math|'''A'''}} is an {{math|''n'' × ''n''}} matrix). Then<ref name="Prasolov1994">{{cite book|author=Viktor Vasil'evich Prasolov|title=Problems and Theorems in Linear Algebra|url=https://books.google.com/books?id=b4yKAwAAQBAJ&pg=PR15|date=13 June 1994|publisher=American Mathematical Soc.|isbn=978-0-8218-0236-6|pages=15–}}</ref>
<math display=block>[\mathbf A^{-1}]_{I,J} = \pm\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},</math>
where {{math|''I′'', ''J′''}} denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to {{math|''I'', ''J''}}, so that every index {{math|1, ..., ''n''}} appears exactly once in either {{mvar|I}} or {{mvar|I′}}, but not in both (and similarly for {{mvar|J}} and {{mvar|J′}}), and {{math|['''A''']<sub>''I'', ''J''</sub>}} denotes the determinant of the submatrix of {{math|'''A'''}} formed by choosing the rows with index in {{mvar|I}} and the columns with index in {{mvar|J}}. Also, <math>[\mathbf A]_{I,J} = \det \bigl( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \bigr).</math>

A simple proof can be given using the wedge product. Indeed,
<math display=block>\bigl[ \mathbf A^{-1} \bigr]_{I,J} (e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}},</math>
where <math>e_1, \ldots, e_n</math> are the basis vectors. Acting by {{math|'''A'''}} on both sides, one gets
<math display=block>\begin{align} &\ \bigl[\mathbf A^{-1} \bigr]_{I,J} \det \mathbf A (e_1\wedge\ldots \wedge e_n) \\[2pt] =&\ \pm (e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}}) \\[2pt] =&\ \pm [\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n). \end{align}</math>

The sign can be worked out to be
<math display=block>(-1)^{\sum_{s=1}^{k} i_s \,-\, \sum_{s=1}^{k} j_s},</math>
so the sign is determined by the sums of the elements of {{mvar|I}} and {{mvar|J}}.
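The identity, including the sign rule, is easy to verify numerically. The following self-contained Python sketch (the helpers <code>det</code> and <code>sub</code> are ad hoc, not a standard API) checks it for every pair of 2-element index sets of a 4 × 4 matrix over the rationals. Index sets are 0-based, which leaves the sign unchanged, since shifting every index by 1 cancels in the difference of the two sums.

<syntaxhighlight lang="python">
from fractions import Fraction
from itertools import combinations

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def sub(A, I, J):
    """Submatrix with rows I and columns J, so det(sub(A, I, J)) = [A]_{I,J}."""
    return [[A[i][j] for j in J] for i in I]

n, k = 4, 2
A = [[Fraction(x) for x in row]
     for row in [[2, 0, 1, 3], [1, 3, 2, 1], [0, 1, 1, 2], [1, 0, 0, 1]]]
d = det(A)

# Inverse via the adjugate formula of the previous subsection.
Ainv = [[(-1) ** (i + j) * det(sub(A, [r for r in range(n) if r != j],
                                      [c for c in range(n) if c != i])) / d
         for j in range(n)] for i in range(n)]

for I in combinations(range(n), k):
    for J in combinations(range(n), k):
        Ic = [r for r in range(n) if r not in I]  # complementary rows I'
        Jc = [c for c in range(n) if c not in J]  # complementary columns J'
        # (-1)^(sum i_s - sum j_s) has the same parity as sum(I) + sum(J)
        sign = (-1) ** (sum(I) + sum(J))
        assert det(sub(Ainv, I, J)) == sign * det(sub(A, Jc, Ic)) / d
print("identity verified for all 36 pairs (I, J)")
</syntaxhighlight>

Exact rational arithmetic (<code>fractions.Fraction</code>) matters here: with floating point, the exact assertion would have to be replaced by an approximate comparison.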
===Other applications===

Given an {{math|''m'' × ''n''}} matrix with [[real number|real]] entries (or entries from any other [[field (mathematics)|field]]) and [[rank (matrix theory)|rank]] {{mvar|r}}, there exists at least one non-zero {{math|''r'' × ''r''}} minor, while all larger minors are zero.

We will use the following notation for minors: if {{math|'''A'''}} is an {{math|''m'' × ''n''}} matrix, {{mvar|I}} is a [[subset]] of {{math|{1, ..., ''m''} }} with {{mvar|k}} elements, and {{mvar|J}} is a subset of {{math|{1, ..., ''n''} }} with {{mvar|k}} elements, then we write {{math|['''A''']<sub>''I'', ''J''</sub>}} for the {{math|''k'' × ''k''}} minor of {{math|'''A'''}} that corresponds to the rows with index in {{mvar|I}} and the columns with index in {{mvar|J}}.

* If {{math|1=''I'' = ''J''}}, then {{math|['''A''']<sub>''I'', ''J''</sub>}} is called a ''principal minor''.
* If the matrix that corresponds to a principal minor is a square upper-left [[Matrix (mathematics)#Submatrix|submatrix]] of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to {{mvar|k}}, also known as a leading principal submatrix), then the principal minor is called a ''leading principal minor (of order {{mvar|k}})'' or ''corner (principal) minor (of order {{mvar|k}})''.<ref name="Encyclopedia of Mathematics">{{cite book |chapter=Minor |title=Encyclopedia of Mathematics |url=http://www.encyclopediaofmath.org/index.php?title=Minor&oldid=30176 }}</ref> For an {{math|''n'' × ''n''}} square matrix, there are {{mvar|n}} leading principal minors.
* A ''basic minor'' of a matrix is the determinant of a square submatrix that is of maximal size with nonzero determinant.<ref name="Encyclopedia of Mathematics" />
* For [[Hermitian matrix|Hermitian matrices]], the leading principal minors can be used to test for [[positive-definite matrix|positive definiteness]], and the principal minors can be used to test for [[positive-semidefinite matrix|positive semidefiniteness]]. See [[Sylvester's criterion]] for more details.

Both the formula for ordinary [[matrix multiplication]] and the [[Cauchy–Binet formula]] for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that {{math|'''A'''}} is an {{math|''m'' × ''n''}} matrix, {{math|'''B'''}} is an {{math|''n'' × ''p''}} matrix, {{mvar|I}} is a [[subset]] of {{math|{1, ..., ''m''} }} with {{mvar|k}} elements, and {{mvar|J}} is a subset of {{math|{1, ..., ''p''} }} with {{mvar|k}} elements. Then
<math display=block>[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J},</math>
where the sum extends over all subsets {{mvar|K}} of {{math|{1, ..., ''n''} }} with {{mvar|k}} elements. This formula is a straightforward extension of the Cauchy–Binet formula.
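This, too, can be checked directly. The short sketch below (same ad-hoc <code>det</code> and <code>sub</code> helpers as above, with 0-based index sets) verifies the identity for all pairs of 2-element row and column sets, with {{math|'''A'''}} of size 3 × 4 and {{math|'''B'''}} of size 4 × 3:

<syntaxhighlight lang="python">
from itertools import combinations

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def sub(A, I, J):
    """Submatrix with rows I and columns J, so det(sub(A, I, J)) = [A]_{I,J}."""
    return [[A[i][j] for j in J] for i in I]

m, n, p, k = 3, 4, 3, 2
A = [[1, 2, 0, 1], [0, 1, 3, 1], [2, 0, 1, 0]]
B = [[1, 0, 2], [2, 1, 0], [0, 1, 1], [1, 1, 1]]
AB = [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(p)] for i in range(m)]

for I in combinations(range(m), k):
    for J in combinations(range(p), k):
        # [AB]_{I,J} = sum over all k-element K of [A]_{I,K} * [B]_{K,J}
        rhs = sum(det(sub(A, I, K)) * det(sub(B, K, J))
                  for K in combinations(range(n), k))
        assert det(sub(AB, I, J)) == rhs
print("all minors of AB match the sum over K")
</syntaxhighlight>

Taking {{math|1=''k'' = 1}} recovers the ordinary matrix-multiplication formula entry by entry, and {{math|1=''k'' = ''m'' = ''p''}} gives the Cauchy–Binet formula.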