==Block matrix operations==

===Transpose===
Let
:<math>A = \begin{bmatrix}
A_{11} & A_{12} & \cdots & A_{1q} \\
A_{21} & A_{22} & \cdots & A_{2q} \\
\vdots & \vdots & \ddots & \vdots \\
A_{p1} & A_{p2} & \cdots & A_{pq}
\end{bmatrix}</math>
where <math>A_{ij} \in \mathbb{C}^{k_i \times \ell_j}</math>. (This matrix <math>A</math> will be reused in {{section link||Addition}} and {{section link||Multiplication}}.) Then its transpose is
:<math>A^T = \begin{bmatrix}
A_{11}^T & A_{21}^T & \cdots & A_{p1}^T \\
A_{12}^T & A_{22}^T & \cdots & A_{p2}^T \\
\vdots & \vdots & \ddots & \vdots \\
A_{1q}^T & A_{2q}^T & \cdots & A_{pq}^T
\end{bmatrix},</math><ref name=":2" /><ref name=":1" />
and the same equation holds with the transpose replaced by the conjugate transpose.<ref name=":2" />

====Block transpose====
A special form of matrix [[transpose]] can also be defined for block matrices, where individual blocks are reordered but not transposed. Let <math>A=(B_{ij})</math> be a <math>k \times l</math> block matrix with <math>m \times n</math> blocks <math>B_{ij}</math>; the block transpose of <math>A</math> is then the <math>l \times k</math> block matrix <math>A^\mathcal{B}</math> with <math>m \times n</math> blocks <math>\left(A^\mathcal{B}\right)_{ij} = B_{ji}</math>.<ref>{{cite thesis |last=Mackey |first=D. Steven |date=2006 |title=Structured linearizations for matrix polynomials |publisher=University of Manchester |issn=1749-9097 |oclc=930686781 |url=http://eprints.maths.manchester.ac.uk/314/1/mackey06.pdf}}</ref>

As with the conventional transpose operator, the block transpose is a [[linear mapping]] such that <math>(A + C)^\mathcal{B} = A^\mathcal{B} + C^\mathcal{B}</math>.<ref name=":1" /> However, in general the property <math>(A C)^\mathcal{B} = C^\mathcal{B} A^\mathcal{B}</math> does not hold unless the blocks of <math>A</math> and <math>C</math> commute.

===Addition===
Let
:<math>B = \begin{bmatrix}
B_{11} & B_{12} & \cdots & B_{1s} \\
B_{21} & B_{22} & \cdots & B_{2s} \\
\vdots & \vdots & \ddots & \vdots \\
B_{r1} & B_{r2} & \cdots & B_{rs}
\end{bmatrix},</math>
where <math>B_{ij} \in \mathbb{C}^{m_i \times n_j}</math>, and let <math>A</math> be the matrix defined in {{section link||Transpose}}. (This matrix <math>B</math> will be reused in {{section link||Multiplication}}.) If <math>p = r</math>, <math>q = s</math>, <math>k_i = m_i</math>, and <math>\ell_j = n_j</math>, then
:<math>A + B = \begin{bmatrix}
A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1q} + B_{1q} \\
A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2q} + B_{2q} \\
\vdots & \vdots & \ddots & \vdots \\
A_{p1} + B_{p1} & A_{p2} + B_{p2} & \cdots & A_{pq} + B_{pq}
\end{bmatrix}.</math><ref name=":2" />
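For example, take the <math>2 \times 2</math> block matrix with <math>1 \times 2</math> blocks
:<math>A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \end{bmatrix}, \qquad B_{11} = \begin{bmatrix} 1 & 2 \end{bmatrix}, \; B_{12} = \begin{bmatrix} 3 & 4 \end{bmatrix}, \; B_{21} = \begin{bmatrix} 5 & 6 \end{bmatrix}, \; B_{22} = \begin{bmatrix} 7 & 8 \end{bmatrix}.</math>
Its block transpose swaps the off-diagonal blocks without transposing any block,
:<math>A^\mathcal{B} = \begin{bmatrix} B_{11} & B_{21} \\ B_{12} & B_{22} \end{bmatrix} = \begin{bmatrix} 1 & 2 & 5 & 6 \\ 3 & 4 & 7 & 8 \end{bmatrix},</math>
which is still a <math>2 \times 4</math> matrix, whereas the ordinary transpose <math>A^T</math> is <math>4 \times 2</math>.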
===Multiplication===
It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however, and requires "[[Conformable matrix|conformable]] partitions"<ref>{{cite book |last=Eves |first=Howard |author-link=Howard Eves |title=Elementary Matrix Theory |year=1980 |publisher=Dover |location=New York |isbn=0-486-63946-0 |page=[https://archive.org/details/elementarymatrix0000eves_r2m2/page/37 37] |url=https://archive.org/details/elementarymatrix0000eves_r2m2 |url-access=registration |edition=reprint |access-date=24 April 2013 |quote=A partitioning as in Theorem 1.9.4 is called a ''conformable partition'' of ''A'' and ''B''.}}</ref> between two matrices <math>A</math> and <math>B</math> such that all submatrix products that will be used are defined.<ref>{{cite book |last=Anton |first=Howard |title=Elementary Linear Algebra |year=1994 |publisher=John Wiley |location=New York |isbn=0-471-58742-7 |page=36 |edition=7th |quote=...provided the sizes of the submatrices of A and B are such that the indicated operations can be performed.}}</ref>

{{Cquote
| quote = Two matrices <math>A</math> and <math>B</math> are said to be partitioned conformally for the product <math>AB</math>, when <math>A</math> and <math>B</math> are partitioned into submatrices and if the multiplication <math>AB</math> is carried out treating the submatrices as if they are scalars, but keeping the order, and when all products and sums of submatrices involved are defined.
| author = Arak M. Mathai and Hans J. Haubold
| source = ''Linear Algebra: A Course for Physicists and Engineers''<ref>{{Cite book |last1=Mathai |first1=Arakaparampil M. |title=Linear Algebra: a course for physicists and engineers |last2=Haubold |first2=Hans J. |date=2017 |publisher=De Gruyter |isbn=978-3-11-056259-0 |series=De Gruyter textbook |location=Berlin Boston |pages=162}}</ref>
}}

Let <math>A</math> be the matrix defined in {{section link||Transpose}}, and let <math>B</math> be the matrix defined in {{section link||Addition}}. If <math>q = r</math> and <math>\ell_k = m_k</math> for each <math>k</math> (so that the column partition of <math>A</math> matches the row partition of <math>B</math>), then the matrix product
:<math>C = AB</math>
can be performed blockwise, yielding <math>C</math> as a <math>p \times s</math> block matrix with blocks <math>C_{ij} \in \mathbb{C}^{k_i \times n_j}</math>. The blocks of the resulting matrix <math>C</math> are calculated as
:<math>C_{ij} = \sum_{k=1}^{q} A_{ik}B_{kj},</math><ref name=":3">{{Cite book |last=Johnston |first=Nathaniel |title=Introduction to linear and matrix algebra |date=2021 |publisher=Springer Nature |isbn=978-3-030-52811-9 |location=Cham, Switzerland |pages=30,425}}</ref>
or, using the [[Einstein notation]] that implicitly sums over repeated indices,
:<math>C_{ij} = A_{ik}B_{kj}.</math>
Depicting <math>C</math> as a matrix, we have
:<math>C = AB = \begin{bmatrix}
\sum_{k=1}^{q} A_{1k}B_{k1} & \sum_{k=1}^{q} A_{1k}B_{k2} & \cdots & \sum_{k=1}^{q} A_{1k}B_{ks} \\
\sum_{k=1}^{q} A_{2k}B_{k1} & \sum_{k=1}^{q} A_{2k}B_{k2} & \cdots & \sum_{k=1}^{q} A_{2k}B_{ks} \\
\vdots & \vdots & \ddots & \vdots \\
\sum_{k=1}^{q} A_{pk}B_{k1} & \sum_{k=1}^{q} A_{pk}B_{k2} & \cdots & \sum_{k=1}^{q} A_{pk}B_{ks}
\end{bmatrix}.</math><ref name=":2" />
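For example, partitioning a <math>2 \times 3</math> matrix into blocks <math>A_{11} \in \mathbb{C}^{2 \times 2}</math>, <math>A_{12} \in \mathbb{C}^{2 \times 1}</math> and a <math>3 \times 2</math> matrix into blocks <math>B_{11} \in \mathbb{C}^{2 \times 2}</math>, <math>B_{21} \in \mathbb{C}^{1 \times 2}</math>,
:<math>A = \left[\begin{array}{cc|c} 1 & 0 & 2 \\ 0 & 1 & 3 \end{array}\right], \qquad B = \left[\begin{array}{cc} 4 & 5 \\ 6 & 7 \\ \hline 1 & 0 \end{array}\right],</math>
the blockwise product consists of the single block
:<math>C_{11} = A_{11}B_{11} + A_{12}B_{21} = \begin{bmatrix} 4 & 5 \\ 6 & 7 \end{bmatrix} + \begin{bmatrix} 2 \\ 3 \end{bmatrix}\begin{bmatrix} 1 & 0 \end{bmatrix} = \begin{bmatrix} 6 & 5 \\ 9 & 7 \end{bmatrix},</math>
which agrees with the ordinary product <math>AB</math> computed entrywise.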
===Inversion{{anchor|Inversion}}===
{{for|more details and derivation using block LDU decomposition|Schur complement}}
{{see also|Helmert–Wolf blocking}}
If a matrix is partitioned into four blocks, it can be [[invertible matrix#Blockwise inversion|inverted blockwise]] as follows:
:<math>P = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix}
A^{-1} + A^{-1}B\left(D - CA^{-1}B\right)^{-1}CA^{-1} & -A^{-1}B\left(D - CA^{-1}B\right)^{-1} \\
-\left(D - CA^{-1}B\right)^{-1}CA^{-1} & \left(D - CA^{-1}B\right)^{-1}
\end{bmatrix},</math>
where '''A''' and '''D''' are square blocks of arbitrary size, and '''B''' and '''C''' are [[conformable matrix|conformable]] with them for partitioning. Furthermore, '''A''' and the Schur complement of '''A''' in '''P''': {{nowrap|'''P'''/'''A''' {{=}} '''D''' − '''CA'''{{sup|−1}}'''B'''}} must be invertible.<ref>{{cite book |last=Bernstein |first=Dennis |title=Matrix Mathematics |publisher=Princeton University Press |year=2005 |pages=44 |isbn=0-691-11802-7}}</ref>

Equivalently, by permuting the blocks:
:<math>P = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix}
\left(A - BD^{-1}C\right)^{-1} & -\left(A - BD^{-1}C\right)^{-1}BD^{-1} \\
-D^{-1}C\left(A - BD^{-1}C\right)^{-1} & \quad D^{-1} + D^{-1}C\left(A - BD^{-1}C\right)^{-1}BD^{-1}
\end{bmatrix}.</math><ref name=":0" />
Here, '''D''' and the Schur complement of '''D''' in '''P''': {{nowrap|'''P'''/'''D''' {{=}} '''A''' − '''BD'''{{sup|−1}}'''C'''}} must be invertible.

If '''A''' and '''D''' are both invertible, then:
:<math>\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix}
\left(A - BD^{-1}C\right)^{-1} & 0 \\
0 & \left(D - CA^{-1}B\right)^{-1}
\end{bmatrix} \begin{bmatrix} I & -BD^{-1} \\ -CA^{-1} & I \end{bmatrix}.</math>
By the [[Weinstein–Aronszajn identity]], one of the two matrices in the block-diagonal matrix is invertible exactly when the other is.
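As a sanity check with <math>1 \times 1</math> blocks, take
:<math>P = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}, \qquad P/A = D - CA^{-1}B = 2 - 1 \cdot 1 \cdot 1 = 1.</math>
The first blockwise formula then gives
:<math>P^{-1} = \begin{bmatrix} 1 + 1 \cdot 1 \cdot 1 \cdot 1 & -1 \cdot 1 \cdot 1 \\ -1 \cdot 1 \cdot 1 & 1 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix},</math>
and one verifies directly that <math>PP^{-1} = I</math>.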
====Computing submatrix inverses from the full inverse====
By the symmetry between a matrix and its inverse in the block inversion formula, if a matrix '''P''' and its inverse '''P'''<sup>−1</sup> are partitioned conformally:
:<math>P = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \quad P^{-1} = \begin{bmatrix} E & F \\ G & H \end{bmatrix},</math>
then the inverse of any principal submatrix can be computed from the corresponding blocks of '''P'''<sup>−1</sup>:
:<math>A^{-1} = E - FH^{-1}G</math>
:<math>D^{-1} = H - GE^{-1}F</math>
This relationship follows from recognizing that <math>E^{-1} = A - BD^{-1}C</math> (the Schur complement of '''D''' in '''P'''), and applying the same block inversion formula with the roles of '''P''' and '''P'''<sup>−1</sup> reversed.<ref>{{cite web |title=Is this formula for a matrix block inverse in terms of the entire matrix inverse known? |url=https://mathoverflow.net/questions/495299/is-this-formula-for-a-matrix-block-inverse-in-terms-of-the-entire-matrix-inverse |website=MathOverflow}}</ref><ref>{{cite journal |last1=Escalante-B. |first1=Alberto N. |last2=Wiskott |first2=Laurenz |title=Improved graph-based SFA: Information preservation complements the slowness principle |journal=Machine Learning |year=2016 |doi=10.1007/s10994-016-5563-y |url=https://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000200251}}</ref>
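Continuing the <math>1 \times 1</math>-block example above, with <math>P = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}</math> and <math>P^{-1} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}</math>, these formulas give <math>A^{-1} = 2 - (-1)(1)^{-1}(-1) = 1</math> and <math>D^{-1} = 1 - (-1)(2)^{-1}(-1) = \tfrac{1}{2}</math>, matching <math>A = 1</math> and <math>D = 2</math>.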
===Determinant{{anchor|Determinant}}===
The formula for the determinant of a <math>2 \times 2</math> matrix above continues to hold, under appropriate further assumptions, for a matrix composed of four submatrices <math>A, B, C, D</math> with <math>A</math> and <math>D</math> square. The easiest such formula, which can be proven using either the [[Leibniz formula for determinants|Leibniz formula]] or a factorization involving the [[Schur complement]], is
:<math>\det\begin{bmatrix} A & 0 \\ C & D \end{bmatrix} = \det(A) \det(D) = \det\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}.</math><ref name=":0" />
Using this formula, we can derive that the [[characteristic polynomial]]s of <math>\begin{bmatrix} A & 0 \\ C & D \end{bmatrix}</math> and <math>\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}</math> are the same and equal to the product of the characteristic polynomials of <math>A</math> and <math>D</math>. Furthermore, if <math>\begin{bmatrix} A & 0 \\ C & D \end{bmatrix}</math> or <math>\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}</math> is [[diagonalizable]], then <math>A</math> and <math>D</math> are diagonalizable too. The converse is false; simply check <math>\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}</math>.

If <math>A</math> is [[Invertible matrix|invertible]], one has
:<math>\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A) \det\left(D - C A^{-1} B\right),</math><ref name=":0" />
and if <math>D</math> is invertible, one has
:<math>\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(D) \det\left(A - B D^{-1} C\right).</math><ref>Taboga, Marco (2021). "Determinant of a block matrix", Lectures on matrix algebra.</ref><ref name=":0" />

If the blocks are square matrices of the ''same'' size, further formulas hold. For example, if <math>C</math> and <math>D</math> [[commutativity|commute]] (i.e., <math>CD = DC</math>), then
:<math>\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD - BC).</math><ref>{{Cite journal |first=J. R. |last=Silvester |title=Determinants of Block Matrices |journal=Math. Gaz. |volume=84 |issue=501 |year=2000 |pages=460–467 |jstor=3620776 |url=http://www.ee.iisc.ernet.in/new/people/faculty/prasantg/downloads/blocks.pdf |doi=10.2307/3620776 |access-date=2021-06-25 |archive-date=2015-03-18 |archive-url=https://web.archive.org/web/20150318222335/http://www.ee.iisc.ernet.in/new/people/faculty/prasantg/downloads/blocks.pdf |url-status=dead}}</ref>
Similar statements hold when <math>AB = BA</math>, <math>AC = CA</math>, or {{tmath|1=BD=DB}}. Namely, if <math>AC = CA</math>, then
:<math>\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD - CB).</math>
Note the change in the order of <math>C</math> and <math>B</math> (we have <math>CB</math> instead of <math>BC</math>). Similarly, if <math>BD = DB</math>, then <math>AD</math> should be replaced with <math>DA</math> (i.e., we get <math>\det(DA - BC)</math>), and if <math>AB = BA</math>, then we should have <math>\det(DA - CB)</math>. Note that the last two results require commutativity of the underlying ring, but the first two do not. This formula has been generalized to matrices composed of more than <math>2 \times 2</math> blocks, again under appropriate commutativity conditions among the individual blocks.<ref>{{cite journal |last1=Sothanaphan |first1=Nat |title=Determinants of block matrices with noncommuting blocks |journal=Linear Algebra and Its Applications |date=January 2017 |volume=512 |pages=202–218 |doi=10.1016/j.laa.2016.10.004 |arxiv=1805.06027 |s2cid=119272194}}</ref>

For <math>A = D</math> and <math>B = C</math>, the following formula holds (even if <math>A</math> and <math>B</math> do not commute):
:<math>\det\begin{bmatrix} A & B \\ B & A \end{bmatrix} = \det(A - B) \det(A + B).</math><ref name=":0" />
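For instance, with the non-commuting <math>2 \times 2</math> blocks <math>A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}</math> and <math>B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}</math>, one finds <math>\det(A - B) = 1</math> and <math>\det(A + B) = 3</math>, and a direct expansion of the <math>4 \times 4</math> determinant confirms
:<math>\det\begin{bmatrix} A & B \\ B & A \end{bmatrix} = 3 = \det(A - B)\det(A + B).</math>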