== Computation ==

There are various methods for calculating the Cholesky decomposition. The computational complexity of commonly used algorithms is {{math|''O''(''n''<sup>3</sup>)}} in general.{{Citation needed|date=June 2011}} The algorithms described below all involve about {{math|(1/3)''n''<sup>3</sup>}} [[FLOP]]s ({{math|''n''<sup>3</sup>/6}} multiplications and the same number of additions) for real flavors and {{math|(4/3)''n''<sup>3</sup>}} [[FLOP]]s for complex flavors,<ref>?potrf Intel® Math Kernel Library [https://software.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-c/top/lapack-routines/lapack-linear-equation-routines/lapack-linear-equation-computational-routines/matrix-factorization-lapack-computational-routines/potrf.html#potrf]</ref> where {{mvar|n}} is the size of the matrix {{math|'''A'''}}. Hence, they have half the cost of the [[LU decomposition]], which uses {{math|2''n''<sup>3</sup>/3}} FLOPs (see Trefethen and Bau 1997).

Which of the algorithms below is faster depends on the details of the implementation. Generally, the first algorithm will be slightly slower because it accesses the data in a less regular manner. The Cholesky decomposition was shown to be numerically stable without need for pivoting.<ref>{{cite journal | last1 = Turing | first1 = A. M. | journal = Quart. J. Mech. Appl. Math. | pages = 287–308 | title = Rounding-off errors in matrix processes | volume = 1 | year = 1948 | doi = 10.1093/qjmam/1.1.287 }}</ref>

=== The Cholesky algorithm ===

The '''Cholesky algorithm''', used to calculate the decomposition matrix {{math|'''L'''}}, is a modified version of [[Gaussian elimination]].

The recursive algorithm starts with {{math|1=''i'' := 1}} and

:{{math|1='''A'''<sup>(1)</sup> := '''A'''}}.

At step {{mvar|i}}, the matrix {{math|'''A'''<sup>(''i'')</sup>}} has the following form:
<math display=block>\mathbf{A}^{(i)}=
\begin{pmatrix}
\mathbf{I}_{i-1} & 0 & 0 \\
0 & a_{i,i} & \mathbf{b}_{i}^{*} \\
0 & \mathbf{b}_{i} & \mathbf{B}^{(i)}
\end{pmatrix},
</math>
where {{math|'''I'''<sub>''i''−1</sub>}} denotes the [[identity matrix]] of dimension {{math|''i'' − 1}}.

If the matrix {{math|'''L'''<sub>''i''</sub>}} is defined by
<math display=block>\mathbf{L}_{i}:=
\begin{pmatrix}
\mathbf{I}_{i-1} & 0 & 0 \\
0 & \sqrt{a_{i,i}} & 0 \\
0 & \frac{1}{\sqrt{a_{i,i}}} \mathbf{b}_{i} & \mathbf{I}_{n-i}
\end{pmatrix},
</math>
(note that {{math|''a''<sub>''i,i''</sub> > 0}} since {{math|'''A'''<sup>(''i'')</sup>}} is positive definite), then {{math|'''A'''<sup>(''i'')</sup>}} can be written as
<math display=block>\mathbf{A}^{(i)} = \mathbf{L}_{i} \mathbf{A}^{(i+1)} \mathbf{L}_{i}^{*},</math>
where
<math display=block>\mathbf{A}^{(i+1)}=
\begin{pmatrix}
\mathbf{I}_{i-1} & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & \mathbf{B}^{(i)} - \frac{1}{a_{i,i}} \mathbf{b}_{i} \mathbf{b}_{i}^{*}
\end{pmatrix}.</math>
Note that {{math|'''b'''<sub>''i''</sub> '''b'''<sub>''i''</sub>*}} is an [[outer product]]; therefore this algorithm is called the ''outer-product version'' in (Golub & Van Loan).

This is repeated for {{mvar|i}} from 1 to {{mvar|n}}. After {{mvar|n}} steps, {{math|1='''A'''<sup>(''n''+1)</sup> = '''I'''}} is obtained, and hence the lower triangular matrix {{mvar|L}} sought for is calculated as
<math display=block>\mathbf{L} := \mathbf{L}_{1} \mathbf{L}_{2} \dots \mathbf{L}_{n}.</math>
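The outer-product version maps directly to code: at each step the current column is divided by the square root of the pivot, and the outer product {{math|'''b'''<sub>''i''</sub> '''b'''<sub>''i''</sub>*/''a''<sub>''i,i''</sub>}} is subtracted from the trailing submatrix. The following in-place C routine is a minimal sketch of this scheme for the real case; the function name and the row-major array layout are choices of this illustration, not taken from the cited references.

<syntaxhighlight lang="C">
#include <math.h>

/* Minimal sketch of the outer-product (Cholesky algorithm) version:
   overwrites the lower triangle of the n-by-n row-major matrix a with L.
   Returns 0 on success, -1 if a non-positive pivot is encountered. */
int cholesky_outer_product(double *a, int n) {
    for (int i = 0; i < n; i++) {
        double pivot = a[i * n + i];
        if (pivot <= 0.0) return -1;        /* not positive definite */
        double d = sqrt(pivot);
        a[i * n + i] = d;
        for (int j = i + 1; j < n; j++)
            a[j * n + i] /= d;              /* scale column i: b_i / sqrt(a_ii) */
        for (int j = i + 1; j < n; j++)     /* trailing update: B - b_i b_i^T / a_ii */
            for (int k = i + 1; k <= j; k++)
                a[j * n + k] -= a[j * n + i] * a[k * n + i];
    }
    return 0;
}
</syntaxhighlight>

Since only the lower triangle of the trailing submatrix is updated, step {{mvar|i}} costs roughly {{math|(''n'' − ''i'')<sup>2</sup>/2}} multiplications, which sums to the {{math|''n''<sup>3</sup>/6}} multiplications quoted above.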
=== The Cholesky–Banachiewicz and Cholesky–Crout algorithms ===

[[File:Chol.gif|thumb|Access pattern (white) and writing pattern (yellow) for the in-place Cholesky–Banachiewicz algorithm on a 5×5 matrix]]

If the equation
<math display=block>\begin{align}
\mathbf{A} = \mathbf{LL}^T & =
\begin{pmatrix}
L_{11} & 0 & 0 \\
L_{21} & L_{22} & 0 \\
L_{31} & L_{32} & L_{33}\\
\end{pmatrix}
\begin{pmatrix}
L_{11} & L_{21} & L_{31} \\
0 & L_{22} & L_{32} \\
0 & 0 & L_{33}
\end{pmatrix} \\[8pt]
& = \begin{pmatrix}
L_{11}^2 & &(\text{symmetric}) \\
L_{21}L_{11} & L_{21}^2 + L_{22}^2 & \\
L_{31}L_{11} & L_{31}L_{21}+L_{32}L_{22} & L_{31}^2 + L_{32}^2+L_{33}^2
\end{pmatrix},
\end{align}</math>
is written out, the following is obtained:
<math display=block>\begin{align}
\mathbf{L} = \begin{pmatrix}
\sqrt{A_{11}} & 0 & 0 \\
A_{21}/L_{11} & \sqrt{A_{22} - L_{21}^2} & 0 \\
A_{31}/L_{11} & \left( A_{32} - L_{31}L_{21} \right)/L_{22} & \sqrt{A_{33} - L_{31}^2 - L_{32}^2}
\end{pmatrix}
\end{align}</math>
and therefore the following formulas for the entries of {{math|'''L'''}}:
<math display=block> L_{j,j} = (\pm)\sqrt{ A_{j,j} - \sum_{k=1}^{j-1} L_{j,k}^2 }, </math>
<math display=block> L_{i,j} = \frac{1}{L_{j,j}} \left( A_{i,j} - \sum_{k=1}^{j-1} L_{i,k} L_{j,k} \right) \quad \text{for } i>j. </math>
For real and complex matrices, arbitrary, inconsequential sign changes of the diagonal and associated off-diagonal elements are allowed. The expression under the [[square root]] is always positive if {{math|'''A'''}} is real and positive-definite.

For a complex Hermitian matrix, the following formulas apply:
<math display=block> L_{j,j} = \sqrt{ A_{j,j} - \sum_{k=1}^{j-1} L_{j,k}^*L_{j,k} }, </math>
<math display=block> L_{i,j} = \frac{1}{L_{j,j}} \left( A_{i,j} - \sum_{k=1}^{j-1} L_{j,k}^* L_{i,k} \right) \quad \text{for } i>j. </math>
It is thus possible to compute the {{math|(''i'', ''j'')}} entry once the entries to the left and above are known. The computation is usually arranged in either of the following orders:

* The '''Cholesky–Banachiewicz algorithm''' starts from the upper left corner of the matrix {{mvar|L}} and proceeds to calculate the matrix row by row.

<syntaxhighlight lang="C">
for (i = 0; i < dimensionSize; i++) {
    for (j = 0; j <= i; j++) {
        double sum = 0;
        for (k = 0; k < j; k++)
            sum += L[i][k] * L[j][k];

        if (i == j)
            L[i][j] = sqrt(A[i][i] - sum);
        else
            L[i][j] = (A[i][j] - sum) / L[j][j];
    }
}
</syntaxhighlight>

In vectorized programming languages such as [[Fortran]], the above algorithm can be expressed succinctly by combining a [[dot product]] and a [[matrix multiplication]]:

<syntaxhighlight lang="Fortran">
do i = 1, size(A,1)
    L(i,i) = sqrt(A(i,i) - dot_product(L(i,1:i-1), L(i,1:i-1)))
    L(i+1:,i) = (A(i+1:,i) - matmul(L(i+1:,1:i-1), conjg(L(i,1:i-1)))) / L(i,i)
end do
</syntaxhighlight>

where <code>conjg</code> refers to the complex conjugate of the elements.

* The '''Cholesky–Crout algorithm''' starts from the upper left corner of the matrix {{mvar|L}} and proceeds to calculate the matrix column by column.
<syntaxhighlight lang="C"> for (j = 0; j < dimensionSize; j++) { float sum = 0; for (k = 0; k < j; k++) { sum += L[j][k] * L[j][k]; } L[j][j] = sqrt(A[j][j] - sum); for (i = j + 1; i < dimensionSize; i++) { sum = 0; for (k = 0; k < j; k++) { sum += L[i][k] * L[j][k]; } L[i][j] = (1.0 / L[j][j] * (A[i][j] - sum)); } } </syntaxhighlight> The above algorithm can be succinctly expressed as combining a [[dot product]] and [[matrix multiplication]] in vectorized programming languages such as [[Fortran]] as the following, <syntaxhighlight lang="Fortran"> do i = 1, size(A,1) L(i,i) = sqrt(A(i,i) - dot_product(L(1:i-1,i), L(1:i-1,i))) L(i,i+1:) = (A(i,i+1:) - matmul(conjg(L(1:i-1,i)), L(1:i-1,i+1:))) / L(i,i) end do </syntaxhighlight> where <code>conjg</code> refers to complex conjugate of the elements. Either pattern of access allows the entire computation to be performed in-place if desired. === Stability of the computation === Suppose that there is a desire to solve a [[condition number|well-conditioned]] system of linear equations. If the LU decomposition is used, then the algorithm is unstable unless some sort of pivoting strategy is used. In the latter case, the error depends on the so-called growth factor of the matrix, which is usually (but not always) small. Now, suppose that the Cholesky decomposition is applicable. As mentioned above, the algorithm will be twice as fast. Furthermore, no [[Pivot element|pivoting]] is necessary, and the error will always be small. Specifically, if {{math|1='''Ax''' = '''b'''}}, and {{math|'''y'''}} denotes the computed solution, then {{math|'''y'''}} solves the perturbed system ({{math|1='''A''' + '''E''')'''y''' = '''b'''}}, where <math display=block> \|\mathbf{E}\|_2 \le c_n \varepsilon \|\mathbf{A}\|_2. </math> Here ||·||<sub>2</sub> is the [[matrix norm|matrix 2-norm]], ''c<sub>n</sub>'' is a small constant depending on {{mvar|n}}, and {{mvar|ε}} denotes the [[unit round-off]]. One concern with the Cholesky decomposition to be aware of is the use of square roots. If the matrix being factorized is positive definite as required, the numbers under the square roots are always positive ''in exact arithmetic''. Unfortunately, the numbers can become negative because of [[round-off error]]s, in which case the algorithm cannot continue. However, this can only happen if the matrix is very ill-conditioned. One way to address this is to add a diagonal correction matrix to the matrix being decomposed in an attempt to promote the positive-definiteness.<ref>{{cite journal | last1 = Fang | first1 = Haw-ren | last2 = O'Leary | first2 = Dianne P. | author2-link = Dianne P. O'Leary | doi = 10.1007/s10107-007-0177-6 | issue = 2 | journal = Mathematical Programming | mr = 2411401 | pages = 319–349 | title = Modified Cholesky algorithms: a catalog with new approaches | url = https://www.cs.umd.edu/~oleary/tr/tr4807.pdf | volume = 115 | year = 2008| hdl = 1903/3674 }}</ref> While this might lessen the accuracy of the decomposition, it can be very favorable for other reasons; for example, when performing [[Newton's method in optimization]], adding a diagonal matrix can improve stability when far from the optimum. === LDL decomposition === An alternative form, eliminating the need to take square roots when {{math|'''A'''}} is symmetric, is the symmetric indefinite factorization<ref>{{cite book |first=D. 
=== LDL decomposition ===

An alternative form, eliminating the need to take square roots when {{math|'''A'''}} is symmetric, is the symmetric indefinite factorization<ref>{{cite book |first=D. |last=Watkins |year=1991 |title=Fundamentals of Matrix Computations |url=https://archive.org/details/fundamentalsofma0000watk |url-access=registration |location=New York |publisher=Wiley |page=[https://archive.org/details/fundamentalsofma0000watk/page/84 84] |isbn=0-471-61414-9 }}</ref>
<math display=block>
\begin{align}
\mathbf{A} = \mathbf{LDL}^\mathrm{T} & =
\begin{pmatrix}
1 & 0 & 0 \\
L_{21} & 1 & 0 \\
L_{31} & L_{32} & 1\\
\end{pmatrix}
\begin{pmatrix}
D_1 & 0 & 0 \\
0 & D_2 & 0 \\
0 & 0 & D_3\\
\end{pmatrix}
\begin{pmatrix}
1 & L_{21} & L_{31} \\
0 & 1 & L_{32} \\
0 & 0 & 1\\
\end{pmatrix} \\[8pt]
& = \begin{pmatrix}
D_1 & &(\mathrm{symmetric}) \\
L_{21}D_1 & L_{21}^2D_1 + D_2 & \\
L_{31}D_1 & L_{31}L_{21}D_{1}+L_{32}D_2 & L_{31}^2D_1 + L_{32}^2D_2+D_3
\end{pmatrix}.
\end{align}
</math>

The following recursive relations apply for the entries of {{math|'''D'''}} and {{math|'''L'''}}:
<math display=block> D_j = A_{jj} - \sum_{k=1}^{j-1} L_{jk}^2 D_k, </math>
<math display=block> L_{ij} = \frac{1}{D_j} \left( A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk} D_k \right) \quad \text{for } i>j. </math>
This works as long as the generated diagonal elements in {{math|'''D'''}} stay non-zero. The decomposition is then unique. {{math|'''D'''}} and {{math|'''L'''}} are real if {{math|'''A'''}} is real.

For a complex Hermitian matrix {{math|'''A'''}}, the following formulas apply:
<math display=block> D_{j} = A_{jj} - \sum_{k=1}^{j-1} L_{jk}L_{jk}^* D_k, </math>
<math display=block> L_{ij} = \frac{1}{D_j} \left( A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk}^* D_k \right) \quad \text{for } i>j. </math>
Again, the pattern of access allows the entire computation to be performed in-place if desired.
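The recursive relations for the real symmetric case translate directly into code. The following C routine is a minimal sketch; the function name and the row-major array layout are assumptions of this illustration.

<syntaxhighlight lang="C">
#include <stddef.h>

/* Minimal LDL^T sketch for a real symmetric n-by-n matrix a (row-major):
   fills the strictly lower triangle of l (the unit diagonal is implicit)
   and the diagonal d, following the recurrences above.
   Returns 0 on success, -1 if a zero pivot appears. */
int ldlt(const double *a, double *l, double *d, int n) {
    for (int j = 0; j < n; j++) {
        double dj = a[j * n + j];
        for (int k = 0; k < j; k++)
            dj -= l[j * n + k] * l[j * n + k] * d[k];   /* D_j = A_jj - sum L_jk^2 D_k */
        if (dj == 0.0) return -1;                       /* factorization breaks down */
        d[j] = dj;
        for (int i = j + 1; i < n; i++) {
            double s = a[i * n + j];
            for (int k = 0; k < j; k++)
                s -= l[i * n + k] * l[j * n + k] * d[k];
            l[i * n + j] = s / dj;                      /* L_ij */
        }
    }
    return 0;
}
</syntaxhighlight>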
=== Block variant ===

When used on indefinite matrices, the '''LDL'''* factorization is known to be unstable without careful pivoting;<ref>{{cite book |last=Nocedal |first=Jorge |title=Numerical Optimization |year=2000 |publisher=Springer }}</ref> specifically, the elements of the factorization can grow arbitrarily. A possible improvement is to perform the factorization on block sub-matrices, commonly 2 × 2:<ref>{{cite journal | last = Fang | first = Haw-Ren | doi = 10.1093/imanum/drp053 | issue = 2 | journal = IMA Journal of Numerical Analysis | mr = 2813183 | pages = 528–555 | title = Stability analysis of block <math>LDL^T</math> factorization for symmetric indefinite matrices | volume = 31 | year = 2011 }}</ref>
<math display=block>\begin{align}
\mathbf{A} = \mathbf{LDL}^\mathrm{T} & =
\begin{pmatrix}
\mathbf I & 0 & 0 \\
\mathbf L_{21} & \mathbf I & 0 \\
\mathbf L_{31} & \mathbf L_{32} & \mathbf I\\
\end{pmatrix}
\begin{pmatrix}
\mathbf D_1 & 0 & 0 \\
0 & \mathbf D_2 & 0 \\
0 & 0 & \mathbf D_3\\
\end{pmatrix}
\begin{pmatrix}
\mathbf I & \mathbf L_{21}^\mathrm T & \mathbf L_{31}^\mathrm T \\
0 & \mathbf I & \mathbf L_{32}^\mathrm T \\
0 & 0 & \mathbf I\\
\end{pmatrix} \\[8pt]
& = \begin{pmatrix}
\mathbf D_1 & &(\mathrm{symmetric}) \\
\mathbf L_{21} \mathbf D_1 & \mathbf L_{21} \mathbf D_1 \mathbf L_{21}^\mathrm T + \mathbf D_2 & \\
\mathbf L_{31} \mathbf D_1 & \mathbf L_{31} \mathbf D_{1} \mathbf L_{21}^\mathrm T + \mathbf L_{32} \mathbf D_2 & \mathbf L_{31} \mathbf D_1 \mathbf L_{31}^\mathrm T + \mathbf L_{32} \mathbf D_2 \mathbf L_{32}^\mathrm T + \mathbf D_3
\end{pmatrix},
\end{align}
</math>
where every element in the matrices above is a square submatrix. From this, these analogous recursive relations follow:
<math display=block>\mathbf D_j = \mathbf A_{jj} - \sum_{k=1}^{j-1} \mathbf L_{jk} \mathbf D_k \mathbf L_{jk}^\mathrm T,</math>
<math display=block>\mathbf L_{ij} = \left(\mathbf A_{ij} - \sum_{k=1}^{j-1} \mathbf L_{ik} \mathbf D_k \mathbf L_{jk}^\mathrm T\right) \mathbf D_j^{-1}.</math>
This involves matrix products and explicit inversion, thus limiting the practical block size.

=== Updating the decomposition ===

A task that often arises in practice is that one needs to update a Cholesky decomposition. In more detail: one has already computed the Cholesky decomposition <math display=inline>\mathbf{A} = \mathbf{L}\mathbf{L}^*</math> of some matrix <math display=inline>\mathbf{A}</math>, then one changes the matrix <math display=inline>\mathbf{A}</math> in some way into another matrix, say <math display=inline> \tilde{\mathbf{A}} </math>, and one wants to compute the Cholesky decomposition of the updated matrix: <math display=inline> \tilde{\mathbf{A}} = \tilde{\mathbf{L}} \tilde{\mathbf{L}}^* </math>. The question is now whether the Cholesky decomposition of <math display=inline>\mathbf{A}</math> that was computed before can be used to compute the Cholesky decomposition of <math display=inline> \tilde{\mathbf{A}} </math>.

==== Rank-one update ====

The specific case, where the updated matrix <math display=inline> \tilde{\mathbf{A}} </math> is related to the matrix <math display=inline>\mathbf{A}</math> by <math display=inline> \tilde{\mathbf{A}} = \mathbf{A} + \mathbf{x} \mathbf{x}^* </math>, is known as a ''rank-one update''. Here is a function<ref>Based on: {{cite book |last=Stewart |first=G. W. |title=Basic decompositions |year=1998 |publisher=Soc. for Industrial and Applied Mathematics |location=Philadelphia |isbn=0-89871-414-1 }}</ref> written in [[Matlab]] syntax that realizes a rank-one update:
<syntaxhighlight lang="matlab">
function [L] = cholupdate(L, x)
    n = length(x);
    for k = 1:n
        r = sqrt(L(k, k)^2 + x(k)^2);
        c = r / L(k, k);
        s = x(k) / L(k, k);
        L(k, k) = r;
        if k < n
            L((k+1):n, k) = (L((k+1):n, k) + s * x((k+1):n)) / c;
            x((k+1):n) = c * x((k+1):n) - s * L((k+1):n, k);
        end
    end
end
</syntaxhighlight>

A ''rank-n update'' is one where, for a matrix <math display=inline>\mathbf{M}</math>, one updates the decomposition such that <math display=inline> \tilde{\mathbf{A}} = \mathbf{A} + \mathbf{M} \mathbf{M}^* </math>. This can be achieved by successively performing rank-one updates for each of the columns of <math display=inline>\mathbf{M}</math>.

==== Rank-one downdate ====

A ''rank-one downdate'' is similar to a rank-one update, except that the addition is replaced by subtraction: <math display=inline> \tilde{\mathbf{A}} = \mathbf{A} - \mathbf{x} \mathbf{x}^* </math>. This only works if the new matrix <math display=inline> \tilde{\mathbf{A}} </math> is still positive definite. The code for the rank-one update shown above can easily be adapted to do a rank-one downdate: one merely needs to replace the two additions in the assignment to <code>r</code> and <code>L((k+1):n, k)</code> by subtractions.
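Since the downdate differs from the update only in those two signs, both can be folded into a single routine. The following C translation of the Matlab function above takes a sign argument ({{math|+1}} to update, {{math|−1}} to downdate); it is a minimal sketch, with names and array layout chosen for this illustration.

<syntaxhighlight lang="C">
#include <math.h>

/* Rank-one update/downdate of a lower-triangular Cholesky factor l
   (n-by-n, row-major): after the call, l is the factor of
   A + sign*x*x^T, where sign is +1.0 (update) or -1.0 (downdate).
   x is overwritten. Returns -1 if a downdate would destroy
   positive definiteness. */
int chol_update(double *l, double *x, int n, double sign) {
    for (int k = 0; k < n; k++) {
        double lkk = l[k * n + k];
        double r2 = lkk * lkk + sign * x[k] * x[k];
        if (r2 <= 0.0) return -1;            /* downdate failed */
        double r = sqrt(r2);
        double c = r / lkk;
        double s = x[k] / lkk;
        l[k * n + k] = r;
        for (int i = k + 1; i < n; i++) {
            l[i * n + k] = (l[i * n + k] + sign * s * x[i]) / c;
            x[i] = c * x[i] - s * l[i * n + k];
        }
    }
    return 0;
}
</syntaxhighlight>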
==== Adding and removing rows and columns ====

If a symmetric and positive definite matrix <math display=inline> \mathbf A </math> is represented in block form as
<math display=block> \mathbf{A} = \begin{pmatrix} \mathbf A_{11} & \mathbf A_{13} \\ \mathbf A_{13}^{\mathrm{T}} & \mathbf A_{33} \\ \end{pmatrix} </math>
with upper Cholesky factor
<math display=block> \mathbf{L} = \begin{pmatrix} \mathbf L_{11} & \mathbf L_{13} \\ 0 & \mathbf L_{33} \\ \end{pmatrix}, </math>
then for a new matrix <math display=inline> \tilde{\mathbf{A}} </math>, which is the same as <math display=inline> \mathbf A </math> but with new rows and columns inserted,
<math display=block>\begin{align}
\tilde{\mathbf{A}} &=
\begin{pmatrix}
\mathbf A_{11} & \mathbf A_{12} & \mathbf A_{13} \\
\mathbf A_{12}^{\mathrm{T}} & \mathbf A_{22} & \mathbf A_{23} \\
\mathbf A_{13}^{\mathrm{T}} & \mathbf A_{23}^{\mathrm{T}} & \mathbf A_{33} \\
\end{pmatrix},
\end{align}</math>
one is interested in finding the Cholesky factorization of <math display=inline> \tilde{\mathbf{A}} </math>, call it <math display=inline> \tilde{\mathbf S} </math>, without directly computing the entire decomposition:
<math display=block>\begin{align}
\tilde{\mathbf{S}} &=
\begin{pmatrix}
\mathbf S_{11} & \mathbf S_{12} & \mathbf S_{13} \\
0 & \mathbf S_{22} & \mathbf S_{23} \\
0 & 0 & \mathbf S_{33} \\
\end{pmatrix}.
\end{align}</math>

Writing <math display=inline> \mathbf A \setminus \mathbf{b}</math> for the solution of <math display=inline> \mathbf A \mathbf x = \mathbf b</math>, which can be found easily for triangular matrices, and <math display=inline> \text{chol}(\mathbf M)</math> for the Cholesky decomposition of <math display=inline> \mathbf M </math>, the following relations can be found:
<math display=block>\begin{align}
\mathbf S_{11} &= \mathbf L_{11}, \\
\mathbf S_{12} &= \mathbf L_{11}^{\mathrm{T}} \setminus \mathbf A_{12}, \\
\mathbf S_{13} &= \mathbf L_{13}, \\
\mathbf S_{22} &= \mathrm{chol} \left(\mathbf A_{22} - \mathbf S_{12}^{\mathrm{T}} \mathbf S_{12}\right), \\
\mathbf S_{23} &= \mathbf S_{22}^{\mathrm{T}} \setminus \left(\mathbf A_{23} - \mathbf S_{12}^{\mathrm{T}} \mathbf S_{13}\right), \\
\mathbf S_{33} &= \mathrm{chol} \left(\mathbf L_{33}^{\mathrm{T}} \mathbf L_{33} - \mathbf S_{23}^{\mathrm{T}} \mathbf S_{23}\right).
\end{align}</math>
These formulas may be used to determine the Cholesky factor after the insertion of rows or columns in any position, if the row and column dimensions are appropriately set (including to zero).
The inverse problem, in which the matrix
<math display=block>\begin{align}
\tilde{\mathbf{A}} &=
\begin{pmatrix}
\mathbf A_{11} & \mathbf A_{12} & \mathbf A_{13} \\
\mathbf A_{12}^{\mathrm{T}} & \mathbf A_{22} & \mathbf A_{23} \\
\mathbf A_{13}^{\mathrm{T}} & \mathbf A_{23}^{\mathrm{T}} & \mathbf A_{33} \\
\end{pmatrix}
\end{align}</math>
with known Cholesky decomposition
<math display=block>\begin{align}
\tilde{\mathbf{S}} &=
\begin{pmatrix}
\mathbf S_{11} & \mathbf S_{12} & \mathbf S_{13} \\
0 & \mathbf S_{22} & \mathbf S_{23} \\
0 & 0 & \mathbf S_{33} \\
\end{pmatrix}
\end{align}</math>
is given and one wants to determine the Cholesky factor
<math display=block>\begin{align}
\mathbf{L} &=
\begin{pmatrix}
\mathbf L_{11} & \mathbf L_{13} \\
0 & \mathbf L_{33} \\
\end{pmatrix}
\end{align}</math>
of the matrix <math display=inline> \mathbf A </math> with rows and columns removed,
<math display=block>\begin{align}
\mathbf{A} &=
\begin{pmatrix}
\mathbf A_{11} & \mathbf A_{13} \\
\mathbf A_{13}^{\mathrm{T}} & \mathbf A_{33} \\
\end{pmatrix},
\end{align}</math>
yields the following rules:
<math display=block>\begin{align}
\mathbf L_{11} &= \mathbf S_{11}, \\
\mathbf L_{13} &= \mathbf S_{13}, \\
\mathbf L_{33} &= \mathrm{chol} \left(\mathbf S_{33}^{\mathrm{T}} \mathbf S_{33} + \mathbf S_{23}^{\mathrm{T}} \mathbf S_{23}\right).
\end{align}</math>

Notice that the equations above that involve finding the Cholesky decomposition of a new matrix are all of the form <math display=inline> \tilde{\mathbf{A}} = \mathbf{A} \pm \mathbf{x} \mathbf{x}^* </math>, which allows them to be efficiently calculated using the update and downdate procedures detailed in the previous section.<ref>Osborne, M. (2010), Appendix B.</ref>
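For instance, the factor after a removal can be obtained by feeding the rows of <math display=inline>\mathbf S_{23}</math> as successive rank-one updates into the factor <math display=inline>\mathbf S_{33}</math>, since <math display=inline>\mathbf L_{33}^{\mathrm{T}} \mathbf L_{33} = \mathbf S_{33}^{\mathrm{T}} \mathbf S_{33} + \mathbf S_{23}^{\mathrm{T}} \mathbf S_{23}</math>. The C sketch below reuses the <code>chol_update</code> routine from the rank-one update section; it works on the transposed (lower-triangular) form of the factors, and all names are illustrative assumptions.

<syntaxhighlight lang="C">
#include <string.h>

int chol_update(double *l, double *x, int n, double sign);  /* see above */

/* Recompute the trailing factor after deleting the middle block:
   l33t holds S33^T (lower triangular, n-by-n, row-major) on entry and
   L33^T on exit; s23 is the m-by-n block S23; x is n doubles of scratch. */
int factor_after_removal(double *l33t, const double *s23, int m, int n, double *x) {
    for (int r = 0; r < m; r++) {
        memcpy(x, &s23[r * n], sizeof(double) * (size_t)n);  /* row r of S23 */
        if (chol_update(l33t, x, n, +1.0) != 0)
            return -1;   /* cannot occur for pure updates, kept for safety */
    }
    return 0;
}
</syntaxhighlight>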