Determinant
== Calculation ==
Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in [[numerical linear algebra]], where for applications such as checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques.<ref>"... we mention that the determinant, though a convenient notion theoretically, rarely finds a useful role in numerical algorithms.", see {{harvnb|Trefethen|Bau III|1997|loc=Lecture 1}}.</ref> [[Computational geometry]], however, does frequently use calculations related to determinants.<ref>{{harvnb|Fisikopoulos|Peñaranda|2016|loc=§1.1, §4.3}}</ref>

While the determinant can be computed directly using the Leibniz rule, this approach is extremely inefficient for large matrices, since that formula requires calculating <math>n!</math> (<math>n</math> [[factorial]]) products for an <math>n \times n</math> matrix. Thus, the number of required operations grows very quickly: it is [[Big O notation|of order]] <math>n!</math>. The Laplace expansion is similarly inefficient. Therefore, more involved techniques have been developed for calculating determinants.

=== Gaussian elimination ===
[[Gaussian elimination]] consists of left-multiplying a matrix by [[elementary matrices]] to obtain a matrix in [[row echelon form]]. One can restrict the computation to elementary matrices of determinant {{math|1}}; in this case, the determinant of the resulting row echelon form equals the determinant of the initial matrix. As a row echelon form is a [[triangular matrix]], its determinant is the product of the entries of its diagonal. So, the determinant can be computed at almost no extra cost from the result of a Gaussian elimination.

=== Decomposition methods ===
Some methods compute <math>\det(A)</math> by writing the matrix as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods.
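The Gaussian-elimination method described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from any reference: it uses row swaps (each of which flips the sign of the determinant) together with row additions (which have determinant {{math|1}}), then multiplies the diagonal entries of the resulting triangular matrix.

```python
def det_gauss(a):
    """Determinant via Gaussian elimination with partial pivoting.

    Row swaps flip the sign; the row-addition steps that eliminate
    entries below each pivot have determinant 1, so the determinant
    of the final upper triangular matrix, times the accumulated sign,
    equals the determinant of the original matrix.
    """
    a = [row[:] for row in a]          # work on a copy
    n = len(a)
    sign = 1.0
    for k in range(n):
        # Partial pivoting: pick the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0:
            return 0.0                 # singular matrix
        if p != k:
            a[k], a[p] = a[p], a[k]    # row swap flips the sign
            sign = -sign
        for i in range(k + 1, n):      # eliminate below the pivot
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    d = sign
    for k in range(n):                 # product of diagonal entries
        d *= a[k][k]
    return d
```

For example, `det_gauss([[1.0, 2.0], [3.0, 4.0]])` returns `-2.0`. Pivoting is not needed for correctness in exact arithmetic, but it keeps the floating-point computation numerically stable.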
Examples include the [[LU decomposition]], the [[QR decomposition]] or the [[Cholesky decomposition]] (for [[Positive definite matrix|positive definite matrices]]). These methods are of order <math>\operatorname O(n^3)</math>, which is a significant improvement over <math>\operatorname O (n!)</math>.<ref>{{cite arXiv|last=Camarero|first=Cristóbal|date=2018-12-05|title=Simple, Fast and Practicable Algorithms for Cholesky, LU and QR Decomposition Using Fast Rectangular Matrix Multiplication|class=cs.NA|eprint=1812.02056}}</ref>

For example, the LU decomposition expresses <math>A</math> as a product
:<math> A = PLU </math>
of a [[permutation matrix]] <math>P</math> (which has exactly a single <math>1</math> in each column, and otherwise zeros), a lower triangular matrix <math>L</math> and an upper triangular matrix <math>U</math>. The determinants of the two triangular matrices <math>L</math> and <math>U</math> can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of <math>P</math> is just the sign <math>\varepsilon</math> of the corresponding permutation (which is <math>+1</math> for an even permutation and <math>-1</math> for an odd permutation). Once such an LU decomposition is known for <math>A</math>, its determinant is readily computed as
:<math> \det(A) = \varepsilon \det(L)\cdot\det(U). </math>

=== Further methods ===
The order <math>\operatorname O(n^3)</math> reached by decomposition methods has been improved by different methods. If two matrices of order <math>n</math> can be multiplied in time <math>M(n)</math>, where <math>M(n) \ge n^a</math> for some <math>a>2</math>, then there is an algorithm computing the determinant in time <math>O(M(n))</math>.<ref>{{harvnb|Bunch|Hopcroft|1974}}</ref> This means, for example, that an <math>\operatorname O(n^{2.376})</math> algorithm for computing the determinant exists based on the [[Coppersmith–Winograd algorithm]].
This exponent has been further lowered, as of 2016, to 2.373.<ref>{{harvnb|Fisikopoulos|Peñaranda|2016|loc=§1.1}}</ref>

In addition to the complexity of the algorithm, further criteria can be used to compare algorithms. Especially for applications concerning matrices over rings, algorithms that compute the determinant without any divisions exist. (By contrast, Gaussian elimination requires divisions.) One such algorithm, having complexity <math>\operatorname O(n^4)</math>, is based on the following idea: one replaces permutations (as in the Leibniz rule) by so-called [[closed ordered walk]]s, in which several items can be repeated. The resulting sum has more terms than in the Leibniz rule, but in the process several of these products can be reused, making it more efficient than naively computing with the Leibniz rule.<ref>{{harvnb|Rote|2001}}</ref>

Algorithms can also be assessed according to their [[bit complexity]], i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation.
For example, the [[Gaussian elimination]] (or LU decomposition) method is of order <math>\operatorname O(n^3)</math>, but the bit length of intermediate values can become exponentially long.<ref>{{Cite conference | first1 = Xin Gui | last1 = Fang | first2 = George | last2 = Havas | title = On the worst-case complexity of integer Gaussian elimination | book-title = Proceedings of the 1997 international symposium on Symbolic and algebraic computation | conference = ISSAC '97 | pages = 28–31 | publisher = ACM | year = 1997 | location = Kihei, Maui, Hawaii, United States | url = http://perso.ens-lyon.fr/gilles.villard/BIBLIOGRAPHIE/PDF/ft_gateway.cfm.pdf | doi = 10.1145/258726.258740 | isbn = 0-89791-875-4 | access-date = 2011-01-22 | archive-url = https://web.archive.org/web/20110807042828/http://perso.ens-lyon.fr/gilles.villard/BIBLIOGRAPHIE/PDF/ft_gateway.cfm.pdf | archive-date = 2011-08-07 | url-status = dead }}</ref> By comparison, the [[Bareiss Algorithm]] is an exact-division method (it does use division, but only in cases where the division can be performed without remainder) of the same order, but its bit complexity is roughly the bit size of the original entries in the matrix times <math>n</math>.<ref>{{harvnb|Fisikopoulos|Peñaranda|2016|loc=§1.1}}, {{harvnb|Bareiss|1968}}</ref>

If the determinant of ''A'' and the inverse of ''A'' have already been computed, the [[matrix determinant lemma]] allows rapid calculation of the determinant of {{math|''A'' + ''uv''<sup>T</sup>}}, where ''u'' and ''v'' are column vectors.

Charles Dodgson (i.e. [[Lewis Carroll]] of ''[[Alice's Adventures in Wonderland]]'' fame) invented a method for computing determinants called [[Dodgson condensation]]. Unfortunately, this interesting method does not always work in its original form.<ref>{{Cite journal |last=Abeles |first=Francine F. |date=2008 |title=Dodgson condensation: The historical and mathematical development of an experimental method |url=https://www.academia.edu/10352246 |journal=Linear Algebra and Its Applications |language=en |volume=429 |issue=2–3 |pages=429–438 |doi=10.1016/j.laa.2007.11.022 |doi-access=free}}</ref>
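The exact-division idea behind the Bareiss algorithm can be illustrated with a short Python sketch (illustrative only, not taken from a reference implementation): it is a fraction-free variant of Gaussian elimination on integer matrices in which every division is guaranteed to be exact, so all intermediate values remain integers of modest bit length.

```python
def det_bareiss(a):
    """Determinant of an integer matrix via the Bareiss algorithm.

    Each elimination step divides by the previous pivot; that division
    is always exact (no remainder), so intermediate values stay
    integers and their bit length grows only about linearly with n.
    """
    a = [row[:] for row in a]           # work on a copy
    n = len(a)
    sign = 1
    prev = 1                            # previous pivot, starts at 1
    for k in range(n - 1):
        if a[k][k] == 0:                # find a nonzero pivot below
            for p in range(k + 1, n):
                if a[p][k] != 0:
                    a[k], a[p] = a[p], a[k]
                    sign = -sign        # row swap flips the sign
                    break
            else:
                return 0                # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # 2x2-minor update; the division by prev is exact
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return sign * a[n - 1][n - 1]
```

For example, `det_bareiss([[1, 2], [3, 4]])` returns the exact integer `-2`, with no rounding error at any step, in contrast to the floating-point divisions of ordinary Gaussian elimination.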