===Other methods===
{{Further|Numerical solution of linear systems}}
While systems of three or four equations can be readily solved by hand (see [[Cracovian]]), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as [[Pivot element|''pivoting'']]. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the [[LU decomposition]] of the matrix ''A''. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix ''A'' but different vectors '''b'''.

If the matrix ''A'' has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a [[symmetric matrix|symmetric]] [[positive-definite matrix|positive definite]] matrix can be solved twice as fast with the [[Cholesky decomposition]]. [[Levinson recursion]] is a fast method for [[Toeplitz matrix|Toeplitz matrices]]. Special methods also exist for matrices with many zero elements (so-called [[sparse matrix|sparse matrices]]), which appear often in applications.

A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, it is taken to be the solution to the system. This leads to the class of [[iterative method]]s.
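The reuse of a factorization described above can be sketched in code. The following is a minimal NumPy illustration of LU decomposition with partial pivoting, factoring ''A'' once and then solving for several right-hand sides; the helper names <code>lu_factor</code> and <code>lu_solve</code> are chosen for this sketch and the triangular solves are delegated to <code>numpy.linalg.solve</code> rather than hand-written substitution:

```python
import numpy as np

def lu_factor(A):
    """LU decomposition with partial pivoting: returns P, L, U with P @ A = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # Pivoting: bring the largest entry in column k to the diagonal,
        # avoiding division by small numbers.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], k:] = U[[p, k], k:]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        # Eliminate entries below the pivot, recording multipliers in L.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

def lu_solve(P, L, U, b):
    """Solve Ax = b given P A = L U, via two triangular solves."""
    y = np.linalg.solve(L, P @ b)  # forward substitution with lower-triangular L
    return np.linalg.solve(U, y)   # back substitution with upper-triangular U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu_factor(A)
# The expensive factorization is done once and reused for each new vector b:
x1 = lu_solve(P, L, U, np.array([1.0, 2.0, 3.0]))
x2 = lu_solve(P, L, U, np.array([0.0, 1.0, 0.0]))
```

In practice one would call an optimized library routine rather than this sketch; the point is that the O(''n''³) elimination work is paid once, after which each additional right-hand side costs only O(''n''²).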
For some sparse matrices, the introduction of randomness improves the speed of the iterative methods.<ref>{{cite news |last1=Hartnett |first1=Kevin |title=New Algorithm Breaks Speed Limit for Solving Linear Equations |url=https://www.quantamagazine.org/new-algorithm-breaks-speed-limit-for-solving-linear-equations-20210308/ |access-date=March 9, 2021 |work=[[Quanta Magazine]] |date=March 8, 2021}}</ref>

One example of an iterative method is the [[Jacobi method]], where the matrix <math>A</math> is split into its diagonal component <math>D</math> and its non-diagonal component <math>L+U</math>. An initial guess <math>{\bold x}^{(0)}</math> is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation:
:<math>{\bold x}^{(k+1)} = D^{-1}({\bold b} - (L+U){\bold x}^{(k)})</math>
When the difference between guesses <math>{\bold x}^{(k)}</math> and <math>{\bold x}^{(k+1)}</math> is sufficiently small, the algorithm is said to have ''converged'' on the solution.<ref>{{cite web |url=https://mathworld.wolfram.com/JacobiMethod.html |title=Jacobi Method |website=Wolfram MathWorld }}</ref>

There is also a [[quantum algorithm for linear systems of equations]].{{sfnp|Harrow|Hassidim|Lloyd|2009}}
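The Jacobi iteration above can be sketched in a few lines of NumPy. This is an illustrative implementation, not a library routine; the example matrix is chosen to be strictly diagonally dominant, a standard sufficient condition for the Jacobi method to converge:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b by Jacobi iteration: x^(k+1) = D^{-1} (b - (L+U) x^(k))."""
    D = np.diag(A)           # diagonal component D of A
    R = A - np.diagflat(D)   # off-diagonal component L + U
    x = np.zeros_like(b, dtype=float)  # initial guess x^(0)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        # Stop once successive guesses differ by less than the tolerance.
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant system, so the iteration converges.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

Each step costs one matrix–vector product, which is cheap when <math>A</math> is sparse; this is why such iterative methods are attractive for very large systems.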