===Stationary iterative methods===

====Introduction====
Stationary iterative methods solve a linear system with an [[Operator (mathematics)|operator]] approximating the original one; based on a measurement of the error in the result ([[Residual (numerical analysis)|the residual]]), they form a "correction equation", and this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is guaranteed only for a limited class of matrices.

====Definition====
An ''iterative method'' is defined by
:<math> \mathbf{x}^{k+1} := \Psi ( \mathbf{x}^k ), \quad k \geq 0, </math>
and, for a given linear system <math> A\mathbf x = \mathbf b </math> with exact solution <math> \mathbf{x}^* </math>, the ''error'' by
:<math> \mathbf{e}^k := \mathbf{x}^k - \mathbf{x}^*, \quad k \geq 0. </math>
An iterative method is called ''linear'' if there exists a matrix <math> C \in \R^{n\times n} </math> such that
:<math> \mathbf{e}^{k+1} = C \mathbf{e}^k \quad \forall k \geq 0, </math>
and this matrix is called the ''iteration matrix''. An iterative method with a given iteration matrix <math> C </math> is called ''convergent'' if the following holds:
:<math> \lim_{k\rightarrow \infty} C^k = 0. </math>
An important theorem states that a given iterative method with iteration matrix <math> C </math> is convergent if and only if its [[spectral radius]] <math> \rho(C) </math> is smaller than unity, that is,
:<math> \rho(C) < 1. </math>

The basic iterative methods work by [[Matrix splitting|splitting]] the matrix <math> A </math> into
:<math> A = M - N, </math>
where the matrix <math> M </math> should be easily [[Invertible matrix|invertible]]. The iterative methods are then defined by
:<math> M \mathbf{x}^{k+1} = N \mathbf{x}^k + b, \quad k \geq 0, </math>
or, equivalently,
:<math> \mathbf{x}^{k+1} = \mathbf{x}^k + M^{-1} (b - A \mathbf{x}^k), \quad k \geq 0. </math>
From this it follows that the iteration matrix is given by
:<math> C = I - M^{-1}A = M^{-1}N. </math>

====Examples====
Basic examples of stationary iterative methods use a splitting of the matrix <math> A </math> such as
:<math> A = D+L+U\,,\quad D := \text{diag}( (a_{ii})_i ), </math>
where <math> D </math> is the diagonal part of <math> A </math>, <math> L </math> is the strict lower [[Triangular matrix|triangular part]] of <math> A </math>, and <math> U </math> is the strict upper triangular part of <math> A </math>.
* [[Modified Richardson iteration|Richardson method]]: <math> M:=\frac{1}{\omega} I \quad (\omega \neq 0) </math>
* [[Jacobi method]]: <math> M:=D </math>
* [[Jacobi method#Weighted Jacobi method|Damped Jacobi method]]: <math> M:=\frac{1}{\omega}D \quad (\omega \neq 0) </math>
* [[Gauss–Seidel method]]: <math> M:=D+L </math>
* [[Successive over-relaxation|Successive over-relaxation method]] (SOR): <math> M:=\frac{1}{\omega}D+L \quad (\omega \neq 0) </math>
* [[Symmetric successive over-relaxation]] (SSOR): <math> M := \frac{1}{\omega (2-\omega)} (D+\omega L) D^{-1} (D+\omega U) \quad (\omega \not\in \{0,2\}) </math>
Linear stationary iterative methods are also called [[Relaxation (iterative method)|relaxation methods]].
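The splitting framework above can be sketched in a few lines of Python with NumPy. This is a minimal illustration, not part of the article: the function name, the small diagonally dominant test system, and the choice of the Jacobi splitting <math> M = D </math> are all assumptions made for the example. It iterates <math> \mathbf{x}^{k+1} = \mathbf{x}^k + M^{-1}(\mathbf b - A \mathbf{x}^k) </math> and checks the convergence criterion <math> \rho(C) < 1 </math> for <math> C = I - M^{-1}A </math>.

```python
import numpy as np

def stationary_solve(A, b, M, x0, iterations=50):
    """Run the linear stationary iteration x^{k+1} = x^k + M^{-1}(b - A x^k)."""
    x = x0.astype(float)
    for _ in range(iterations):
        # Solving with M at each step avoids forming M^{-1} explicitly.
        x = x + np.linalg.solve(M, b - A @ x)
    return x

# Hypothetical test system (diagonally dominant, so Jacobi converges).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

# Jacobi splitting: M = D (the diagonal part of A), N = M - A.
M = np.diag(np.diag(A))

# Iteration matrix C = I - M^{-1} A; its spectral radius must be < 1.
C = np.eye(A.shape[0]) - np.linalg.solve(M, A)
rho = np.max(np.abs(np.linalg.eigvals(C)))
assert rho < 1, "spectral radius >= 1: the iteration would not converge"

x = stationary_solve(A, b, M, np.zeros_like(b))
```

After 50 iterations the error has contracted by a factor of roughly <math> \rho(C)^{50} </math>, so `x` agrees with the direct solution `np.linalg.solve(A, b)` to machine precision. Replacing `M` with `np.tril(A)` would give the Gauss–Seidel splitting <math> M = D + L </math> instead.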