==Finite Termination Property==
In exact arithmetic, the number of iterations required is no more than the order of the matrix. This behavior is known as the '''finite termination property''' of the conjugate gradient method: the method reaches the exact solution of an <math>n</math>-dimensional linear system in at most <math>n</math> steps.

This property arises from the fact that, at each iteration, the method generates a residual vector that is orthogonal to all previous residuals, so the residuals form a mutually orthogonal set. In an <math>n</math>-dimensional space it is impossible to have more than <math>n</math> mutually orthogonal nonzero vectors, so by the <math>(n+1)</math>-th step at the latest a zero residual appears. At that point the method has reached the solution and must terminate, which guarantees convergence in at most <math>n</math> steps.

To demonstrate this, consider the system <math>A\mathbf{x} = \mathbf{b}</math> with

<math>
A = \begin{bmatrix} 3 & -2 \\ -2 & 4 \end{bmatrix}, \quad
\mathbf{b} = \begin{bmatrix} 1 \\ 2 \end{bmatrix},
</math>

whose exact solution is <math>\mathbf{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}</math>. We start from the initial guess <math>\mathbf{x}_0 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}</math>. Since <math>A</math> is symmetric positive-definite and the system is two-dimensional, the conjugate gradient method should find the exact solution in at most two steps. The following MATLAB code demonstrates this behavior:

<syntaxhighlight lang="matlab">
A = [3, -2; -2, 4];
x_true = [1; 1];
b = A * x_true;
x = [1; 2];                          % initial guess
r = b - A * x;                       % initial residual
p = r;                               % initial search direction
for k = 1:2
    Ap = A * p;
    alpha = (r' * r) / (p' * Ap);    % step length
    x = x + alpha * p;               % update the iterate
    r_new = r - alpha * Ap;          % update the residual
    beta = (r_new' * r_new) / (r' * r);
    p = r_new + beta * p;            % next A-conjugate search direction
    r = r_new;
end
disp('Exact solution:');
disp(x);
</syntaxhighlight>

The output confirms that the method reaches <math>\begin{bmatrix} 1 \\ 1 \end{bmatrix}</math> after two iterations, consistent with the theoretical prediction. This example illustrates how the conjugate gradient method behaves as a direct method under idealized conditions.

===Application to Sparse Systems===
The finite termination property also has practical implications for solving large sparse systems, which frequently arise in scientific and engineering applications. For instance, discretizing the two-dimensional Laplace equation <math>\nabla^2 u = 0</math> with finite differences on a uniform grid leads to a sparse linear system <math>A \mathbf{x} = \mathbf{b}</math>, where <math>A</math> is symmetric and positive definite. A <math>5 \times 5</math> interior grid yields a <math>25 \times 25</math> system whose coefficient matrix <math>A</math> has a five-point stencil pattern: each row contains at most five nonzero entries, corresponding to a grid point and its immediate neighbors. The matrix generated from such a grid has the form

<math>
A = \begin{bmatrix}
4 & -1 & 0 & \cdots & -1 & 0 & \cdots \\
-1 & 4 & -1 & \cdots & 0 & 0 & \cdots \\
0 & -1 & 4 & -1 & 0 & 0 & \cdots \\
\vdots & \vdots & \ddots & \ddots & \ddots & \vdots & \vdots \\
-1 & 0 & \cdots & -1 & 4 & -1 & \cdots \\
0 & 0 & \cdots & 0 & -1 & 4 & \cdots \\
\vdots & \vdots & \cdots & \cdots & \cdots & \cdots & \ddots
\end{bmatrix}
</math>

Although the system dimension is 25, the conjugate gradient method is guaranteed to terminate in at most 25 iterations under exact arithmetic. In practice, convergence often occurs in far fewer steps due to the matrix's spectral properties.
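The MATLAB sketch below illustrates this point; the unit right-hand side, zero initial guess, and <math>10^{-10}</math> stopping tolerance are arbitrary choices made only for illustration. The five-point matrix is assembled from Kronecker products of the one-dimensional second-difference matrix, and the loop is essentially the conjugate gradient iteration shown above, restructured slightly so that it can stop as soon as the residual becomes negligible and report how many steps were actually taken.

<syntaxhighlight lang="matlab">
% Sketch: five-point Laplacian on a 5x5 interior grid (25 unknowns).
% Right-hand side, initial guess, and tolerance are illustrative choices.
n = 5;
I = speye(n);
T = spdiags(ones(n, 1) * [-1 2 -1], -1:1, n, n);  % 1-D second-difference matrix
A = kron(I, T) + kron(T, I);                      % 2-D five-point Laplacian, SPD
b = ones(n^2, 1);                                 % illustrative right-hand side
x = zeros(n^2, 1);                                % zero initial guess
r = b - A * x;
p = r;
rs_old = r' * r;
for k = 1:n^2                                     % at most 25 steps in exact arithmetic
    Ap = A * p;
    alpha = rs_old / (p' * Ap);
    x = x + alpha * p;
    r = r - alpha * Ap;
    rs_new = r' * r;
    if sqrt(rs_new) < 1e-10                       % stop once the residual is negligible
        break;
    end
    p = r + (rs_new / rs_old) * p;
    rs_old = rs_new;
end
fprintf('Converged in %d of at most %d iterations\n', k, n^2);
</syntaxhighlight>

On this small grid the loop typically stops well before the 25-iteration bound, consistent with the observation above that practical convergence depends on the spectrum of <math>A</math> rather than on its dimension alone.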
This efficiency makes the conjugate gradient method particularly attractive for solving large-scale systems arising from partial differential equations, such as those found in heat conduction, fluid dynamics, and electrostatics.
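For larger problems one would normally rely on a library routine rather than a hand-written loop. As one possible usage example, MATLAB's built-in <code>pcg</code> solver can be applied to the same illustrative <math>25 \times 25</math> system; the tolerance and iteration cap below are again arbitrary choices rather than recommended settings.

<syntaxhighlight lang="matlab">
% Sketch: the same illustrative 25x25 five-point system solved with MATLAB's
% built-in pcg routine; tolerance and iteration cap are arbitrary choices.
n = 5;
I = speye(n);
T = spdiags(ones(n, 1) * [-1 2 -1], -1:1, n, n);
A = kron(I, T) + kron(T, I);
b = ones(n^2, 1);
[x, flag, relres, iter] = pcg(A, b, 1e-10, n^2);  % flag == 0 indicates convergence
fprintf('pcg: flag = %d, iterations = %d, relative residual = %g\n', flag, iter, relres);
</syntaxhighlight>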