==As an iterative method==
If we choose the conjugate vectors <math>\mathbf{p}_k</math> carefully, then we may not need all of them to obtain a good approximation to the solution <math>\mathbf{x}_*</math>. So, we want to regard the conjugate gradient method as an iterative method. This also allows us to approximately solve systems where <math>n</math> is so large that the direct method would take too much time.

We denote the initial guess for <math>\mathbf{x}_*</math> by <math>\mathbf{x}_0</math> (we can assume without loss of generality that <math>\mathbf{x}_0 = \mathbf{0}</math>; otherwise, consider the system <math>\mathbf{Az} = \mathbf{b} - \mathbf{Ax}_0</math> instead). Starting with <math>\mathbf{x}_0</math> we search for the solution, and in each iteration we need a metric to tell us whether we are closer to the solution <math>\mathbf{x}_*</math> (that is unknown to us). This metric comes from the fact that the solution <math>\mathbf{x}_*</math> is also the unique minimizer of the following [[quadratic function]]
:<math> f(\mathbf{x}) = \tfrac12 \mathbf{x}^\mathsf{T} \mathbf{A}\mathbf{x} - \mathbf{x}^\mathsf{T} \mathbf{b}, \qquad \mathbf{x}\in\mathbf{R}^n \,. </math>

The existence of a unique minimizer is apparent because its [[Hessian matrix]] of second derivatives is symmetric positive-definite,
:<math> \mathbf{H}(f(\mathbf{x})) = \mathbf{A} \,, </math>
and the fact that the minimizer (where <math>Df(\mathbf{x}) = 0</math>) solves the initial problem follows from its first derivative
:<math> \nabla f(\mathbf{x}) = \mathbf{A} \mathbf{x} - \mathbf{b} \,. </math>
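Indeed, writing <math>f(\mathbf{x}) = \tfrac12 \sum_{j,l} x_j A_{jl} x_l - \sum_j x_j b_j</math> and differentiating componentwise gives, using the symmetry <math>A_{ij} = A_{ji}</math>,
:<math> \frac{\partial f}{\partial x_i} = \frac12 \sum_{j} A_{ij} x_j + \frac12 \sum_{j} A_{ji} x_j - b_i = \sum_{j} A_{ij} x_j - b_i = (\mathbf{A}\mathbf{x} - \mathbf{b})_i \,. </math>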
This suggests taking the first basis vector <math>\mathbf{p}_0</math> to be the negative of the gradient of <math>f</math> at <math>\mathbf{x} = \mathbf{x}_0</math>. The gradient of <math>f</math> equals <math>\mathbf{Ax} - \mathbf{b}</math>. Starting with an initial guess <math>\mathbf{x}_0</math>, this means we take <math>\mathbf{p}_0 = \mathbf{b} - \mathbf{Ax}_0</math>. The other vectors in the basis will be conjugate to the gradient, hence the name ''conjugate gradient method''. Note that <math>\mathbf{p}_0</math> is also the [[residual (numerical analysis)|residual]] provided by this initial step of the algorithm.

Let <math>\mathbf{r}_k</math> be the [[residual (numerical analysis)|residual]] at the <math>k</math>th step:
:<math> \mathbf{r}_k = \mathbf{b} - \mathbf{Ax}_k. </math>
As observed above, <math>\mathbf{r}_k</math> is the negative gradient of <math>f</math> at <math>\mathbf{x}_k</math>, so the [[gradient descent]] method would require moving in the direction <math>\mathbf{r}_k</math>. Here, however, we insist that the directions <math>\mathbf{p}_k</math> be conjugate to each other. A practical way to enforce this is by requiring that the next search direction be built out of the current residual and all previous search directions.

The conjugation constraint is an orthonormal-type constraint and hence the algorithm can be viewed as an example of [[Gram–Schmidt process|Gram-Schmidt orthonormalization]]. This gives the following expression:
:<math>\mathbf{p}_{k} = \mathbf{r}_{k} - \sum_{i < k}\frac{\mathbf{r}_{k}^\mathsf{T} \mathbf{A} \mathbf{p}_i}{\mathbf{p}_i^\mathsf{T}\mathbf{A} \mathbf{p}_i} \mathbf{p}_i</math>
(see the picture at the top of the article for the effect of the conjugacy constraint on convergence). Following this direction, the next optimal location is given by
:<math> \mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k \mathbf{p}_k </math>
with
:<math> \alpha_{k} = \frac{\mathbf{p}_k^\mathsf{T} (\mathbf{b} - \mathbf{Ax}_k )}{\mathbf{p}_k^\mathsf{T} \mathbf{A} \mathbf{p}_k} = \frac{\mathbf{p}_{k}^\mathsf{T} \mathbf{r}_{k}}{\mathbf{p}_{k}^\mathsf{T} \mathbf{A} \mathbf{p}_{k}}, </math>
where the last equality follows from the definition of <math>\mathbf{r}_k</math>. The expression for <math>\alpha_k</math> can be derived by substituting the expression for <math>\mathbf{x}_{k+1}</math> into <math>f</math> and minimizing it with respect to <math>\alpha_k</math>:
:<math> \begin{align} f(\mathbf{x}_{k+1}) &= f(\mathbf{x}_k + \alpha_k \mathbf{p}_k) =: g(\alpha_k) \\ g'(\alpha_k) &\overset{!}{=} 0 \quad \Rightarrow \quad \alpha_{k} = \frac{\mathbf{p}_k^\mathsf{T} (\mathbf{b} - \mathbf{Ax}_k)}{\mathbf{p}_k^\mathsf{T} \mathbf{A} \mathbf{p}_k} \,. \end{align} </math>
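For illustration, the procedure just derived can be transcribed into code essentially verbatim. The following [[Julia (programming language)|Julia]] sketch is only illustrative (the helper name <code>cg_naive</code>, the tolerance, and the storage scheme are assumptions, not part of the standard presentation): it stores every previous search direction and conjugates each new residual against all of them, exactly as in the Gram-Schmidt expression above.
<syntaxhighlight lang="julia">
# Illustrative sketch only: conjugate gradient with explicit Gram-Schmidt
# conjugation of each new direction against all previously stored directions.
function cg_naive(A, b; tol = 1e-12)
    n = length(b)
    x = zeros(n)
    directions = Vector{Vector{Float64}}()   # all previous search directions p_0, ..., p_{k-1}
    for k in 1:n
        r = b - A * x                         # current residual r_k = b - A x_k
        sqrt(r' * r) <= tol && break          # already converged
        # conjugate r_k against every stored direction
        p = copy(r)
        for q in directions
            p -= (r' * (A * q)) / (q' * (A * q)) * q
        end
        alpha = (p' * r) / (p' * (A * p))     # exact line search along p_k
        x += alpha * p
        push!(directions, p)
    end
    return x
end
</syntaxhighlight>
As written, this requires storing all previous directions and forming several products with <math>\mathbf{A}</math> per iteration; the next subsection shows how both can be avoided.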
===The resulting algorithm===
The above algorithm gives the most straightforward explanation of the conjugate gradient method. Seemingly, the algorithm as stated requires storage of all previous search directions and residual vectors, as well as many matrix–vector multiplications, and thus can be computationally expensive. However, a closer analysis of the algorithm shows that <math>\mathbf{r}_i</math> is orthogonal to <math>\mathbf{r}_j</math>, i.e. <math>\mathbf{r}_i^\mathsf{T} \mathbf{r}_j=0 </math>, for <math>i \neq j</math>, and <math>\mathbf{p}_i</math> is <math>\mathbf{A}</math>-orthogonal to <math>\mathbf{p}_j</math>, i.e. <math>\mathbf{p}_i^\mathsf{T} \mathbf{A} \mathbf{p}_j=0 </math>, for <math>i \neq j</math>. These relations can be interpreted as follows: as the algorithm progresses, <math>\mathbf{p}_i</math> and <math>\mathbf{r}_i</math> span the same [[Krylov subspace]], where the <math>\mathbf{r}_i</math> form an orthogonal basis with respect to the standard inner product, and the <math>\mathbf{p}_i</math> form an orthogonal basis with respect to the inner product induced by <math>\mathbf{A}</math>. Therefore, <math>\mathbf{x}_k</math> can be regarded as the projection of <math>\mathbf{x}</math> on the Krylov subspace. That is, if the CG method starts with <math>\mathbf{x}_0 = 0</math>, then<ref>{{Cite journal |last1=Paquette |first1=Elliot |last2=Trogdon |first2=Thomas |date=March 2023 |title=Universality for the Conjugate Gradient and MINRES Algorithms on Sample Covariance Matrices |url=https://onlinelibrary.wiley.com/doi/10.1002/cpa.22081 |journal=Communications on Pure and Applied Mathematics |language=en |volume=76 |issue=5 |pages=1085–1136 |doi=10.1002/cpa.22081 |issn=0010-3640 |arxiv=2007.00640 }}</ref>
<math display="block">x_k = \mathrm{argmin}_{y \in \mathbb{R}^n} {\left\{(x-y)^{\top} A(x-y): y \in \operatorname{span}\left\{b, A b, \ldots, A^{k-1} b\right\}\right\}}</math>
The algorithm is detailed below for solving <math>\mathbf{A} \mathbf{x}= \mathbf{b}</math>, where <math>\mathbf{A}</math> is a real, symmetric, positive-definite matrix. The input vector <math>\mathbf{x}_0</math> can be an approximate initial solution or <math>\mathbf{0}</math>. It is a different formulation of the exact procedure described above.
:<math>\begin{align}
& \mathbf{r}_0 := \mathbf{b} - \mathbf{A x}_0 \\
& \hbox{if } \mathbf{r}_{0} \text{ is sufficiently small, then return } \mathbf{x}_{0} \text{ as the result}\\
& \mathbf{p}_0 := \mathbf{r}_0 \\
& k := 0 \\
& \text{repeat} \\
& \qquad \alpha_k := \frac{\mathbf{r}_k^\mathsf{T} \mathbf{r}_k}{\mathbf{p}_k^\mathsf{T} \mathbf{A p}_k} \\
& \qquad \mathbf{x}_{k+1} := \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
& \qquad \mathbf{r}_{k+1} := \mathbf{r}_k - \alpha_k \mathbf{A p}_k \\
& \qquad \hbox{if } \mathbf{r}_{k+1} \text{ is sufficiently small, then exit loop} \\
& \qquad \beta_k := \frac{\mathbf{r}_{k+1}^\mathsf{T} \mathbf{r}_{k+1}}{\mathbf{r}_k^\mathsf{T} \mathbf{r}_k} \\
& \qquad \mathbf{p}_{k+1} := \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
& \qquad k := k + 1 \\
& \text{end repeat} \\
& \text{return } \mathbf{x}_{k+1} \text{ as the result}
\end{align}</math>
This is the most commonly used algorithm. The same formula for <math>\beta_k</math> is also used in the Fletcher–Reeves [[nonlinear conjugate gradient method]].

====Restarts====
We note that <math>\mathbf{x}_{1}</math> is computed by the [[Gradient descent#Solution of a linear system|gradient descent]] method applied to <math>\mathbf{x}_{0}</math>. Setting <math>\beta_{k}=0</math> would similarly make <math>\mathbf{x}_{k+1}</math> computed by the gradient descent method from <math>\mathbf{x}_{k}</math>, i.e., it can be used as a simple implementation of a restart of the conjugate gradient iterations.<ref name="BP" /> Restarts could slow down convergence, but may improve stability if the conjugate gradient method misbehaves, e.g., due to [[round-off error]].

====Explicit residual calculation====
The formulas <math>\mathbf{x}_{k+1} := \mathbf{x}_k + \alpha_k \mathbf{p}_k</math> and <math>\mathbf{r}_k := \mathbf{b} - \mathbf{A x}_k</math>, which both hold in exact arithmetic, make the formulas <math>\mathbf{r}_{k+1} := \mathbf{r}_k - \alpha_k \mathbf{A p}_k</math> and <math>\mathbf{r}_{k+1} := \mathbf{b} - \mathbf{A x}_{k+1}</math> mathematically equivalent. The former is used in the algorithm to avoid an extra multiplication by <math>\mathbf{A}</math>, since the vector <math>\mathbf{A p}_k</math> is already computed to evaluate <math>\alpha_k</math>. The latter may be more accurate, substituting the explicit calculation <math>\mathbf{r}_{k+1} := \mathbf{b} - \mathbf{A x}_{k+1}</math> for the implicit one by the recursion subject to [[round-off error]] accumulation, and is thus recommended for an occasional evaluation.<ref>{{cite book |first=Jonathan R |last=Shewchuk |title=An Introduction to the Conjugate Gradient Method Without the Agonizing Pain |year=1994 |url=http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf }}</ref>

A norm of the residual is typically used for stopping criteria. The norm of the explicit residual <math>\mathbf{r}_{k+1} := \mathbf{b} - \mathbf{A x}_{k+1}</math> provides a guaranteed level of accuracy both in exact arithmetic and in the presence of [[rounding errors]], where convergence naturally stagnates. In contrast, the implicit residual <math>\mathbf{r}_{k+1} := \mathbf{r}_k - \alpha_k \mathbf{A p}_k</math> is known to keep getting smaller in amplitude well below the level of [[rounding errors]] and thus cannot be used to determine the stagnation of convergence.
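As an illustration of this recommendation, the following [[Julia (programming language)|Julia]] sketch interleaves an occasional explicit evaluation of the residual with the cheap recursive update. It is a sketch under stated assumptions: the name <code>cg_with_refresh</code>, the tolerance, and the <code>recompute_every</code> parameter are illustrative choices, not part of the standard algorithm.
<syntaxhighlight lang="julia">
# Sketch: CG loop that replaces the recursive residual update with an explicit
# evaluation r = b - A*x every `recompute_every` iterations (illustrative period).
function cg_with_refresh(A, b; tol = 1e-10, recompute_every = 50, maxiter = length(b))
    x = zeros(length(b))
    r = b - A * x
    p = copy(r)
    rs_old = r' * r
    for k in 1:maxiter
        Ap = A * p
        alpha = rs_old / (p' * Ap)
        x += alpha * p
        if k % recompute_every == 0
            r = b - A * x            # explicit residual: one extra product with A
        else
            r -= alpha * Ap          # recursive residual: reuses the product already computed
        end
        rs_new = r' * r
        sqrt(rs_new) <= tol && break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    end
    return x
end
</syntaxhighlight>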
====Computation of alpha and beta====
In the algorithm, <math>\alpha_k</math> is chosen such that <math>\mathbf{r}_{k+1}</math> is orthogonal to <math>\mathbf{r}_{k}</math>. The denominator is simplified from
:<math>\alpha_k = \frac{\mathbf{r}_{k}^\mathsf{T} \mathbf{r}_{k}}{\mathbf{r}_{k}^\mathsf{T} \mathbf{A} \mathbf{p}_k} = \frac{\mathbf{r}_k^\mathsf{T} \mathbf{r}_k}{\mathbf{p}_k^\mathsf{T} \mathbf{A p}_k}, </math>
since <math>\mathbf{r}_{k} = \mathbf{p}_{k} - \beta_{k-1}\mathbf{p}_{k-1}</math> and <math>\mathbf{p}_{k-1}</math> is conjugate to <math>\mathbf{p}_{k}</math>. The <math>\beta_k</math> is chosen such that <math>\mathbf{p}_{k+1}</math> is conjugate to <math>\mathbf{p}_{k}</math>. Initially, <math>\beta_k</math> is
:<math>\beta_k = - \frac{\mathbf{r}_{k+1}^\mathsf{T} \mathbf{A} \mathbf{p}_k}{\mathbf{p}_k^\mathsf{T} \mathbf{A} \mathbf{p}_k}.</math>
Using
:<math>\mathbf{r}_{k+1} = \mathbf{r}_{k} - \alpha_{k} \mathbf{A} \mathbf{p}_{k},</math>
and equivalently
:<math> \mathbf{A} \mathbf{p}_{k} = \frac{1}{\alpha_{k}} (\mathbf{r}_{k} - \mathbf{r}_{k+1}), </math>
the numerator of <math>\beta_k</math> is rewritten as
:<math> \mathbf{r}_{k+1}^\mathsf{T} \mathbf{A} \mathbf{p}_k = \frac{1}{\alpha_k} \mathbf{r}_{k+1}^\mathsf{T} (\mathbf{r}_k - \mathbf{r}_{k+1}) = - \frac{1}{\alpha_k} \mathbf{r}_{k+1}^\mathsf{T} \mathbf{r}_{k+1}, </math>
because <math>\mathbf{r}_{k+1}</math> and <math>\mathbf{r}_{k}</math> are orthogonal by design. The denominator is rewritten as
:<math> \mathbf{p}_k^\mathsf{T} \mathbf{A} \mathbf{p}_k = (\mathbf{r}_k + \beta_{k-1} \mathbf{p}_{k-1})^\mathsf{T} \mathbf{A} \mathbf{p}_k = \frac{1}{\alpha_k} \mathbf{r}_k^\mathsf{T} (\mathbf{r}_k - \mathbf{r}_{k+1}) = \frac{1}{\alpha_k} \mathbf{r}_k^\mathsf{T} \mathbf{r}_k, </math>
using that the search directions <math>\mathbf{p}_k</math> are conjugate and again that the residuals are orthogonal. This gives the <math>\beta_k</math> in the algorithm after cancelling <math>\alpha_k</math>.

====Example code in [[Julia (programming language)|Julia]]====
<syntaxhighlight lang="julia" line="1" start="1">
"""
    conjugate_gradient!(A, b, x)

Return the solution to `A * x = b` using the conjugate gradient method.
"""
function conjugate_gradient!(
    A::AbstractMatrix, b::AbstractVector, x::AbstractVector; tol=eps(eltype(b))
)
    # Initialize residual vector
    residual = b - A * x
    # Initialize search direction vector
    search_direction = copy(residual)
    # Compute initial squared residual norm
    norm(x) = sqrt(sum(x.^2))
    old_resid_norm = norm(residual)

    # Iterate until convergence
    while old_resid_norm > tol
        A_search_direction = A * search_direction
        step_size = old_resid_norm^2 / (search_direction' * A_search_direction)
        # Update solution
        @. x = x + step_size * search_direction
        # Update residual
        @. residual = residual - step_size * A_search_direction
        new_resid_norm = norm(residual)
        # Update search direction vector
        @. search_direction = residual + (new_resid_norm / old_resid_norm)^2 * search_direction
        # Update squared residual norm for next iteration
        old_resid_norm = new_resid_norm
    end

    return x
end
</syntaxhighlight>
====Example code in [[MATLAB]]====
<syntaxhighlight lang="matlab" line="1" start="1">
function x = conjugate_gradient(A, b, x0, tol)
    % Return the solution to `A * x = b` using the conjugate gradient method.
    % Reminder: A should be symmetric and positive definite.
    if nargin < 4
        tol = eps;
    end
    r = b - A * x0;
    p = r;
    rsold = r' * r;
    x = x0;

    while sqrt(rsold) > tol
        Ap = A * p;
        alpha = rsold / (p' * Ap);
        x = x + alpha * p;
        r = r - alpha * Ap;
        rsnew = r' * r;
        p = r + (rsnew / rsold) * p;
        rsold = rsnew;
    end
end
</syntaxhighlight>

===Numerical example===
Consider the linear system '''Ax''' = '''b''' given by
:<math>\mathbf{A} \mathbf{x}= \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.</math>
We will perform two steps of the conjugate gradient method, beginning with the initial guess
:<math>\mathbf{x}_0 = \begin{bmatrix} 2 \\ 1 \end{bmatrix},</math>
in order to find an approximate solution to the system.

====Solution====
For reference, the exact solution is
:<math> \mathbf{x} = \begin{bmatrix} \frac{1}{11} \\\\ \frac{7}{11} \end{bmatrix} \approx \begin{bmatrix} 0.0909 \\\\ 0.6364 \end{bmatrix}.</math>

Our first step is to calculate the residual vector '''r'''<sub>0</sub> associated with '''x'''<sub>0</sub>. This residual is computed from the formula '''r'''<sub>0</sub> = '''b''' - '''Ax'''<sub>0</sub>, and in our case is equal to
:<math>\mathbf{r}_0 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix}-8 \\ -3 \end{bmatrix} = \mathbf{p}_0.</math>
Since this is the first iteration, we will use the residual vector '''r'''<sub>0</sub> as our initial search direction '''p'''<sub>0</sub>; the method of selecting '''p'''<sub>''k''</sub> will change in further iterations.

We now compute the scalar {{math|''α''<sub>0</sub>}} using the relationship
:<math> \alpha_0 = \frac{\mathbf{r}_0^\mathsf{T} \mathbf{r}_0}{\mathbf{p}_0^\mathsf{T} \mathbf{A p}_0} = \frac{\begin{bmatrix} -8 & -3 \end{bmatrix} \begin{bmatrix} -8 \\ -3 \end{bmatrix}}{ \begin{bmatrix} -8 & -3 \end{bmatrix} \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -8 \\ -3 \end{bmatrix} } =\frac{73}{331}\approx0.2205.</math>
We can now compute '''x'''<sub>1</sub> using the formula
:<math>\mathbf{x}_1 = \mathbf{x}_0 + \alpha_0\mathbf{p}_0 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} + \frac{73}{331} \begin{bmatrix} -8 \\ -3 \end{bmatrix} \approx \begin{bmatrix} 0.2356 \\ 0.3384 \end{bmatrix}.</math>
This result completes the first iteration, the result being an "improved" approximate solution to the system, '''x'''<sub>1</sub>.

We may now move on and compute the next residual vector '''r'''<sub>1</sub> using the formula
:<math>\mathbf{r}_1 = \mathbf{r}_0 - \alpha_0 \mathbf{A} \mathbf{p}_0 = \begin{bmatrix} -8 \\ -3 \end{bmatrix} - \frac{73}{331} \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -8 \\ -3 \end{bmatrix} \approx \begin{bmatrix} -0.2810 \\ 0.7492 \end{bmatrix}.</math>
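In exact fractions, <math>\mathbf{r}_1 = \tfrac{1}{331}\begin{bmatrix} -93 \\ 248 \end{bmatrix}</math>, and as a check the new residual is orthogonal to the previous one, as the derivation of the method guarantees:
:<math> \mathbf{r}_0^\mathsf{T} \mathbf{r}_1 = (-8)\left(-\tfrac{93}{331}\right) + (-3)\left(\tfrac{248}{331}\right) = \tfrac{744 - 744}{331} = 0. </math>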
Our next step in the process is to compute the scalar {{math|''β''<sub>0</sub>}} that will eventually be used to determine the next search direction '''p'''<sub>1</sub>:
:<math>\beta_0 = \frac{\mathbf{r}_1^\mathsf{T} \mathbf{r}_1}{\mathbf{r}_0^\mathsf{T} \mathbf{r}_0} \approx \frac{\begin{bmatrix} -0.2810 & 0.7492 \end{bmatrix} \begin{bmatrix} -0.2810 \\ 0.7492 \end{bmatrix}}{\begin{bmatrix} -8 & -3 \end{bmatrix} \begin{bmatrix} -8 \\ -3 \end{bmatrix}} = 0.0088.</math>
Now, using this scalar {{math|''β''<sub>0</sub>}}, we can compute the next search direction '''p'''<sub>1</sub> using the relationship
:<math>\mathbf{p}_1 = \mathbf{r}_1 + \beta_0 \mathbf{p}_0 \approx \begin{bmatrix} -0.2810 \\ 0.7492 \end{bmatrix} + 0.0088 \begin{bmatrix} -8 \\ -3 \end{bmatrix} = \begin{bmatrix} -0.3511 \\ 0.7229 \end{bmatrix}.</math>
We now compute the scalar {{math|''α''<sub>1</sub>}} using our newly acquired '''p'''<sub>1</sub>, by the same method as that used for {{math|''α''<sub>0</sub>}}:
:<math> \alpha_1 = \frac{\mathbf{r}_1^\mathsf{T} \mathbf{r}_1}{\mathbf{p}_1^\mathsf{T} \mathbf{A p}_1} \approx \frac{\begin{bmatrix} -0.2810 & 0.7492 \end{bmatrix} \begin{bmatrix} -0.2810 \\ 0.7492 \end{bmatrix}}{ \begin{bmatrix} -0.3511 & 0.7229 \end{bmatrix} \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -0.3511 \\ 0.7229 \end{bmatrix} } = 0.4122.</math>
Finally, we find '''x'''<sub>2</sub> using the same method as that used to find '''x'''<sub>1</sub>:
:<math>\mathbf{x}_2 = \mathbf{x}_1 + \alpha_1 \mathbf{p}_1 \approx \begin{bmatrix} 0.2356 \\ 0.3384 \end{bmatrix} + 0.4122 \begin{bmatrix} -0.3511 \\ 0.7229 \end{bmatrix} = \begin{bmatrix} 0.0909 \\ 0.6364 \end{bmatrix}.</math>
The result, '''x'''<sub>2</sub>, is a "better" approximation to the system's solution than '''x'''<sub>1</sub> and '''x'''<sub>0</sub>. If exact arithmetic were used in this example instead of limited-precision arithmetic, the exact solution would theoretically be reached after ''n'' = 2 iterations (''n'' being the order of the system).
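This two-step computation can be reproduced with the <code>conjugate_gradient!</code> routine from the Julia example above. The following usage sketch assumes that function is in scope; the tolerance passed here is an illustrative choice rather than part of the example.
<syntaxhighlight lang="julia">
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
x = [2.0, 1.0]                              # initial guess x0
conjugate_gradient!(A, b, x; tol = 1e-10)   # x is modified in place
# x is now approximately [0.0909, 0.6364], i.e. [1/11, 7/11]
</syntaxhighlight>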