The Wolfram Mathematica notebook which contains the code that produces all the Mathematica output in this web page may be downloaded at this link.

$Post := If[MatrixQ[#1], MatrixForm[#1], #1] & (* outputs matrices in MatrixForm *)
Remove[ "Global`*"] // Quiet (* remove all variables *)

The conjugate gradient method is a family of algorithms for the numerical solution of particular systems of linear equations A x = b with a symmetric (or self-adjoint in the complex case A ∈ ℂn×n) positive definite matrix A. A variation of it, known as the biconjugate gradient method, provides a generalization to non-symmetric matrices (but it may suffer from numerical instability). In exact arithmetic (that is, in the absence of roundoff errors), the conjugate gradient algorithm converges to the exact solution of A x = b in a finite number of iteration steps. In this sense it is not really an iterative method from a mathematical perspective: it can be viewed as a “direct method” like Gaussian elimination, in which a finite set of operations produces the exact solution.

However, there are several good reasons to interpret the conjugate gradient method as an iterative method rather than a direct method.

  • First of all, execution of this algorithm can be stopped partway at any stage, similar to any iterative method.
  • Although the conjugate gradient method implicitly uses information from all previous steps, it can be organized as a one-step iterative algorithm that generates a finite sequence of approximations
    \begin{equation*} {\bf x}^{(k)} = g_k \left( {\bf x}^{(k-1)} \right) , \qquad k = 1, 2, \ldots , m \leqslant n , \end{equation*}
    that always converges (in exact arithmetic) to the true solution x* of equation A x = b.
  • In finite precision arithmetic, the conjugate gradient method behaves like any other iterative method: the difference between two successive iterates may be quite small.
  • In practice, the conjugate gradient method frequently “converges” to a sufficiently accurate approximation to x in far less than n iterations (where n is the size of the problem).

The conjugate gradient method (henceforth, CG) was developed independently by the American mathematician Magnus Hestenes (1906–1991) and Eduard Stiefel (1909–1978) from Switzerland in the early fifties. They met during the "Symposium on Simultaneous Linear Equations and the Determination of Eigenvalues," organized in Los Angeles in 1951 by the Institute for Numerical Analysis of the U.S. National Bureau of Standards. They realized that their algorithms were the same and wrote a famous joint paper that was published in 1952. About the same time, Cornelius Lanczos (1893--1974) invented a similar algorithm, but he applied it to eigenvalue problems. Actually, the CG algorithm is a particular case of a more general class of methods known as Krylov subspace iterations, and it is the most famous of these methods because it is now the main algorithm for solving large sparse linear systems. The latter approach goes back to a 1931 paper by the Russian mathematician and naval engineer Alexey Krylov (1863--1945).

Both Gaussian elimination and the CG algorithm require at most n steps to determine the solution of an n × n positive definite linear system, and each step of the conjugate gradient method is more computationally expensive than a comparable step of Gaussian elimination. However, the conjugate gradient method is useful for solving large sparse systems with nonzero entries occurring in predictable patterns. When the matrix has been preconditioned to make the calculations more effective, good results are obtained in only about √n iterations.


Conjugate Gradient Method

We consider a system of n linear equations with n unknowns that can be written in succinct form

\[ \mathbf{A}\,{\bf x} = {\bf b} \in \mathbb{R}^{n\times 1} , \]
where x is an unknown column vector, b is a known column vector, and A is a known, square, symmetric, positive-definite matrix. As usual, we denote by ℂm×n or ℝm×n the set of all m-by-n matrices with entries from the field of complex or real numbers, respectively. These systems arise in many important settings, such as finite difference and finite element methods for solving partial differential equations, structural analysis, circuit analysis, and math homework.    

Example 1: We start with a one-dimensional version of Poisson's equation for Laplace's operator Δ, \[ -\frac{{\text d}^2 u}{{\text d}x^2} = f(x) , \qquad 0 < x < \ell , \tag{1.1} \] where f(x) is a given function and u(x) is the unknown function that we want to compute. The function u(x) must also satisfy the Dirichlet boundary conditions u(0) = u(ℓ) = 0. Of course, other boundary conditions (such as Neumann conditions or conditions of the third kind) can be imposed on Laplace's operator; however, we choose one of the simplest possible conditions for our illustrative example.

Recall that a differential operator is called positive (or negative) if all its eigenvalues are positive (or negative, respectively). Since the second order derivative operator \( \displaystyle \quad \texttt{D}^2 = {\text d}^2 / {\text d}x^2 \quad \) is negative, it is customary to negate the Laplacian.

We discretize the boundary value problem by trying to compute an approximate solution at N + 2 evenly spaced points xi between 0 and ℓ: xi = i h, where h = ℓ/(N+1) and 0 ≤ i ≤ N + 1. We abbreviate ui = u(xi) and fi = f(xi). To convert differential equation (1.1) into a linear equation for the unknowns u₁, u₂, … , uN, we use finite differences to approximate \[ \left. \frac{{\text d}u}{{\text d}x} \right\vert_{x = (i-1/2) h} \approx \frac{u_{i} - u_{i-1}}{h} , \] \[ \left. \frac{{\text d}u}{{\text d}x} \right\vert_{x = (i+1/2) h} \approx \frac{u_{i+1} - u_{i}}{h} . \] Subtracting these approximations and dividing by h yields the centered difference approximation \[ \left. \frac{{\text d}^2 u}{{\text d}x^2} \right\vert_{x = i h} = \frac{u_{i+1} - 2\,u_i + u_{i-1}}{h^2} + \tau_i , \] where τi, the so-called truncation error, can be shown to be \( \displaystyle \quad O\left( h^2 \cdot \left\| \frac{{\text d}^4 u}{{\text d}x^4} \right\|_{\infty} \right) . \quad \) We may now rewrite equation (1.1) at x = xi as \[ - u_{i-1} + 2\,u_i - u_{i+1} = h^2 f_i + h^2 \tau_i , \tag{1.2} \] where 0 < i < N + 1. From Dirichlet's boundary conditions, we derive that u₀ = uN+1 = 0. Hence, we get N equations with N unknowns u₁, u₂, … , uN: \[ {\bf T}_N {\bf u} = h^2 \begin{pmatrix} f_1 \\ \vdots \\ f_N \end{pmatrix} + h^2 \begin{pmatrix} \tau_1 \\ \vdots \\ \tau_N \end{pmatrix} , \] where \[ {\bf T}_N = \begin{bmatrix} 2 & -1 & 0 & \cdots & 0 \\ -1& 2 & -1 & \cdots &0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & -1 & 2 & -1 \\ 0 & \cdots & 0 & -1 & 2 \end{bmatrix} , \qquad {\bf u} = \begin{pmatrix} u_1 \\ \vdots \\ u_N \end{pmatrix} . \tag{1.3} \] To solve this system of equations, we will ignore the terms with τ because they are small compared to f. This yields \[ {\bf T}_N {\bf v} = h^2 {\bf f} . \] The coefficient matrix TN plays a central role in all that follows, so we will examine it in some detail. First, we will compute its eigenvalues and eigenvectors. One can easily use trigonometric identities to confirm the following lemma.

Lemma: The eigenvalues of TN are \( \displaystyle \quad \lambda_j = 2 \left( 1 - \cos \frac{\pi j}{N+1} \right) . \quad \) The corresponding eigenvectors are \[ z_j (k) = \sqrt{\frac{2}{N+1}}\,\sin \left( \frac{jk\pi}{N+1} \right) , \qquad \| z_j \|_2 = 1. \] Let Z = [ z₁, z₂, … , zN ] be the orthogonal matrix whose columns are the eigenvectors, and Λ = diag(λ₁,λ₂, … , λN), so we can write \( \displaystyle \quad {\bf T}_N = {\bf Z}\,\Lambda \,{\bf Z}^{\mathrm T} . \)
Hence, the largest eigenvalue is \( \displaystyle \quad \lambda_N = 2 \left( 1 - \cos \frac{N\pi}{N+1} \right) \approx 4 \quad \) for large N, and the smallest eigenvalue is λ₁, where for small i, \[ \lambda_i = 2 \left( 1 - \cos \frac{i\pi}{N+1} \right) \approx 2 \left( 1 - \left( 1 - \frac{i^2 \pi^2}{2 (N+1)^2} \right) \right) = \left( \frac{i\pi}{N+1} \right)^2 . \] Therefore, matrix TN is symmetric and positive definite with condition number \( \displaystyle \quad \lambda_N / \lambda_1 \approx \frac{4}{\pi^2} \left( N+1 \right)^2 \quad \) for large N. The eigenvectors are sinusoids with lowest frequency at j = 1 and highest at j = N.

Now we know enough to bound the error, i.e., the difference between the solution of \( \displaystyle \quad {\bf T}_N \hat{\bf u} = h^2 {\bf f} \quad \) and the true solution u of the Dirichlet boundary value problem. Taking norms, we get \[ \| u - \hat{u} \|_2 \le h^2 \| {\bf T}_N^{-1} \|_2 \| \tau \|_2 \approx h^2 \left( \frac{N+1}{\pi} \right)^2 \| \tau \|_2 , \] which has an order of \[ O \left( \| \tau \|_2 \right) = O \left( h^2 \left\| \frac{{\text d}^4 u}{{\text d} x^4} \right\|_{\infty} \right) . \] Hence, the error \( \displaystyle \quad u - \hat{u} \quad \) goes to zero proportionally to h², provided that the solution is smooth enough (its fourth derivative is bounded on the interval [0, ℓ]).

We perform a numerical experiment to verify the claim of this lemma.

Define parameters

l = 1;(* Length of the interval *) n = 10;(* Number of interior points *) h = l/(n + 1);(* Grid spacing *)
Define the function f(x)
f[x_] := Sin[Pi*x];(* Example function *)
Create grid points
x = Table[i*h, {i, 0, n + 1}]
{0, 1/11, 2/11, 3/11, 4/11, 5/11, 6/11, 7/11, 8/11, 9/11, 10/11, 1}
Compute fValues numerically at interior points
fValues = f /@ x[[2 ;; -2]](* Exclude boundary points *)
{Sin[\[Pi]/11], Sin[(2 \[Pi])/11], Cos[(5 \[Pi])/22], Cos[(3 \[Pi])/22], Cos[\[Pi]/22], Cos[\[Pi]/22], Cos[(3 \[Pi])/22], Cos[(5 \[Pi])/22], Sin[(2 \[Pi])/11], Sin[\[Pi]/11]}
Construct Tridiagonal Matrix Tn
Tn = SparseArray[ { {i_, i_} -> 2, {i_, j_} /; Abs[i - j] == 1 -> -1 }, {n, n}]; Normal[Tn] // MatrixForm
\( \displaystyle \quad \begin{pmatrix} 2&-1&0&0&0&0&0&0&0&0 \\ -1&2&-1&0&0&0&0&0&0&0 \\ 0&-1&2&-1&0&0&0&0&0&0 \\ 0&0&-1&2&-1&0&0&0&0&0 \\ 0&0&0&-1&2&-1&0&0&0&0 \\ 0&0&0&0&-1&2&-1&0&0&0 \\ 0&0&0&0&0&-1&2&-1&0&0 \\ 0&0&0&0&0&0&-1&2&-1&0 \\ 0&0&0&0&0&0&0&-1&2&-1 \\ 0&0&0&0&0&0&0&0&-1&2 \end{pmatrix} \)
Solve the system using the Krylov method.
uSolution = LinearSolve[N[Tn], N[h^2 fValues], Method -> "Krylov"]
{0.0287403, 0.0551522, 0.0770961, 0.092794, 0.100974, 0.100974, \ 0.092794, 0.0770961, 0.0551522, 0.0287403}
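As a quick sanity check of the O(h²) error bound stated above (our own addition, not part of the original notebook), we can compare this discrete solution with the exact solution u(x) = sin(πx)/π² of −u″ = sin(πx), u(0) = u(1) = 0; the helper name uExact is ours.
uExact[s_] := Sin[Pi s]/Pi^2;            (* exact solution of -u'' = Sin[Pi x], u(0) = u(1) = 0 *)
exactValues = N[uExact /@ x[[2 ;; -2]]]; (* exact solution at the interior grid points *)
Max[Abs[uSolution - exactValues]]        (* maximum error; the bound above predicts O(h^2) *)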
Compute eigenvalues and eigenvectors
{eigenvalues, eigenvectors} = Eigensystem[N[Tn]]; Eigensystem[N[Tn]]
{{3.91899, 3.68251, 3.30972, 2.83083, 2.28463, 1.71537, 1.16917, 0.690279, 0.317493, 0.0810141}, {{0.120131, -0.23053, 0.322253, -0.387868, 0.422061, -0.422061, 0.387868, -0.322253, 0.23053, -0.120131}, {0.23053, -0.387868, 0.422061, -0.322253, 0.120131, 0.120131, -0.322253, 0.422061, -0.387868, 0.23053}, {-0.322253, 0.422061, -0.23053, -0.120131, 0.387868, -0.387868, 0.120131, 0.23053, -0.422061, 0.322253}, {0.387868, -0.322253, -0.120131, 0.422061, -0.23053, -0.23053, 0.422061, -0.120131, -0.322253, 0.387868}, {0.422061, -0.120131, -0.387868, 0.23053, 0.322253, -0.322253, -0.23053, 0.387868, 0.120131, -0.422061}, {0.422061, 0.120131, -0.387868, -0.23053, 0.322253, 0.322253, -0.23053, -0.387868, 0.120131, 0.422061}, {0.387868, 0.322253, -0.120131, -0.422061, -0.23053, 0.23053, 0.422061, 0.120131, -0.322253, -0.387868}, {0.322253, 0.422061, 0.23053, -0.120131, -0.387868, -0.387868, -0.120131, 0.23053, 0.422061, 0.322253}, {-0.23053, -0.387868, -0.422061, -0.322253, -0.120131, 0.120131, 0.322253, 0.422061, 0.387868, 0.23053}, {0.120131, 0.23053, 0.322253, 0.387868, 0.422061, 0.422061, 0.387868, 0.322253, 0.23053, 0.120131}}}
The largest eigenvalue is given by \( \displaystyle \quad 2 \left( 1 - \cos \left[ \frac{k\pi}{n+1} \right] \right) \quad \) where k = n = 10, so
lgEigenV = 2 (1 - Cos[(10 \[Pi])/11])
N[%]
TrueQ[% == Max[eigenvalues]]
2 (1 + Cos[\[Pi]/11])
3.91899
True
The smallest eigenvalue is similarly \( \displaystyle \quad 2 \left( 1 - \cos \left[ \frac{k\pi}{n+1} \right] \right) \quad \) where k = 1, n = 10, so
smEigenV = 2 (1 - Cos[(k \[Pi])/11]) /. k -> 1
N[%]
2 (1 - Cos[\[Pi]/11])
0.0810141
Min[eigenvalues]
0.0810141
These two values of the smallest eigenvalue of matrix Tn are not exactly the same because numerically evaluating the exact cosine formula and numerically computing the eigenvalues incur slightly different roundoff errors.
TrueQ[smEigenV == Min[eigenvalues]]
False
The difference is very small:
smEigenV - Min[eigenvalues]
2.88658*10^-15
The condition number of this matrix is
lgEigenV / smEigenV
48.3742
Of course, the condition number of matrix Tn can also be evaluated in Mathematica through its singular values:
(SingularValueList[Tn][[1]] /SingularValueList[Tn][[-1]] ) // N
48.3742
ListLinePlot[Sort[eigenvalues], Mesh -> Full]
Distribution of eigenvalues.

From now on we will not distinguish between u and its approximation \( \displaystyle \quad \hat{u} , \quad \) which simplifies notation: we simply write TNu = h²f. It turns out that the eigenvalues and eigenvectors of h−2TN also approximate the eigenvalues and eigenfunctions of the given boundary value problem (BVP). We say that \( \displaystyle \quad \hat{\lambda}_i \quad \) is an eigenvalue and \( \displaystyle \quad \hat{z}_i (x) \quad \) is an eigenfunction of the given BVP if \[ - \frac{{\text d}^2 \hat{z}_i (x)}{{\text d} x^2} = \hat{\lambda}_i \hat{z}_i (x) , \qquad \hat{z}_i (0) = \hat{z}_i (\ell ) = 0 . \tag{1.4} \] Let us solve the Sturm--Liouville problem above for \( \displaystyle \quad \hat{\lambda}_i \quad \) and the unknown function \( \displaystyle \quad \hat{z}_i (x) . \quad \) The general solution of Eq.(1.4) is a linear combination of trigonometric functions: \[ \hat{z}_i (x) = \alpha\,\sin \left( \sqrt{\hat{\lambda}_i}\, x \right) + \beta\,\cos \left( \sqrt{\hat{\lambda}_i}\, x \right) , \] for some real constants α and β. The boundary condition \( \hat{z}_i(0) = 0 \) dictates that β must vanish. In order to satisfy the other boundary condition at x = ℓ, we have to choose \( \hat{\lambda}_i \) so that \( \sqrt{\hat{\lambda}_i}\,\ell \) is a multiple of π, that is, \( \sqrt{\hat{\lambda}_i}\,\ell = i \pi . \) Thus, \[ \hat{\lambda}_i = i^2 \pi^2 / \ell^2 \qquad\mbox{and} \qquad \hat{z}_i (x) = \sin \left( \frac{i \pi x}{\ell} \right) , \quad i =1,2,\ldots . \] Since an eigenfunction can be multiplied by any nonzero constant, we set α = 1 without any loss of generality. The eigenvector zi is precisely the eigenfunction \( \displaystyle \quad \hat{z}_i (x) \quad \) evaluated at the sample points xj = jh, scaled by \( \quad \sqrt{\frac{2}{N+1}} . \quad \) When i is small, the scaled matrix eigenvalue is a good approximation of the continuous one: \[ h^{-2} \lambda_i = \frac{2 \left( N+1 \right)^2}{\ell^2} \left( 1 - \cos \frac{i\pi}{N+1} \right) = \hat{\lambda}_i + O \left( \frac{1}{(N+1)^2} \right) . \] Thus, we see there is a close correspondence between the matrix operator TN (or h−2TN) and the second derivative operator \( \displaystyle \quad -\texttt{D}^2 = -{\text d}^2 / {\text d}x^2 . \quad \) This correspondence will be the motivation for the design and analysis of later algorithms.
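To illustrate this correspondence numerically (a small check of our own, reusing l, n, and h from the code above), we can compare the first few continuous eigenvalues i²π²/ℓ² with the scaled matrix eigenvalues h−2λi:
Table[{N[i^2 Pi^2/l^2], N[2 (1 - Cos[i Pi/(n + 1)])/h^2]}, {i, 1, 3}]   (* continuous vs. scaled discrete eigenvalues *)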

Note:    Other boundary conditions for Poisson's equation may lead to a non-symmetric analogue of matrix TN and/or to a matrix that is only positive semi-definite.    ■

End of Example 1
   
Example 2:
   ■
End of Example 2
   
Example 3: Now we turn to Poisson's equation in two dimensions: \[ - \frac{\partial^2 u(x,y)}{\partial x^2} - \frac{\partial^2 u(x,y)}{\partial y^2} = f(x,y) \] on the unit (for simplicity) square:    0 < x, y < 1, with homogeneous boundary condition u = 0 on the boundary of the square. We discretize at the grid points in the square which are at ( xi, yj ) with
   ■
End of Example 3
   
Example 4: Now we turn to Poisson's equation in three dimensions: \[ - \frac{\partial^2 u(x,y,z)}{\partial x^2} - \frac{\partial^2 u(x,y, z)}{\partial y^2} - \frac{\partial^2 u(x,y, z)}{\partial z^2} = f(x,y,z) \] in the unit (for simplicity) cube:    0 < x, y, z < 1, with homogeneous boundary condition u = 0 on the boundary of the cube. We discretize at the grid points in the cube, which are at ( xi, yj, zk ) with
   ■
End of Example 4

There are two similar notations for the inner product of two vectors x and y: one, widely used by mathematicians, separates the vectors by a comma, while the other, Dirac's notation, separates them by a vertical bar (we mostly use the latter):

\[ \langle \mathbf{x} , \mathbf{y} \rangle = \langle \mathbf{x} \mid \mathbf{y} \rangle = \sum_{i=1}^n x_i y_i = {\bf x} \bullet {\bf y} , \]
where x = (x₁, x₂, … , xn) and y = (y₁, y₂, … , yn) are n-dimensional vectors with real entries, and x • y denotes the standard dot product. It does not matter whether the vectors belong to Euclidean space ℝn, the column space ℝn×1, or the row space ℝ1×n---it is only important that both vectors x and y come from the same finite dimensional vector space. When the vectors' entries are complex numbers, the dot product x • y must be replaced by an inner product involving complex conjugates:
\[ \langle \mathbf{x} , \mathbf{y} \rangle = \langle \mathbf{x} \mid \mathbf{y} \rangle = \sum_{i=1}^n x_i^{\ast} y_i \qquad \left( x_i^{\ast} = \overline{x_i} \right) . \]
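In the Wolfram Language, the plain Dot product does not conjugate anything; here is a short illustration of the convention above (the vectors zu and zv are our own made-up example):
zu = {1 + I, 2}; zv = {3, 4 - I};
Conjugate[zu] . zv      (* Hermitian inner product <zu | zv>, conjugating the first factor *)
zu . zv                 (* plain Dot, no conjugation *)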

Plot3D[x^2 - x*y + y^2, {x, -4, 4}, {y, -4, 4}, PlotStyle -> Orange, AspectRatio -> 1]
Plot3D[x^2 + x*y - y^2, {x, -4, 4}, {y, -4, 4}, PlotStyle -> Blue, AspectRatio -> 1]

Positive definite function.
       
Indefinite function.

   
Example 5: We operate a shop near the sea and sell only two products: umbrellas and bathing suits. We want to maximize our daily profit by selling the "right" combination of the two products. Suppose the temperature ranges from 50 to 95 degrees, rising from winter to summer. Sales are affected by temperature and traffic. After extensive data collection over considerable time we are able to define the profit function as you see below.

We begin with the symbolic approach. First, define the profit function in a usual way as Revenue - Cost, and use variables x = umbrellas and y = bathing suits

Clear[x, y]; rev = (16 x + 22 y); cost = (x^2 + y^2 + 100); (Simplify[rev - cost]) // TraditionalForm
-x^2+16 x-y^2+22 y-100
Setting the above as a function named "profit"
profit[x_, y_] := -x^2 - y^2 + 16 x + 22 y - 100
To find the critical points, we take partial derivatives and equate the results to zero; solving the resulting system of equations, we determine the solution: we need to sell 8 umbrellas and 11 bathing suits.
solution = Solve[{D[profit[x, y], x] == 0, D[profit[x, y], y] == 0}, {x, y}]
x → 8, y → 11
Extract the optimal values for x and y; the optimal daily profit is $85.
optimalX = x /. solution[[1]]
optimalY = y /. solution[[1]]
optimalProfit = profit[x, y] /. {x -> optimalX, y -> optimalY}
8.
11.
85.
Display the solution as a 3D triple to define a graphic point
{optimalX, optimalY, optimalProfit}
{8., 11., 85.}
Use Wolfram's built-in FindMaximum function to obtain the same answer
fMax = FindMaximum[profit[x, y], {x, y}]
{85., {x -> 8., y -> 11.}}
A different way to define the coordinates of the optimal point
ptMax = Flatten[Join[{fMax[[2, All, 2]], fMax[[1]]}]]
{8., 11., 85.}
Double checking our profit function given what we believe are optimum quantities
(rev - cost) /. fMax[[2]]
85.
3D plot of the profit function over a range of x and y values; the optimal point is shown in red
plt3D = Plot3D[profit[x, y], {x, 0, 50}, {y, 0, 50}, PlotRange -> All, AxesLabel -> {"Umbrella Quantity", " Bathing Suit Quantity", "Profit"}, PlotLabel -> "Profit Function for Two-Product Optimization", Mesh -> 10, PlotStyle -> LightGreen]; pt3D1 = Graphics3D[{Red, PointSize -> Large, Point[ptMax]}]; Show[plt3D, pt3D1]
Profit function

Note that a "zoom in" to plot over a smaller range reflects the familiar parabolic quadratic form which will will confirm as negative definite further below.

Show[Plot3D[profit[x, y], {x, 0, 50}, {y, 0, 50}, PlotRange -> {{0, 20}, {0, 20}, {0, 100}}, AxesLabel -> {"Umbrella Quantity", " Bathing Suit Quantity", "Profit"}, FaceGrids -> {{{0, 0, 1}, {{8}, {11}}}, {{0, -1, 0}, {{8}, None}}, {{1, 0, 0}, {{11}, None}}}, FaceGridsStyle -> Directive[Blue, Dashed], Ticks -> {{8}, {11}, {1, 50, 100}}, PlotLabel -> "Profit Function for Two-Product Optimization", Mesh -> 10, PlotStyle -> LightGreen], pt3D1]
Zooming Profit function

Now we are ready to solve this optimization problem numerically and plot its solution. To implement matrix operations, we approach the problem by producing a table of the values the profit function takes for different quantities of our two products.

tbl2Prod = Table[profit[x, y], {x, 0, 50}, {y, 0, 50}]; Show[ListPlot3D[Transpose[tbl2Prod], PlotRange -> All, AxesLabel -> {"Umbrella Quantity", " Bathing Suit Quantity", "Profit"}, PlotLabel -> "Profit Function for Two-Product Optimization", Mesh -> 10, PlotStyle -> LightOrange], Graphics3D[{Red, PointSize -> Large, Point[ptMax]}]]
Note the dimensions of our matrix are driven by our choice of range and range increments specified in the table.
tbl2Prod // Dimensions
{51, 51}
In order to apply the conjugate gradient method we build a matrix A, denoted matA. Note that its maximum is the same as the optimal profit from our analytical solution
matA = Transpose[tbl2Prod]; Max[matA]
85
The optimum is found in Row 12, Column 9 of the matrix
Position[matA, Max[matA]]
{{12, 9}}
Note Maximum highlighted in red below
Map[If[# == 85, Style[#, {Bold, Red}], #] &, matA, {2}]
See the output on the Wolfram site
See matrix A in Mathematica notebook at ??????????????????????

Here is the code to execute the conjugate gradient algorithm in Wolfram language:

conjugateGradient[A_, b_, x0_, tol_ : $MachineEpsilon] :=
 Module[{x, residual, searchDirection, oldResidNorm, AsearchDirection, stepSize, newResidNorm, norm},
  (* Initialize the solution vector *)
  x = x0;
  (* Initialize residual vector *)
  residual = b - A . x;
  (* Initialize search direction vector *)
  searchDirection = residual;
  (* Norm function to compute the Euclidean norm *)
  norm[v_] := Sqrt[v . v];
  (* Compute initial residual norm *)
  oldResidNorm = norm[residual];
  (* Iterate until convergence *)
  While[oldResidNorm > tol,
   AsearchDirection = A . searchDirection;
   stepSize = oldResidNorm^2/(searchDirection . AsearchDirection);
   (* Update solution *)
   x = x + stepSize searchDirection;
   (* Update residual *)
   residual = residual - stepSize AsearchDirection;
   newResidNorm = norm[residual];
   (* Update search direction vector *)
   searchDirection = residual + (newResidNorm/oldResidNorm)^2 searchDirection;
   (* Update residual norm for next iteration *)
   oldResidNorm = newResidNorm;
   ];
  x
  ]
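As a quick hedged check of this routine (the 2×2 test system, the variable names, and the explicit tolerance below are ours, not part of the shop example), it can be compared against LinearSolve; we pass an explicit tolerance because the default $MachineEpsilon can be too strict in machine arithmetic.
testA = {{4., 1.}, {1., 3.}}; testb = {1., 2.};
conjugateGradient[testA, testb, {0., 0.}, 10.^-10]   (* conjugate gradient solution *)
LinearSolve[testA, testb]                            (* the two results agree *)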
   ■
End of Example 5
    A self-adjoint matrix is positive definite if and only if all its eigenvalues are positive. Then the condition number (which is ∥A−1∥ ∥A∥ for any consistent matrix norm) of A is given by \( \displaystyle \quad \kappa = \frac{\lambda_{\max}}{\lambda_{\min}} , \quad \) where λmax and λmin are the largest and smallest eigenvalues of A. Unfortunately, mathematicians are not consistent: in functional analysis a linear operator is called positive if all its eigenvalues are positive, and this definition does not require the matrix A (as an operator) to be self-adjoint (A = A*).

Throughout this section we assume that the matrix A ∈ ℝn×n is symmetric positive definite, that is, A = AT and ⟨x ∣ A x⟩ > 0 for x ≠ 0.

When A is real symmetric (A = A* = AT), we have ⟨x , A x⟩ = ⟨A x ∣ x⟩, which justifies the Dirac notation ⟨x ∣ A ∣ x⟩ for ⟨x ∣ A x⟩.

For any real symmetric positive definite square matrix A (A = A* = AT), we can define an A-inner product
\begin{equation} \label{EqCG.1} \langle \mathbf{x} \mid \mathbf{y} \rangle_A = \langle \mathbf{x} \mid {\bf A}\,\mathbf{y} \rangle = \langle {\bf A}^{\ast}\mathbf{x} \mid \mathbf{y} \rangle = \langle {\bf A}\,\mathbf{x} \mid \mathbf{y} \rangle \end{equation}
that satisfies regular inner product laws. Two vectors x and y are called A-orthogonal or A-conjugate if \[ \langle \mathbf{x} \mid \mathbf{y} \rangle_A = \langle \mathbf{x} \mid {\bf A}\,\mathbf{y} \rangle = 0 , \] which we denote by x ⊥A y.
To each A-inner product corresponds the A-norm that is usually referred to as an energy norm
\begin{equation} \label{EqCG.2} \| \mathbf{x} \|_A = +\sqrt{ \langle \mathbf{x} \mid \mathbf{x} \rangle_A } = + \sqrt{\langle \mathbf{x} \mid {\bf A}\,\mathbf{x} \rangle} = + \sqrt{\langle {\bf A}^{\ast} \mathbf{x} \mid \mathbf{x} \rangle} . \end{equation}
When A = I, the identity matrix, we get the usual Euclidean inner product and norm in ℝn or in any space isomorphic to it, such as the space of column vectors ℝn×1 or the space of row vectors ℝ1×n---it does not matter which of these three spaces is in use.
Let A be a symmetric real square matrix. A set of vectors S = {v₁, v₂, … , vn} is called A-orthogonal or A-conjugate with respect to matrix A if all A-inner products \[ \left\langle {\bf v}_i \mid \mathbf{A}\,{\bf v}_j \right\rangle = 0 \qquad \forall i \ne j . \]

This definition does not require the matrix A to be positive definite; however, in our further applications it necessarily will be. If A is the zero matrix (A = 0), then any set of vectors is conjugate. If A is the identity matrix (A = I), then conjugacy is equivalent to the usual notion of orthogonality.
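A tiny Wolfram Language illustration of the definition (the matrix m and the test vectors are our own made-up example):
m = {{2, 0}, {0, 3}};
{1, 0} . m . {0, 1}     (* = 0, so the two standard basis vectors are m-conjugate *)
{1, 1} . m . {1, -1}    (* = -1, so this pair of vectors is not m-conjugate *)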

Lemma 1: Let A be symmetric and positive definite real square matrix. If set of vectors S = {v₁, v₂, … , vn} is A-conjugate, then this set is linearly independent.
We prove this by contradiction. If one vector from the set S, say vₙ, were a linear combination of the other vectors, \[ \mathbf{v}_n = \alpha_1 \mathbf{v}_1 + \alpha_2 \mathbf{v}_2 + \cdots + \alpha_{n-1} \mathbf{v}_{n-1} , \] then, since A is positive definite, vₙ ≠ 0, and the set is A-conjugate, we would have \[ 0 < \left\langle \mathbf{v}_n \mid \mathbf{A}\,\mathbf{v}_n \right\rangle = \alpha_1 \left\langle \mathbf{v}_n \mid \mathbf{A}\,\mathbf{v}_1 \right\rangle + \alpha_2 \left\langle \mathbf{v}_n \mid \mathbf{A}\,\mathbf{v}_2 \right\rangle + \cdots + \alpha_{n-1} \left\langle \mathbf{v}_n \mid \mathbf{A}\,\mathbf{v}_{n-1} \right\rangle = 0, \] a contradiction.
   
Example 6: A positive definite matrix that is not symmetric; the complex case.
   ■
End of Example 6
   

Motivation

The conjugate gradient method is the algorithm that chooses its search directions by A-orthogonalizing the residual against all previous search directions. It is based on two fundamental ideas. The starting point is to replace the linear algebra problem of solving the system of linear equations A x = b by an equivalent optimization problem: the solution x of the linear system is the unique minimizer of a special quadratic function.

Although there exist infinitely many functions having a minimizer x that satisfies the linear equation A x = b, it is convenient to consider a quadratic function, mainly because its gradient is equal to A x − b.

Theorem 1: The vector x is a solution to the positive definite linear system A x = b if and only if x produces the minimal value of the quadratic function
\begin{equation} \label{EqCG.3} \phi ({\bf u}) = \frac{1}{2}\,\left\langle {\bf u} \mid {\bf A}\,{\bf u} \right\rangle - \left\langle {\bf u} \mid {\bf b} \right\rangle . \end{equation}
The minimum value of function ϕ is \( \displaystyle \quad \phi \left( {\bf A}^{-1} {\bf b} \right) = -\frac{1}{2}\,\langle {\bf b} \mid {\bf A}^{-1} {\bf b} \rangle . \)
Let x and v ≠ 0 be fixed column vectors of size n and let λ be a real number. We have \begin{align*} \phi \left( \mathbf{x} + \lambda\, \mathbf{v} \right) &= \frac{1}{2}\,\left\langle \mathbf{x} + \lambda\, \mathbf{v} \mid {\bf A} \left( \mathbf{x} + \lambda\, \mathbf{v} \right) \right\rangle - \left\langle \mathbf{x} + \lambda\, \mathbf{v} \mid {\bf b} \right\rangle \\ &= \frac{1}{2}\,\left\langle \mathbf{x} \mid {\bf A} \,\mathbf{x} \right\rangle + \frac{\lambda}{2}\,\left\langle \mathbf{v} \mid {\bf A} \,\mathbf{x} \right\rangle + \frac{\lambda}{2}\,\left\langle \mathbf{x} \mid {\bf A} \,\mathbf{v} \right\rangle + \frac{\lambda^2}{2}\,\left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle - \left\langle \mathbf{x} \mid \mathbf{b} \right\rangle - \lambda \left\langle \mathbf{v} \mid \mathbf{b} \right\rangle \\ &= \frac{1}{2}\,\left\langle \mathbf{x} \mid {\bf A} \,\mathbf{x} \right\rangle - \left\langle \mathbf{x} \mid \mathbf{b} \right\rangle + \lambda \left\langle \mathbf{v} \mid {\bf A} \,\mathbf{x} \right\rangle - \lambda \left\langle \mathbf{v} \mid \mathbf{b} \right\rangle + \frac{\lambda^2}{2}\,\left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle . \end{align*} Hence, \[ \phi \left( \mathbf{x} + \lambda\, \mathbf{v} \right) = \phi \left( \mathbf{x} \right) - \lambda \left\langle \mathbf{v} \mid \mathbf{b} - {\bf A}\,\mathbf{x} \right\rangle + \frac{\lambda^2}{2}\,\left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle . \tag{E1.1} \] With x and v fixed we can define the quadratic function φ of λ by \[ \varphi (\lambda ) = \phi \left( \mathbf{x} + \lambda\, \mathbf{v} \right) . \] Then φ attains its minimal value where its derivative is zero, because the coefficient of λ², namely \( \tfrac{1}{2}\left\langle \mathbf{v} \mid {\bf A}\,\mathbf{v} \right\rangle \), is positive. Since \[ \varphi' (\lambda ) = -\left\langle \mathbf{v} \mid \mathbf{b} - {\bf A}\,\mathbf{x} \right\rangle + \lambda \left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle , \] the minimum occurs when λ = λ* with \[ \lambda^{\ast} = \frac{\left\langle \mathbf{v} \mid \mathbf{b} - {\bf A}\,\mathbf{x} \right\rangle }{\left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle} = \frac{\left\langle \mathbf{v} \mid \mathbf{b} \right\rangle - \left\langle \mathbf{v} \mid \mathbf{x} \right\rangle_A}{\| \mathbf{v} \|^2_{A}}. \] From Eq.(E1.1), we get \begin{align*} \varphi \left( \lambda^{\ast} \right) &= \phi \left( \mathbf{x} + \lambda^{\ast} \mathbf{v} \right) \\ &= \phi \left( \mathbf{x} \right) - \lambda^{\ast} \left\langle \mathbf{v} \mid \mathbf{b} - {\bf A}\,\mathbf{x} \right\rangle + \frac{\left(\lambda^{\ast}\right)^2}{2} \left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle \\ &= \phi \left( \mathbf{x} \right) - \frac{\left\langle \mathbf{v} \mid \mathbf{b} - {\bf A}\,\mathbf{x} \right\rangle }{\left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle} \left\langle \mathbf{v} \mid \mathbf{b} - {\bf A}\,\mathbf{x} \right\rangle + \left( \frac{\left\langle \mathbf{v} \mid \mathbf{b} - {\bf A}\,\mathbf{x} \right\rangle }{\left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle} \right)^2 \frac{\left\langle \mathbf{v} \mid {\bf A} \,\mathbf{v} \right\rangle}{2} \\ &= \phi \left( \mathbf{x} \right) - \frac{\left\langle \mathbf{v} \mid \mathbf{b} - {\bf A}\,\mathbf{x} \right\rangle^2 }{2\,\| \mathbf{v} \|^2_{A}} . \end{align*} So for any vector v ≠ 0, we have     ϕ(x + λ*v) < ϕ(x)     unless ⟨v ∣ b − A x⟩ = 0, in which case ϕ(x + λ*v) = ϕ(x).

Suppose x* satisfies A x* = b. Then ⟨vbA x*⟩ = 0 for any vector v, and ϕ(x) cannot be made any smaller than ϕ(x*). Thus, x* minimizes ϕ.

On the other hand, suppose that x* is a vector that minimizes ϕ. Then for any vector v, we have ϕ(x* + λ*v) ≥ ϕ(x*). Thus, ⟨vbA x*⟩ = 0. This implies that bA x* = 0 and consequently, that A x* = b.
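A hedged numerical check of Theorem 1 in the Wolfram Language (the 2×2 matrix m and the vector bb are illustrative assumptions of ours):
m = {{4., 1.}, {1., 3.}}; bb = {1., 2.};
phi[u_] := u . m . u/2 - u . bb;
FindMinimum[phi[{u1, u2}], {{u1, 0}, {u2, 0}}]     (* minimal value of phi and the minimizer *)
{LinearSolve[m, bb], -bb . LinearSolve[m, bb]/2}   (* A^-1 b and -<b | A^-1 b>/2, for comparison *)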

   
Example 7: We consider the following function \[ f\left( x_1 , x_2 \right) = \left( x_1 -1 \right)^2 + \left( x_2 -1 \right)^2 + \left( x_1 -1 \right)^2 \left( x_1 -2 \right)^2 + \left( x_2 -1 \right)^2 \left( x_2 -3 \right)^2 . \] It has the global minimum at x₁ = 1, x₂ = 1, but its gradient \[ \nabla\,f\left( x_1 , x_2 \right) = 2 \left( x_1 -1 + (x_1 -2)^2 (x_1 -1) + (x_1 -1)^2 (x_1 -2) ,\ x_2 -1 + (x_2 -3)^2 (x_2 -1) + (x_2 -1)^2 (x_2 - 3) \right) \] has nothing in common with any linear system.
f[x1_, x2_] = (x1 - 1)^2 + (x2 - 1)^2 + (x1 - 2)^2 *(x1 - 1)^2 + (x2 - 1)^2 *(x2 - 3)^2 ; Grad[f[x1, x2], {x1, x2}]
{2 (-1 + x1) + 2 (-2 + x1)^2 (-1 + x1) + 2 (-2 + x1) (-1 + x1)^2, 2 (-1 + x2) + 2 (-3 + x2)^2 (-1 + x2) + 2 (-3 + x2) (-1 + x2)^2}
We plot the function
Plot3D[f[x, y], {x, -6, 10}, {y, -6, 10}]
   ■
End of Example 7
    In many applications, the quadratic function
\[ \phi (\mathbf{x}) = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n a_{i,j} x_i x_j - \sum_{i=1}^n x_i b_i , \qquad \mathbf{x} \in \mathbb{R}^n , \]
resembles the energy of a mechanical system; for instance, the kinetic energy of n particles with masses mi is the quadratic function \( \displaystyle \quad \frac{1}{2}\,\sum_{i=1}^n m_i v_i^2 \quad \) of their velocities vi. The function ϕ may be considered a generalization of such energies, similar to Lyapunov functions in the theory of dynamical systems. The minimizer x* of the function ϕ is the point where the gradient of the function is equal to zero. Direct calculation gives
\[ \nabla\,\phi \left( {\bf x} \right) = \mathbf{A}\,\mathbf{x} - \mathbf{b} \quad \Longrightarrow \quad \nabla\,\phi \left( {\bf x}^{\ast} \right) = \mathbf{A}\,\mathbf{x}^{\ast} - \mathbf{b} = 0 , \]
and its residual vector is
\[ \mathbf{r} = \mathbf{b} - {\bf A}\,{\bf x} = - \nabla \phi = - \left( \frac{\partial \phi}{\partial x_1} \left( \mathbf{x} \right) , \frac{\partial \phi}{\partial x_2} \left( \mathbf{x} \right) , \ldots , \frac{\partial \phi}{\partial x_n} \left( \mathbf{x} \right) \right) . \]
So the solution of the matrix/vector equation A x = b corresponds to the lowest energy state, or equilibrium, of the system described by function ϕ.
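The gradient formula can be verified symbolically; here is a short sketch for a generic 2×2 symmetric matrix (all names in it are our own):
mm = {{a11, a12}, {a12, a22}}; xx = {x1, x2}; bb = {b1, b2};
phiSym = xx . mm . xx/2 - xx . bb;
Simplify[Grad[phiSym, xx] - (mm . xx - bb)]   (* returns {0, 0}, confirming that Grad[phi] = A x - b *)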

Also, this linear algebra problem is equivalent to minimizing the A-norm of the error e = x* − x:
\begin{align*} \phi ({\bf x}) &= \frac{1}{2} \left\langle \mathbf{x} \mid {\bf A}\,\mathbf{x} \right\rangle - \left\langle \mathbf{x} \mid \mathbf{b} \right\rangle = \frac{1}{2} \left\langle \mathbf{x} \mid {\bf A}\,\mathbf{x} \right\rangle - \left\langle \mathbf{x} \mid {\bf A}\,\mathbf{x}^{\ast} \right\rangle \\ &= \frac{1}{2} \left\langle \mathbf{x} \mid {\bf A}\,\mathbf{x} \right\rangle - \left\langle \mathbf{x} \mid {\bf A}\,\mathbf{x}^{\ast} \right\rangle + \frac{1}{2} \left\langle \mathbf{x}^{\ast} \mid {\bf A}\,\mathbf{x}^{\ast} \right\rangle - \frac{1}{2} \left\langle \mathbf{x}^{\ast} \mid {\bf A}\,\mathbf{x}^{\ast} \right\rangle \\ &= \frac{1}{2} \left\langle \left( \mathbf{x} - \mathbf{x}^{\ast} \right) \mid {\bf A} \left( \mathbf{x} - \mathbf{x}^{\ast} \right) \right\rangle - \frac{1}{2} \left\langle \mathbf{x}^{\ast} \mid {\bf A}\,\mathbf{x}^{\ast} \right\rangle , \end{align*}
where x* = A−1b is the true solution of linear equation A x = b. Hence,
\begin{equation} \label{EqCG.4} \phi ({\bf x}) = \frac{1}{2} \left\| {\bf x}^{\ast} - {\bf x} \right\|_A^2 - \frac{1}{2} \left\| {\bf x}^{\ast}\right\|_A^2 . \end{equation}
The last term in the identity above is independent of x, so ϕ is minimized exactly when the error is minimized. Since A is positive definite, this term \( \displaystyle \quad \frac{1}{2} \left\| {\bf x} - {\bf x}^{\ast} \right\|_A^2 = \frac{1}{2} \left\langle \left( \mathbf{x} - \mathbf{x}^{\ast} \right) \mid {\bf A} \left( \mathbf{x} - \mathbf{x}^{\ast} \right) \right\rangle \quad \) is positive unless xx* = 0.

The formula \eqref{EqCG.3} for ϕ shows that the same x that minimizes ϕ(x) also serves as the solution to the linear system of equations A x = b. The uniqueness of this solution is guaranteed by the matrix being symmetric and positive definite.

Having reduced the linear algebra problem A x = b to the equivalent unconstrained minimization of the function ϕ, we turn our attention to a more sophisticated approach for determining the minimizer x of ϕ. In order to achieve this objective, we start with the assumption that a list or array (so the order matters) of n nonzero mutually A-conjugate vectors is known, S = [d(0), d(1), … , d(n-1)]. We postpone to the next section the discussion of the case when this list contains fewer elements than the dimension of the problem. If the vectors in the list S satisfy the A-orthogonality condition

\[ \left\langle \mathbf{d}^{(k)} \mid \mathbf{A}\,\mathbf{d}^{(i)} \right\rangle = \left\langle \mathbf{d}^{(k)} \mid \mathbf{d}^{(i)} \right\rangle_A = 0 \qquad \mbox{when}\quad k\ne i . \]
then the set S is said to be A-orthogonal. We also abbreviate their mutual A-orthogonality as \[ \mathbf{d}^{(0)} \perp_A \mathbf{d}^{(1)} \perp_A \cdots \perp_A \mathbf{d}^{(n-1)} . \] Using the A-orthogonality of the list S, we can expand the true solution x* of the system of equations A x = b as
\begin{equation} \label{EqCG.5} \mathbf{x}^{\ast} = \mathbf{A}^{-1} \mathbf{b} = \alpha_0 \mathbf{d}^{(0)} +\alpha_1 \mathbf{d}^{(1)} + \cdots + \alpha_{n-1} \mathbf{d}^{(n-1)} . \end{equation}
To determine the coefficients αk (k = 0, 1, … , n−1) of this expansion, we apply A to both sides of Eq.\eqref{EqCG.5} and take the inner product with d(k) for some particular index k. This yields
\[ \left\langle \mathbf{d}^{(k)} \mid \mathbf{A}\,\mathbf{x}^{\ast} \right\rangle = \sum_{i=0}^{n-1} \alpha_i \left\langle \mathbf{d}^{(k)} \mid \mathbf{A}\,\mathbf{d}^{(i)} \right\rangle = \alpha_k \left\langle \mathbf{d}^{(k)} \mid \mathbf{A}\,\mathbf{d}^{(k)} \right\rangle \]
because the vectors from the array S are mutually conjugate. The conjugacy of the vectors in the list S allows us to find the values of the coefficients
\[ \alpha_i = \frac{\left\langle \mathbf{d}^{(i)} \mid \mathbf{A}\,\mathbf{x}^{\ast} \right\rangle}{\left\langle \mathbf{d}^{(i)} \mid \mathbf{A}\,\mathbf{d}^{(i)} \right\rangle} = \frac{\left\langle \mathbf{d}^{(i)} \mid \mathbf{b} \right\rangle}{\left\| {\bf d}^{(i)} \right\|^2_A} , \qquad i=0,1,2,\ldots , n-1. \]
Hence, we rewrite equation \eqref{EqCG.5} in explicit form:
\[ \mathbf{x}^{\ast} = \mathbf{A}^{-1} \mathbf{b} = \sum_{i=0}^{n-1} \frac{\left\langle \mathbf{d}^{(i)} \mid \mathbf{b} \right\rangle}{\left\| {\bf d}^{(i)} \right\|^2_A} \, \mathbf{d}^{(i)} \]
that does not contain the unknown vector x*. The disadvantage of evaluating x* according to Eq.\eqref{EqCG.5} is now clear: it requires a lot of memory, especially when n is large. In practice, the vectors in the array S are not known in advance and must be determined by some procedure, for instance, the conjugate Gram--Schmidt algorithm. This leads to an impractical and numerically unstable full-history procedure that keeps all previous steps (the old search vectors) in memory. A remedy that avoids the instability of maintaining conjugacy of the vectors in the list S is to turn the summation problem into an iteration problem. A one-step iteration scheme was proposed in 1952 by Hestenes & Stiefel:
\begin{equation} \label{EqCG.6} \mathbf{x}^{(i+1)} = \mathbf{x}^{(i)} + \alpha_i {\bf d}^{(i)} , \qquad i=0,1,\ldots , n-1; \end{equation}
where x(0) is an arbitrary vector, the scalar αi is the step size, and the d(i) are referred to as the search direction or descent direction vectors. For the residuals r(k) = b − A x(k), we have a similar recurrence
\[ \mathbf{r}^{(i+1)} = \mathbf{r}^{(i)} - \alpha_i {\bf A}\,{\bf d}^{(i)} , \qquad i=0,1,\ldots , n-1. \]
At each iterative step, only information from the previous step is used to define the next iterate according to formula \eqref{EqCG.6}. Therefore, at the end of the recursive procedure, we get the required solution x(n) = x* (provided that all calculations are performed in exact arithmetic, with no roundoff).
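As a small sanity check of expansion \eqref{EqCG.5} (equivalently, of running the recurrence to completion), note that the eigenvectors of a symmetric positive definite matrix form a ready-made A-conjugate set, so summing the expansion directly must reproduce A−1b; the matrix m, vector bb, and all names in this sketch are our own illustrative assumptions.
m = {{4., 1.}, {1., 3.}}; bb = {1., 2.};
vecs = Eigenvectors[m];     (* eigenvectors of a symmetric matrix are mutually A-conjugate *)
Sum[(vecs[[i]] . bb)/(vecs[[i]] . m . vecs[[i]]) vecs[[i]], {i, Length[bb]}]
LinearSolve[m, bb]          (* the two results agree *)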

As is seen from the one-step nonstationary inhomogeneous recurrence \eqref{EqCG.6}, the conjugate gradient algorithm generates a sequence of vectors, {x(k)}, that always converges (when all calculations are done in exact arithmetic) to the true solution x* of the linear system A x = b. An obvious question arises: why does recurrence \eqref{EqCG.6} always converge? The answer follows from the observation that the list S forms a basis of the n-dimensional vector space ℝn×1 containing x*.

However, recurrence \eqref{EqCG.6} contains two unknown ingredients: the parameter αi and the vector d(i), which is called the search direction or descent direction vector. Therefore, the conjugate gradient method essentially involves two phases: choosing a descent direction and picking a point along that direction (which is identified by the parameter αi).

Theorem 2: Let A ∈ ℝn×n be positive definite, and let S = [d(0), d(1), … , d(n-1)] be an A-orthogonal list of nonzero vectors. Then the step size in formula \eqref{EqCG.6} is \[ \alpha_i = \frac{\left\langle \mathbf{d}^{(i)} \mid {\bf b} - {\bf A}\, \mathbf{x}^{(i)} \right\rangle}{\left\langle \mathbf{d}^{(i)} \mid {\bf A}\, \mathbf{d}^{(i)} \right\rangle} = \frac{\left\langle \mathbf{d}^{(i)} \mid \mathbf{r}^{(i)} \right\rangle}{\left\| \mathbf{d}^{(i)} \right\|_A^2} , \qquad i = 0,1,\ldots , n-1 . \]
The step size αi must be chosen so that the energy norm of the error, ∥e(i)∥A, is minimized along the descent direction, which is equivalent to determining a minimizer of the function ϕ. If we fix the vectors x = x(i) and d = d(i), this amounts to minimizing the function φ(α) = ϕ(x + αd) along a one-dimensional line. Taking the derivative, we obtain \[ \frac{\text d}{{\text d}\alpha}\,\varphi (\alpha ) = \nabla \phi (\mathbf{x} + \alpha\mathbf{d}) \bullet \frac{\text d}{{\text d}\alpha} \left( \mathbf{x} + \alpha\mathbf{d} \right) = \nabla \phi (\mathbf{x} + \alpha\mathbf{d}) \bullet \mathbf{d} . \] We know from previous observations that \[ \nabla \phi (\mathbf{x} ) = - \mathbf{r} = {\bf A}\, \mathbf{x} - \mathbf{b} . \] Hence, \[ \nabla \phi (\mathbf{x} + \alpha \mathbf{d}) = {\bf A}\left( \mathbf{x} + \alpha \mathbf{d} \right) - \mathbf{b} = - \mathbf{r} + \alpha {\bf A} \,\mathbf{d} , \] where r = b − A x is the residual vector. Therefore, \[ \varphi' \left( \alpha \right) = - \left\langle \mathbf{r}^{(i)} \mid \mathbf{d}^{(i)} \right\rangle + \alpha \left\langle \mathbf{d}^{(i)} \mid {\bf A}\,\mathbf{d}^{(i)} \right\rangle = 0 . \] This leads to the required formula.
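A quick numerical confirmation of this formula (the 2×2 example and all names below are our own):
m = {{4., 1.}, {1., 3.}}; bb = {1., 2.};
phi[u_] := u . m . u/2 - u . bb;
xk = {0., 0.}; dk = bb - m . xk;                 (* take the residual as a search direction *)
(dk . (bb - m . xk))/(dk . m . dk)               (* analytic step size from Theorem 2 *)
FindMinimum[phi[xk + a dk], {a, 0}]              (* the minimizing a coincides with it *)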

The main challenge in determining the search vectors d(i) is how to maintain conjugacy of the vectors in the list S automatically, without keeping all the information from previous steps in memory (storing the full history is exactly what we want to avoid).

Theorem 3: Let A ∈ ℝn×n be positive definite, let x ∈ ℝn×1, and let S be a subspace of ℝn×1. Then there is a unique s ∈ S such that \[ \| \mathbf{x} - \mathbf{s} \|_A = \min_{\mathbf{u} \in S} \| \mathbf{x} - \mathbf{u} \|_A . \] The vector x − s is A-orthogonal to S, so (x − s) ⊥A u for every u ∈ S.
Let β = [v₁, v₂, … , vn] be an A-orthogonal basis of the vector space ℝn×1 such that its first k vectors [v₁, v₂, … , vk] constitute a basis of the vector subspace S ⊆ ℝn×1. Thus the subspace S and its A-orthogonal complement S⊥ are spanned by disjoint subsets: \[ S = \mbox{span} \left[\mathbf{v}_1 , \mathbf{v}_2 , \ldots , \mathbf{v}_k \right] , \quad S^{\perp} = \mbox{span}\left[\mathbf{v}_{k+1} , \mathbf{v}_{k+2} , \ldots , \mathbf{v}_n \right] . \] Such an A-orthogonal basis exists because any basis (linearly independent list of vectors) can be transformed into an A-conjugate basis using the conjugate Gram--Schmidt process.

Let x ∈ ℝn×1 be an arbitrary column vector. Then it can be expressed in a unique way as a linear combination of the elements of the basis β: \[ \mathbf{x} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k + c_{k+1} \mathbf{v}_{k+1} + \cdots + c_n \mathbf{v}_n . \] Let \[ \mathbf{s} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k \in S \] and \[ \mathbf{s}^{\perp} = c_{k+1} \mathbf{v}_{k+1} + c_{k+2} \mathbf{v}_{k+2} + \cdots + c_n \mathbf{v}_n \in S^{\perp} . \] Therefore, any vector x can be uniquely written as x = s + s⊥ with s ∈ S and s⊥ ∈ S⊥.

To see that this decomposition is unique, let us assume the opposite: x = r + r⊥ with r ∈ S and r⊥ ∈ S⊥. Then the equation x = s + s⊥ = r + r⊥ implies that z = s − r = r⊥ − s⊥. Thus, z ∈ S ∩ S⊥ = {0} and we conclude that z = 0, which means s = r and s⊥ = r⊥.

The unique elements s and s⊥ are called the A-orthogonal projections of x onto S and S⊥, respectively. Since the vectors s and s⊥ are A-orthogonal, the conjugate Pythagorean theorem holds: \[ \left\| \mathbf{s} + \mathbf{s}^{\perp} \right\|^2_A = \| \mathbf{s} \|_A^2 + \left\| \mathbf{s}^{\perp} \right\|^2_A . \] Indeed, \[ \left\| \mathbf{s} + \mathbf{s}^{\perp} \right\|^2_A = \left\langle \mathbf{s} + \mathbf{s}^{\perp} \mid \mathbf{s} + \mathbf{s}^{\perp} \right\rangle_A = \left\langle \mathbf{s} \mid \mathbf{s} \right\rangle_A + \left\langle \mathbf{s}^{\perp} \mid \mathbf{s}^{\perp} \right\rangle_A , \] because ⟨s ∣ s⊥⟩A = ⟨s⊥ ∣ s⟩A = 0.

Now suppose that there exists another w ∈ S such that x − w is A-orthogonal to S. Then the decomposition x = w + (x − w) would violate the uniqueness of the decomposition of x into two parts, one from S and another from its complement S⊥; hence w = s. It remains to check that s minimizes the distance. For any u ∈ S we can write x − u = (x − s) + (s − u), where x − s ∈ S⊥ and s − u ∈ S, so by the conjugate Pythagorean theorem \[ \left\| \mathbf{x} - \mathbf{u} \right\|_A^2 = \left\| \mathbf{x} - \mathbf{s} \right\|_A^2 + \left\| \mathbf{s} - \mathbf{u} \right\|_A^2 . \] As u runs through S, the term ∥x − s∥²A remains constant, while the term ∥s − u∥²A varies but remains strictly positive except when u = s, where it vanishes. Thus, ∥x − u∥²A is minimized when and only when u = s.

   
Example 8:
   ■
End of Example 8

A naïve approach is to choose the n coordinate directions in ℝn as search directions, that is, d(k) = ek, where ek is the kth standard orthonormal basis vector for k = 1, 2, … , n. With exact line searches, n such steps (one sweep through all coordinates) amount to one iteration of the Gauss–Seidel method.

The iterative method that chooses the residuals r(k) = b − A x(k) = −∇ϕ(x(k)) as the search directions is called the method of steepest descent. However, it may exhibit poor performance for functions ϕ whose graph is a long, narrow valley. In particular, the exact line search places the local minimizer x(k+1) at a point where the search line is tangent to a level set of ϕ. Consequently, the next search direction r(k+1), being orthogonal to the level set at x(k+1), must be orthogonal to the old search direction r(k). This geometry often causes the iterative method to take many short, inefficient switchbacks down to the valley floor, when a better choice of search directions could lead to a more direct descent. A small implementation sketch is given below.
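For comparison, here is a minimal steepest-descent sketch in the Wolfram Language (the function name, the iteration cap, and the tolerance are our own choices); on ill-conditioned matrices such as TN it typically needs far more iterations than the conjugate gradient method discussed below.
steepestDescent[A_, b_, x0_, maxIter_ : 1000, tol_ : 10.^-8] :=
 Module[{x = x0, r, alpha, k = 0},
  r = b - A . x;
  While[Sqrt[r . r] > tol && k < maxIter,
   alpha = (r . r)/(r . A . r);   (* exact line search along the residual direction *)
   x = x + alpha r;
   r = b - A . x;
   k++];
  {x, k}]                         (* returns the approximation and the iteration count *)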

The relationship between the linear system A x = b and the minimization of function ϕ(x) inspires a geometric interpretation as well as a physical one. From multivariable calculus, it follows that the residual vector r points in the direction of the steepest descent of ϕ, as illustrated for the case n = 2 in the following figure.

cp = ContourPlot[x^2 - x*y + y^2, {x, -1, 1}, {y, -1, 1}, ContourStyle -> Opacity[0.5]]
center = Graphics[{White, Disk[{0, 0}, 0.03]}]
ar = Graphics[{Red, Thickness[0.01], Arrowheads[0.08], Arrow[{{1, -0.55}, {0.23, 0.0}}]}]
Show[cp, center, ar]
Level sets of a function ϕ and search direction for its minimization.

From Eq.\eqref{EqCG.6}, it follows that

\[ \mathbf{x}^{(i+1)} = \mathbf{x}^{(0)} + \alpha_0 \mathbf{d}^{(0)} + \alpha_1 \mathbf{d}^{(1)} + \cdots + \alpha_i \mathbf{d}^{(i)} = \mathbf{x}^{(0)} + \sum_{j=0}^i \alpha_j \mathbf{d}^{(j)} . \]
If we denote by Si the vector space spanned by the first i+1 search vectors, {d(0), d(1), … , d(i)}, then each iterate x(i+1) lies in the affine space \( \displaystyle \quad \mathbf{x}^{(0)} + S_{i} , \ \) and the correction αid(i) added at step i is A-orthogonal to S_{i-1} because d(i) is conjugate to all previous search directions.

The energy function

\[ \phi ({\bf x}) = \frac{1}{2} \left\| {\bf x}^{\ast} - {\bf x} \right\|_A^2 - \frac{1}{2} \left\| {\bf x}^{\ast}\right\|_A^2 \tag{4} \]
tells us that minimizing ϕ is the same as minimizing the energy norm of the error. So at every step, we need to solve a one-dimensional minimization problem along the line spanned by the search vector d(i), according to Eq.\eqref{EqCG.6}. What makes the conjugate gradient method outstanding is that it implicitly retains information from past steps, so that solving a one-dimensional problem along a properly chosen descent direction actually minimizes ϕ over the whole subspace spanned by all the previous descent directions.

To identify the search directions, we need a definition.

A vector x(k) is said to be optimal with respect to a direction d ≠ 0 if \[ \phi \left( {\bf x}^{(k)} \right) \leqslant \phi \left( {\bf x}^{(k)} + \lambda\,{\bf d}\right) , \qquad \forall \lambda \in \mathbb{R} . \] If x(k) is optimal with respect to any direction in a vector space V, we say that x(k) is optimal with respect to V.
From the definition of optimality, it turns out that d must be orthogonal to the residual r(k). Indeed, from the definition, we conclude that ϕ(x(k) + λd) admits a local minimum along d for λ = 0, and thus the partial derivative of ϕ with respect to λ must vanish at λ = 0. Since
\[ \frac{\partial \phi}{\partial\lambda} \left( {\bf x}^{(k)} + \lambda\,{\bf d} \right) = \left\langle {\bf d} \mid {\bf A}\,{\bf x}^{(k)} - {\bf b}\right\rangle + \lambda \left\langle {\bf d} \mid {\bf A}\,{\bf d} \right\rangle , \]
we therefore have
\[ \left. \frac{\partial \phi}{\partial\lambda} \left( {\bf x}^{(k)} \right) \right\vert_{\lambda =0} = 0 \qquad \iff \qquad \left\langle {\bf d} \mid {\bf r}^{(k)} \right\rangle = 0 , \]
that is, d ⊥ r(k). Suppose
\[ {\bf x}^{(k+1)} = {\bf x}^{(k)} + {\bf q} \]
for some vector q. Let us assume that x(k) is optimal with respect to a direction d (thus, r(k) ⊥ d). Let us impose that x(k+1) is still optimal with respect to d, that is, r(k+1) ⊥ d. We obtain
\[ 0 = \left\langle {\bf d} \mid {\bf r}^{(k+1)} \right\rangle = \left\langle {\bf d} \mid {\bf r}^{(k)} - {\bf A}\,{\bf q} \right\rangle = - \left\langle {\bf d} \mid {\bf A}\,{\bf q} \right\rangle . \]
Therefore, in order to preserve optimality between successive iterates, the descent directions must be mutually A-orthogonal or A-conjugate, i.e.,
\[ \left\langle {\bf d} \mid {\bf A}\,{\bf q} \right\rangle = \left\langle {\bf d} \mid {\bf q} \right\rangle_A = 0 . \]
A method employing A-conjugate descent directions is called conjugate. The next step is how to generate automatically a sequence of conjugate directions. The conjugate gradient method of Hestenes and Stiefel chooses the search directions { d(k) } during the iterative process so that the residual vectors { r(k) } are mutually orthogonal. To achieve this, it is appropriate to apply the conjugate Gram--Schmidt process to the next residual vector by subtracting the orthogonal projection on the line generated by the descent vector:
\begin{equation} \label{EqCG.7} \mathbf{d}^{(i+1)} = \mathbf{r}^{(i+1)} - \beta_i {\bf d}^{(i)} , \qquad i=0,1,\ldots , n-1; \end{equation}
where the Gram--Schmidt coefficient βi is chosen to ensure that the new direction d(i+1) is A-conjugate to the previous one,
\begin{equation} \label{EqCG.8} \beta_i = \frac{\left\langle \mathbf{d}^{(i)} \mid {\bf A}\,\mathbf{r}^{(i+1)} \right\rangle}{\left\langle \mathbf{d}^{(i)} \mid {\bf A}\,\mathbf{d}^{(i)} \right\rangle} = \frac{\left\langle \mathbf{d}^{(i)} \mid \mathbf{r}^{(i+1)} \right\rangle_A}{\left\| \mathbf{d}^{(i)} \right\|^2_A} , \quad i = 0,1,2,\ldots , n . \end{equation}
Indeed,
\begin{align*} \left\langle \mathbf{d}^{(k+1)} \mid \mathbf{d}^{(k)} \right\rangle_A &= \left\langle \mathbf{r}^{(k+1)} - \beta_k \mathbf{d}^{(k)} \mid \mathbf{d}^{(k)} \right\rangle_A \\ &= \left\langle \mathbf{r}^{(k+1)} \mid \mathbf{d}^{(k)} \right\rangle_A - \beta_k \left\langle \mathbf{d}^{(k)} \mid \mathbf{d}^{(k)} \right\rangle_A = 0 . \end{align*}
Solving the latter with respect to βk, we obtain formula \eqref{EqCG.8}. Note that the residual vector r(k+1) is A-orthogonal to all previous descent directions except d(k).

The recurrence relation \eqref{EqCG.7} is what makes the conjugate gradient method so formidable: it secures A-orthogonality of all the search directions, not merely of two consecutive ones.


Conjugate Gradient Algorithm

The conjugate gradient method generates two sequences of vectors, x(k) and d(k), k = 0, 1, 2, …. The first sequence always converges (when all calculations are done with exact arithmetic) to the true solution x* = A−1b of the system A x = b. The second sequence is used to determine the iterates at every step
\begin{equation*} \mathbf{x}^{(i+1)} = \mathbf{x}^{(i)} + \alpha_i {\bf d}^{(i)} , \qquad i=0,1,\ldots , n-1. \tag{6} \end{equation*}
Here the vector d(i) is the search direction and the scalar αi is the step size.

At each iterative step, only one of these directions is used to define the next iterate according to formula \eqref{EqCG.6}.

The conjugate gradient method generates a finite set of A-orthogonal search directions     d(0), d(1), … , d(n-1) by picking, along each direction, a point of local minimum of ϕ:

\[ {\bf d}^{(0)} \perp_A {\bf d}^{(1)} \perp_A \ \cdots \ \perp_A {\bf d}^{(n-1)} . \]
This A-orthogonality condition is equivalent to conjugate orthogonality:
\[ \left\langle {\bf d}^{(i)} \mid {\bf d}^{(j)} \right\rangle_A = \left\langle {\bf d}^{(i)} \mid {\bf A}\,{\bf d}^{(j)} \right\rangle = 0 \quad \mbox{for} \quad i \ne j . \]

The search directions are A-orthogonal and nonzero as long as the residuals are linearly independent (equivalently, nonzero).

Now is the right time to appreciate the beauty of the CG algorithm proposed by Hestenes & Stiefel. It efficiently bypasses the full-history recurrence of the Gram--Schmidt process and converts it into a one-step iterative algorithm:


Conjugate Gradient Algorithm: 
x(0) is given 
r(0) = bA x(0)
d(0) = r(0) 
do until converged, for k = 0, 1, 2, …
   αk = ⟨r(k) ∣ d(k)⟩/⟨d(k) ∣ Ad(k)⟩     step length
   x(k+1) = x(k) + αkd(k)                 approximate solution
   r(k+1) = r(k) − αkA d(k)               residual
   if ∥r(k+1)∥ is less than some tolerance, then stop
   βk = ⟨r(k+1) ∣ A d(k)⟩/⟨d(k) ∣ A d(k)⟩       improvement of this step
   d(k+1) = r(k+1) − βkd(k)                     search direction
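
A hedged Wolfram Language transcription of this pseudocode (the name cgIterates and the default tolerance are our own choices); it records every iterate x(k) so that the progress of the method can be inspected. It can be applied, for instance, to the system considered in Example 9 below.
cgIterates[A_, b_, x0_, tol_ : 10.^-10] :=
 Module[{x = x0, r, d, alpha, beta, out},
  r = b - A . x; d = r; out = {x};
  While[Sqrt[r . r] > tol,
   alpha = (r . d)/(d . A . d);        (* step length *)
   x = x + alpha d;                    (* approximate solution *)
   r = r - alpha A . d;                (* residual *)
   beta = (r . (A . d))/(d . A . d);   (* improvement of this step *)
   d = r - beta d;                     (* new search direction *)
   AppendTo[out, x]];
  out]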

   
Example 9: We consider a linear system A x = b with \[ {\bf A} = \begin{bmatrix} 15&9&8&-6&-4 \\ 9&19&-3&-7&-3 \\ 8&-3&19&8&-10 \\ -6&-7&8&16&-4 \\ -4&-3&-10&-4&15 \end{bmatrix} , \qquad {\bf b} = \begin{pmatrix} 13 \\ -5 \\ 41 \\ 48 \\ 19 \end{pmatrix} , \tag{9.1} \] which has the exact solution x* = (x₁, x₂, x₃, x₄, x₅) = (1, 2, 3, 4, 5), as we check with Mathematica:

A = {{15, 9, 8, -6, -4}, {9, 19, -3, -7, -3}, {8, -3, 19, 8, -10}, {-6, -7, 8, 16, -4}, {-4, -3, -10, -4, 15}}; b = {13, -5, 41, 48, 19}; A . {1, 2, 3, 4, 5} == b
True

Since we want to apply conjugate gradient method, we need to check first whether matrix A is symmetric and positive definite. Using Mathematica, we find

A = {{15, 9, 8, -6, -4}, {9, 19, -3, -7, -3}, {8, -3, 19, 8, -10}, {-6, -7, 8, 16, -4}, {-4, -3, -10, -4, 15}}; Transpose[A] == A
True
N[Eigenvalues[A]]
{33.6019, 31.9488, 11.0237, 6.63606, 0.789482}

So Mathematica gives us a green light for CG utilization. To apply the conjugate gradient method, we pick up an initial guess x(0) = (0, 0, 0, 0, 0). Since b = (13, −5, 41, 48, 19), the initial residual is r(0) = b - A x(0) = b.

A = {{15, 9, 8, -6, -4}, {9, 19, -3, -7, -3}, {8, -3, 19, 8, -10}, {-6, -7, 8, 16, -4}, {-4, -3, -10, -4, 15}}; b = {13, -5, 41, 48, 19}; x0 = {0, 0, 0, 0, 0}; r0 = {13, -5, 41, 48, 19}; d0 = r0
{13, -5, 41, 48, 19}
Since the size of matrix A is small, we can find its inverse and then apply it to b.
Inverse[A]
Then, solving the given equation exactly, we get
Inverse[A].b
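Alternatively (a hedged aside of ours), the conjugateGradient routine defined in Example 5 can be called directly with a zero starting vector and an explicit tolerance:
conjugateGradient[N[A], N[b], {0., 0., 0., 0., 0.}, 10.^-10]   (* should return approximately {1., 2., 3., 4., 5.} *)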
To perform the first iteration, we need to calculate two parameters (α₀, β₀) and the search vector d(1): \[ \alpha_0 = \frac{\left\langle \mathbf{r}^{(0)} \,\mid\, \mathbf{d}^{(0)} \right\rangle}{\left\langle \mathbf{d}^{(0)} \,\mid\, {\bf A}\,\mathbf{d}^{(0)} \right\rangle} = \frac{2270}{44447} \approx 0.0510721 , \]
a0 = (r0 . d0)/(d0 . A . d0)
2270/44447
\[ \mathbf{x}^{(1)} = \mathbf{x}^{(0)} + \alpha_0 \mathbf{d}^{(0)} \approx \begin{pmatrix} 0.663937 \\ -0.25536 \\ 2.09395 \\ 2.45146 \\ 0.970369 \end{pmatrix} . \]
x1 = x0 + a0*d0
{2270/3419, -(11350/44447), 93070/44447, 108960/44447, 43130/44447}
\[ \mathbf{r}^{(1)} = \mathbf{r}^{(0)} - \alpha_0 {\bf A}\, \mathbf{d}^{(0)} \approx \begin{pmatrix} 7.17778 \\ 20.2296 \\ -14.7707 \\ -1.89741 \\ 37.0795 \end{pmatrix} . \]
r1 = r0 - a0*A . d0
{319031/44447, 69165/3419, -(50501/3419), -(84334/ 44447), 1648073/44447}
Next, \[ \beta_0 = \frac{\left\langle \mathbf{r}^{(1)} \,\mid\, {\bf A}\,\mathbf{d}^{(0)} \right\rangle}{\left\langle \mathbf{d}^{(0)} \,\mid\, {\bf A}\,\mathbf{d}^{(0)} \right\rangle} = -\frac{56497}{2171528}, \qquad \mathbf{d}^{(1)} = \mathbf{r}^{(1)} - \beta_0 \mathbf{d}^{(0)} = \frac{1}{542882} \begin{bmatrix} -706685 \\ 1994090 \\ -1355725 \end{bmatrix} . \]
b0 = (r1 . A . r0)/(r0 . A . r0)
-(56497/2171528)
d1 = r1 - b0*r0
{-(706685/542882), 997045/2171528, -(1355725/542882)}

Second step: \[ \alpha_1 = \frac{\left\langle \mathbf{r}^{(1)} \mid \mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf A}\, \mathbf{d}^{(1)} \right\rangle} = \frac{1201426}{2846565} , \] \[ \mathbf{x}^{(2)} = \mathbf{x}^{(1)} + \alpha_1 \mathbf{d}^{(1)} = \frac{1}{18666} \begin{bmatrix} 17063 \\ 57797 \\ -17399 \end{bmatrix} \approx \begin{bmatrix} 0.914122 \\ 3.09638 \\ -0.932123 \end{bmatrix} , \]

a1 = (r1 . d1)/(d1 . A . d1)
1201426/2846565
x2 = x1 + a1*d1
{17063/18666, 57797/18666, -(17399/18666)}
\[ \mathbf{r}^{(2)} = \mathbf{r}^{(1)} - \alpha_1 {\bf A}\,\mathbf{d}^{(1)} \approx \begin{bmatrix} -0.992467 \\ 2.47237 \\ 1.87196 \end{bmatrix} . \]
r2 = r1 - a1*B.d1
{-(21280908528/ 21442442023), 53013589200/21442442023, 40139382096/21442442023}
\[ \beta_1 = \frac{\left\langle \mathbf{r}^{(2)} \mid {\bf A}\,\mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf A}\,\mathbf{d}^{(1)} \right\rangle} \approx -0.0142995 . \]
b1 = (r2 . B . d1)/(d1 . B . d1)
-(15272836504900317307008/1068065037150271040464867)
\[ \mathbf{d}^{(2)} = \mathbf{r}^{(2)} - \beta_1 \mathbf{d}^{(1)} = \frac{1}{174209778} \begin{bmatrix} 9241295 \\ - 10371235 \\ -7304255 \end{bmatrix} . \]
d2 = r2 - b1*d1
{9241295/174209778, -(10371235/174209778), -(7304255/174209778)}

Step three: \[ \alpha_2 = \frac{\left \langle \mathbf{r}^{(2)} \mid \mathbf{d}^{(2)} \right\rangle}{\left \langle \mathbf{d}^{(2)} \mid {\bf A}\,\mathbf{d}^{(2)} \right\rangle} = \frac{9333}{5765} \approx 1.61891 , \]

a2 = (r2 . d2)/(d2 . A . d2)
9333/5765
\[ \mathbf{r}^{(3)} = \mathbf{x}^{(2)} + \alpha_2 \mathbf{d}^{(2)} = \left( 1, 3, -1 \right) . \]
x3 = x2 + a2*d2
{1, 3, -1}
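These recurrences are conveniently carried out by a short loop (a sketch added here for convenience; the names xk, rk, dk, ak, bk are ad hoc and not from the original notebook). Running it in exact rational arithmetic until the residual vanishes recovers the exact solution:
A = {{15, 9, 8, -6, -4}, {9, 19, -3, -7, -3}, {8, -3, 19, 8, -10}, {-6, -7, 8, 16, -4}, {-4, -3, -10, -4, 15}};
b = {13, -5, 41, 48, 19};
xk = {0, 0, 0, 0, 0}; rk = b; dk = rk;   (* initial guess, residual, and search direction *)
While[rk . rk != 0,
 ak = (rk . dk)/(dk . A . dk);           (* step size, formula (9) *)
 xk = xk + ak*dk;                        (* new iterate *)
 rknew = rk - ak*A . dk;                 (* new residual *)
 bk = (rknew . A . dk)/(dk . A . dk);    (* Gram-Schmidt coefficient *)
 dk = rknew - bk*dk;                     (* new search direction *)
 rk = rknew];
xk
{1, 2, 3, 4, 5}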
   ■
End of Example 9
   
Example 10: We consider a linear system \[ \begin{cases} 2\,x+3\,y -z &= 11 , \\ 3\,x + y -2\,z &= 9 , \\ x -3\,y + 4\,z &= 1, \end{cases} \tag{10.1} \] that has a nonsymmetric matrix \[ {\bf A} = \begin{bmatrix} 2&\phantom{-}3&-1 \\ 3&\phantom{-}1&-2 \\ 1&-3&\phantom{-}4 \end{bmatrix} \in \mathbb{R}^{3\times 3} , \qquad {\bf b} = \begin{bmatrix} 11 \\ 9 \\ 1 \end{bmatrix} \in \mathbb{R}^{3\times 1} . \] Then we can rewrite system (10.1) in the concise form A x = b. This system has the exact solution x = (x, y, z) = (3, 2, 1). Since matrix A of system (10.1) is not symmetric, we multiply both sides of equation A x = b by the transposed matrix AT. This yields a new system B x = f with a new symmetric matrix

\[ {\bf B} = {\bf A}^{\mathrm T} {\bf A} = \begin{bmatrix} 14&\phantom{-}6&-4 \\ \phantom{-}6& \phantom{-}19& -17 \\ -4& -17& \phantom{-}21 \end{bmatrix} , \qquad {\bf f} = {\bf A}^{\mathrm T} {\bf b} = \begin{bmatrix} \phantom{-}50 \\ \phantom{-}39 \\-25 \end{bmatrix} \in \mathbb{R}^{3 \times 1} . \tag{10.2} \]
A = {{2, 3, -1}, {3, 1, -2}, {1, -3, 4}};
B = Transpose[A] . A;
f = Transpose[A] . {11, 9, 1}
{50, 39, -25}

Since matrix B is symmetric, we need to check whether this matrix is positive definite. So using Mathematica, we find its eigenvalues

N[Eigenvalues[B]]
{39.0093, 12.2868, 2.70396}
\[ \lambda_1 \approx 39.0093 , \quad \lambda_2 \approx 12.2868 , \quad \lambda_3 \approx 2.70396 . \] Since the eigenvalues of matrix B are positive, we can apply the conjugate gradient method to solve the system B x = f.

To apply the conjugate gradient method, we pick an initial guess x(0) = (0, 0, 0). Since f = (50, 39, −25), the initial residual vector is r(0) = f - B x(0) = f = (50, 39, -25). The initial search vector is equal to the residual: d(0) = f.

B = {{14, 6, -4}, {6, 19, -17}, {-4, -17, 21}}; f = {50, 39, -25};
x0 = {0, 0, 0}; r0 = f; d0 = f
{50, 39, -25}
To perform the first iteration, we need to calculate two parameters (α₀, β₀) and the search vector d(1): \[ \alpha_0 = \frac{\left\langle \mathbf{r}^{(0)} \mid \mathbf{r}^{(0)} \right\rangle}{\left\langle \mathbf{d}^{(0)} \mid {\bf B}\,\mathbf{d}^{(0)} \right\rangle} = \frac{2323}{71787} , \qquad \mathbf{r}^{(1)} = \mathbf{r}^{(0)} - \alpha_0 {\bf B}\, \mathbf{d}^{(0)} = \frac{1}{71787} \begin{bmatrix} 1187368 \\ -605825 \\ 1429649 \end{bmatrix} . \]
d02 = Dot[r0, B . r0]
r02 = r0 . r0
143574
4646
a0 = r02/d02; r1 = r0 - a0*B . r0
{1187368/71787, -(605825/71787), 1429649/71787}
x1 = x0 + a0*r0
{116150/71787, 30199/23929, -(58075/71787)}
\[ \mathbf{x}^{(1)} = \mathbf{x}^{(0)} + \alpha_0 \mathbf{d}^{(0)} = \frac{1}{71787} \begin{bmatrix} 116150 \\ 90597 \\ -58075 \end{bmatrix} \approx \begin{bmatrix} 1.61798 \\ 1.26203 \\ -0.80899 \end{bmatrix} . \] Next, \[ \beta_0 = \frac{\left\langle \mathbf{r}^{(1)} \,\mid\, {\bf B}\,\mathbf{d}^{(0)} \right\rangle}{\left\langle \mathbf{d}^{(0)} \,\mid\, {\bf B}\,\mathbf{d}^{(0)} \right\rangle} = -\frac{274125625}{1717791123} \approx -0.15958, \] \[ \mathbf{d}^{(1)} = \mathbf{r}^{(1)} - \beta_0 \mathbf{d}^{(0)} = \begin{bmatrix} 14039603374/572597041 \\ -3805887050/1717791123 \\ 27356930296/1717791123 \end{bmatrix} \approx \begin{bmatrix} 24.5192 \\ -2.2156 \\ 15.9256 \end{bmatrix} . \]
b0 = (r1 . B . r0)/(r0 . B . r0)
-(274125625/1717791123)
d1 = r1 - b0*r0
{14039603374/572597041, -(3805887050/1717791123), 27356930296/1717791123}

Second step: \[ \alpha_1 = \frac{\left\langle \mathbf{r}^{(1)} \mid \mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf B}\, \mathbf{d}^{(1)} \right\rangle} = \frac{6559552080625}{99621585638858} \approx 0.0658447 , \] \[ \mathbf{x}^{(2)} = \mathbf{x}^{(1)} + \alpha_1 \mathbf{d}^{(1)} \approx \begin{bmatrix} 3.23244 \\ 1.11614 \\ 0.239629 \end{bmatrix} , \]

a1 = (r1 . d1)/(d1 . B . d1)
6559552080625/99621585638858
x2 = x1 + a1*d1
{12231417025/3783960357, 71798406164/64327326069, 15414664975/64327326069}
\[ \mathbf{r}^{(2)} = \mathbf{r}^{(1)} - \alpha_1 {\bf B}\,\mathbf{d}^{(1)} \approx \begin{bmatrix} -0.992467 \\ 2.47237 \\ 1.87196 \end{bmatrix} . \]
r2 = r1 - a1*B.d1
{-(21280908528/21442442023), 53013589200/21442442023, 40139382096/21442442023}
\[ \beta_1 = \frac{\left\langle \mathbf{r}^{(2)} \mid {\bf B}\,\mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf B}\,\mathbf{d}^{(1)} \right\rangle} \approx -0.0142995 , \]
b1 = (r2 . B . d1)/(d1 . B . d1); N[b1]
-0.0142995
\[ \mathbf{d}^{(2)} = \mathbf{r}^{(2)} - \beta_1 \mathbf{d}^{(1)} \approx \begin{bmatrix} -0.641854 \\ 2.44069 \\ 2.09969 \end{bmatrix} . \]
d2 = r2 - b1*d1
{-(17359440703215930000/27045783524100960737), 1122174285319906830000/459778319909716332529, 965391393082027230000/459778319909716332529}

Step three: \[ \alpha_2 = \frac{\left \langle \mathbf{r}^{(2)} \mid \mathbf{d}^{(2)} \right\rangle}{\left \langle \mathbf{d}^{(2)} \mid {\bf B}\,\mathbf{d}^{(2)} \right\rangle} = \frac{21442442023}{59211135000} \approx 0.362135 , \]

a2 = (r2 . d2)/(d2 . B . d2)
21442442023/59211135000
\[ \mathbf{x}^{(3)} = \mathbf{x}^{(2)} + \alpha_2 \mathbf{d}^{(2)} = \left( 3, 2, 1 \right) . \]
x3 = x2 + a2*d2
{3, 2, 1}
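As an extra check (not part of the original computation), this vector also solves the original nonsymmetric system (10.1):
A . x3 == {11, 9, 1}
True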
   ■
End of Example 10

Properties of CG-recurrence

All properties of the conjugate gradient algorithm are formulated for exact arithmetic because errors associated with machine arithmetic can destroy the conjugacy and orthogonality relationships. Hence, the final residual vector r(n) may never vanish in actual calculations. Moreover, in many settings, the size n of the problem is a large number, so we would like to stop iterating after a much smaller number of iterations, which motivates the derivation of error estimates for the conjugate gradient method.

Lemma 2: Let A ∈ ℝn×n be symmetric and positive definite. Let x(k+1) = x(k) + αkd(k) be obtained from an exact line search. Then residuals are A-orthogonal to the previous search vectors, that is, \begin{align} &\left\langle \mathbf{r}^{(i)} \,\mid \,{\bf A}\, \mathbf{d}^{(j)} \right\rangle =\left\langle \mathbf{r}^{(i)} \,\mid \,\mathbf{d}^{(j)} \right\rangle_A \notag \\ &= \begin{cases} \frac{1}{\alpha_i}\,\left\langle \mathbf{r}^{(i)} \,\mid \,\mathbf{r}^{(i)} \right\rangle , & \quad \mbox{if} \quad i = j , \\ \frac{-1}{\alpha_{i-1}}\,\left\langle \mathbf{r}^{(i)} \,\mid \, \mathbf{r}^{(i)} \right\rangle , & \quad \mbox{if} \quad i = j+1 , \\ 0 , & \quad \mbox{otherwise}. \end{cases} \label{EqCG.10} \end{align}
We use induction on the size of the indices i, j. When i, j ≤ 1, the proposition holds by construction of the initial step of the algorithm. Assume that the proposition is valid whenever i, j ≤ k for some integer k ≥ 1. We show that the statement is true for i, j ≤ k+1 by establishing the following claim: for j = 0, 1, 2, … , k, ⟨r(k+1) ∣ d(j)⟩ = 0.

Using the recurrence for residuals, r(k+1) = r(k) − αkA d(k), we obtain \[ \left\langle \mathbf{r}^{(k+1)} \,\mid\, \mathbf{d}^{(k)} \right\rangle = \left\langle \mathbf{r}^{(k)} - \alpha_k {\bf A}\,{\bf d}^{(k)} \,\mid\, \mathbf{d}^{(k)} \right\rangle = 0 \] because the step size αk is chosen according to formula (9) to satisfy this equation. For j < k, the same recurrence gives ⟨r(k+1) ∣ d(j)⟩ = ⟨r(k) ∣ d(j)⟩ − αk⟨A d(k) ∣ d(j)⟩ = 0, since the first term vanishes by the induction hypothesis and the second by the A-conjugacy of the search directions.

   
Example 11: We consider a linear system A x = b, where \[ \mathbf{A} = \begin{bmatrix} 873& -2162& 462 \\ -2162& 5361& -1146 \\ 462& -1146& 245 \end{bmatrix} , \qquad \mathbf{b} = \begin{pmatrix} -2065 \\ 5122 \\ -1095 \end{pmatrix} . \] We check with Mathematica that matrix A is positive definite.
A = {{873, -2162, 462}, {-2162, 5361, -1146}, {462, -1146, 245}};
N[Eigenvalues[A]]
{6478., 1., 0.000154369}
Fortunately, we don't need to check positivity of the eigenvalues because Mathematica has a dedicated command:
PositiveDefiniteMatrixQ[A]
True
Although Mathematica can easily solve this 3×3 system of linear equations,
b = {-2065, 5122, -1095};
LinearSolve[A, b]
{1, 2, 3}
we apply the CG method. Let us choose the initial guess to be zero.
x0 = {0, 0, 0};
r0 = b; d0 = r0;
The first iterative step starts with evaluating (using Mathematica) the step length:
a0 = (r0 . d0)/(d0 . A . d0)
15849067/102670250987
\[ \alpha_0 = \frac{\left\langle \mathbf{r}^{(0)} \mid \mathbf{d}^{(0)} \right\rangle}{\left\langle \mathbf{d}^{(0)} \mid {\bf A}\,\mathbf{d}^{(0)} \right\rangle} = \frac{15849067}{102670250987} \approx 0.000154369 . \] We also check the validity of Eq.(9):
a00 = (r0 . r0)/(d0 . A . d0); a00 == a0
True
Then we check formula (10) with i = j = 0:
r0 . A . d0 == (r0 . r0)/a0
True
Then the first iterate is \[ \mathbf{x}^{(1)} = \mathbf{x}^{(0)} + \alpha_0 \mathbf{d}^{(0)} = \frac{1}{102670250987} \begin{pmatrix} -32728323355 \\ 81178921174 \\ - 17354728365 \end{pmatrix} \approx \begin{pmatrix} -0.318771 \\ 0.790676 \\ -0.169034 \end{pmatrix} . \]
x1 = x0 + a0*d0
{-(32728323355/102670250987), 81178921174/102670250987, -(17354728365/102670250987)}
The first residual becomes \[ \mathbf{r}^{(1)} = \mathbf{r}^{(0)} - \alpha_0 {\bf A}\,\mathbf{d}^{(0)} \approx \begin{pmatrix} 0.822732 \\ 0.289035 \\ -0.199545 \end{pmatrix} . \]
r1 = r0 - a0*A . d0
{84470083578/102670250987, 29675341800/102670250987, -(20487325926/ 102670250987)}
The first Gram--Schmidt coefficient is \[ \beta_0 = \frac{\left\langle \mathbf{r}^{(1)} \mid {\bf A}\,\mathbf{d}^{(0)} \right\rangle}{\left\langle \mathbf{d}^{(0)} \mid {\bf A}\,\mathbf{d}^{(0)} \right\rangle} = -\frac{266121389171340}{10541180437733574474169} \approx -2.52459 \cdot 10^{-8} . \]
b0 = (r1 . A . d0)/(d0 . A . d0)
-(266121389171340/10541180437733574474169)
So \[ \mathbf{d}^{(1)} = \mathbf{r}^{(1)} - \beta_0 \mathbf{d}^{(0)} \approx \begin{pmatrix} 0.82268 \\ 0.289165 \\ -0.199573 \end{pmatrix} . \]
d1 = r1 - b0* d0
{8672015141177488174386/10541180437733574474169, 3048137864486347960080/10541180437733574474169, -(2103730297796034806262/10541180437733574474169)}
Second step. \[ \alpha_1 = \frac{\left\langle \mathbf{r}^{(1)} \mid \mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf A}\,\mathbf{d}^{(1)} \right\rangle} = \frac{ 151793054551281009706181}{151793002144194391021093} \approx 1 . \]
a1 = (r1 . d1)/(d1 . A . d1)
151793054551281009706181/151793002144194391021093
Then \[ \mathbf{x}^{(2)} = \mathbf{x}^{(1)} + \alpha_1 \mathbf{d}^{(1)} \approx \begin{pmatrix} 0.503909 \\ 1.07984 \\ -0.368606 \end{pmatrix} . \]
x2= x1 + a1*d1
{4826140276140607/9577409329154479, 10342079543239118/9577409329154479, -(3530293255022019/9577409329154479)}
The second residual becomes \[ \mathbf{r}^{(2)} = \mathbf{r}^{(1)} - \alpha_1 {\bf A}\,\mathbf{d}^{(1)} \approx \begin{pmatrix} 0.0000762762 \\ 0.000141937 \\ 0.000520081 \end{pmatrix} . \]
r2 = r1 - a1*A . d1
{730528396848/9577409329154479, 1359385088400/9577409329154479, 4981031308944/9577409329154479}
The second search vector is \[ \mathbf{d}^{(2)} = \mathbf{r}^{(2)} - \beta_1 \mathbf{d}^{(1)} \approx \begin{pmatrix} 0.000076581 \\ 0.000142044 \\ 0.000520007 \end{pmatrix} , \]
d2 = r2 - b1* d1
{7024524003971381624731137936/91726769458175247472493045761441, 13029213198372174293243479920/91726769458175247472493045761441, 47698594284556087515032455728/91726769458175247472493045761441}
where \[ \beta_1 = \frac{\left\langle \mathbf{r}^{(2)} \mid {\bf A}\,\mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf A}\,\mathbf{d}^{(1)} \right\rangle} \approx -3.70446 \cdot 10^{-7} . \]
b1 = (r2 . A . d1)/(d1 . A . d1)
-(538548688969937441019470764111392/\ 1453783714836173194933122939307144425547)
We check the validity of formula (10) with Mathematica. With i = j = 2, we have
r2.A.d2 == (r2.r2)/a2
True
For i = 2 and j = 1, we have
r2.A.d1 == -(r2.r2)/a1
True
For i = 1 and j = 0, we have
r1.A.d0 == -(r1.r1)/a0
True
However, for i = 2 and j = 0, we have
r2.A.d0 == 0
True
The third iterative step. \[ \alpha_2 = \frac{\left\langle \mathbf{r}^{(2)} \mid \mathbf{d}^{(2)} \right\rangle}{\left\langle \mathbf{d}^{(2)} \mid {\bf A}\,\mathbf{d}^{(2)} \right\rangle} = \frac{9577409329154479}{1478452162063} \approx 6478 . \]
a2 = (r2 . d2)/(d2 . A . d2)
9577409329154479/1478452162063
\[ \mathbf{x}^{(3)} = \mathbf{x}^{(2)} + \alpha_2 \mathbf{d}^{(2)} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} . \]
x3= x2 + a2*d2
{1, 2, 3}
Since the residual vector r(3) is zero, we stop the CG iteration.    ■
End of Example 11
Corollary 1: Let A ∈ ℝn×n be symmetric and positive definite. Let x(k+1) = x(k) + αkd(k) be obtained from an exact line search. Then the error vectors e(k) = x − x(k) = A−1b − x(k) satisfy the recurrence \[ \mathbf{e}^{(k+1)} = \mathbf{e}^{(k)} - \alpha_k \mathbf{d}^{(k)} , \qquad k =0, 1, 2 , \ldots ; \] because A e(k+1) = r(k+1), k = 0, 1, …. The error at each step is conjugate to all of the previous search directions: \[ \mathbf{e}^{(j)} \perp_A \mathbf{d}^{(i)} , \quad \mbox{that is,} \quad \left\langle \mathbf{e}^{(j)} \mid {\bf A}\,\mathbf{d}^{(i)} \right\rangle = 0 , \qquad \mbox{for} \quad i = 0, 1, \ldots , j-1 . \]
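Both claims are easy to confirm numerically. The following sketch (added here for illustration; it simply reruns the first conjugate gradient step on the data of Example 11) checks the recurrence for the errors, the identity A e(1) = r(1), and the conjugacy of e(1) with d(0):
A = {{873, -2162, 462}, {-2162, 5361, -1146}, {462, -1146, 245}};
b = {-2065, 5122, -1095};
xexact = LinearSolve[A, b];            (* exact solution {1, 2, 3} *)
x0 = {0, 0, 0}; r0 = b; d0 = r0;
a0 = (r0 . d0)/(d0 . A . d0);
x1 = x0 + a0*d0; r1 = r0 - a0*A . d0;
e0 = xexact - x0; e1 = xexact - x1;    (* error vectors *)
{e1 == e0 - a0*d0, A . e1 == r1, e1 . A . d0 == 0}
{True, True, True}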

   
Example 12:
   ■
End of Example 12
Lemma 3: If A ∈ ℝn×n is symmetric and positive definite, then distinct search vectors are A-conjugate: \[ \mathbf{d}^{(i)} \perp_A \mathbf{d}^{(j)} , \quad \mbox{that is,} \quad \left\langle \mathbf{d}^{(i)} \,\mid\, {\bf A}\,\mathbf{d}^{(j)} \right\rangle = 0 \quad\mbox{for} \quad i\ne j . \]
We use induction on the size of the indices i, j. When i, j ≤ 1, the proposition holds by construction of the initial step of the algorithm. Assume that the proposition is valid whenever i, j ≤ k for some integer k ≥ 1. We show that the statement is true for i, j ≤ k+1 by establishing the following claim: for j = 0, 1, 2, … , k, ⟨r(k+1) ∣ d(j)⟩ = 0. Using the Gram--Schmidt process applied to the energy inner product, we obtain \[ \mathbf{d}^{(k+1)} = \mathbf{r}^{(k+1)} - \sum_{i=0}^k \left( \frac{\left\langle \mathbf{r}^{(k+1)} \,\mid\, \mathbf{d}^{(i)} \right\rangle_A}{\left\langle \mathbf{d}^{(i)} \,\mid\, \mathbf{d}^{(i)} \right\rangle_A} \right) \mathbf{d}^{(i)} . \]
   
Example 13:
   ■
End of Example 13
Lemma 4: If A ∈ ℝn×n is symmetric and positive definite, then distinct residuals are orthogonal: \[ \mathbf{r}^{(i)} \perp \mathbf{r}^{(j)} , \quad \mbox{that is,} \quad \left\langle \mathbf{r}^{(i)} \,\mid\, \mathbf{r}^{(j)} \right\rangle = 0 \quad\mbox{for} \quad i\ne j . \]
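Both lemmas can be confirmed numerically. The sketch below (added for illustration; it reruns two conjugate gradient steps on the symmetric matrix B of Example 10) evaluates the relevant inner products in exact arithmetic:
B = {{14, 6, -4}, {6, 19, -17}, {-4, -17, 21}}; f = {50, 39, -25};
r0 = f; d0 = r0;
a0 = (r0 . d0)/(d0 . B . d0); r1 = r0 - a0*B . d0;
b0 = (r1 . B . d0)/(d0 . B . d0); d1 = r1 - b0*d0;
a1 = (r1 . d1)/(d1 . B . d1); r2 = r1 - a1*B . d1;
b1 = (r2 . B . d1)/(d1 . B . d1); d2 = r2 - b1*d1;
{d0 . B . d1, d0 . B . d2, d1 . B . d2}   (* Lemma 3: A-conjugacy of search directions *)
{0, 0, 0}
{r0 . r1, r0 . r2, r1 . r2}               (* Lemma 4: orthogonality of residuals *)
{0, 0, 0}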
   
Example 14:
   ■
End of Example 14

Theorem 4: The step size coefficients αi and βi can be evaluated alternatively, as long as d(i) ≠ 0, as
\begin{equation} \label{EqCG.9} \alpha_i = \frac{\left\langle {\bf d}^{(i)} \mid {\bf r}^{(i)} \right\rangle}{\left\langle {\bf d}^{(i)} \mid {\bf A}\,{\bf d}^{(i)} \right\rangle} = \frac{\left\langle {\bf r}^{(i)} \mid {\bf r}^{(i)} \right\rangle}{\left\langle {\bf d}^{(i)} \,\mid\, {\bf d}^{(i)} \right\rangle_A}, \qquad i =0,1,2,\ldots , n-1 . \end{equation}
where r(k) = bAx(k) is the residual vector. Also, \[ \beta_i = - \frac{\left\langle {\bf r}^{(i+1)} \mid {\bf r}^{(i+1)} \right\rangle}{\left\langle {\bf r}^{(i)} \,\mid\,{\bf r}^{(i)} \right\rangle} = - \frac{\left\| {\bf r}^{(i+1)} \right\|^2}{\left\| {\bf r}^{(i)} \right\|^2}, \qquad i =0,1,2,\ldots , n-1 . \tag{8a}\]
From recurrence (7), it follows that \[ \left\langle \mathbf{d}^{(k+1)} \mid \mathbf{r}^{(k+1)} \right\rangle = \left\langle \mathbf{r}^{(k+1)} \mid \mathbf{r}^{(k+1)} \right\rangle = \left\| \mathbf{r}^{(k+1)} \right\|^2_2 \] because all previous descent vectors d(i) are orthogonal to the residual vector r(k+1) for i = 0, 1, … , k.
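For instance, both alternative expressions can be checked on the data of Example 18 below (a quick sketch added here; it is not part of the original notebook):
A = {{4, 3, 0}, {3, 4, -1}, {0, -1, 2}}; b = {13, 16, -5};
x0 = {0, 1, 1}; r0 = b - A . x0; d0 = r0;
a0 = (r0 . d0)/(d0 . A . d0); r1 = r0 - a0*A . d0;
b0 = (r1 . A . d0)/(d0 . A . d0); d1 = r1 - b0*d0;
a1 = (r1 . d1)/(d1 . A . d1);
{a1 == (r1 . r1)/(d1 . A . d1), b0 == -(r1 . r1)/(r0 . r0)}   (* formulas (9) and (8a) *)
{True, True}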
   
Example 15:
   ■
End of Example 15

The conjugate gradient method has a memory in the sense that, at every step, it maintains conjugacy with all previously obtained search vectors, not just with the single preceding one. In order to maintain this orthogonality, we need to subtract from the residual r(k+1) its projections onto all previously obtained search vectors, as constructed by the Gram--Schmidt process:

\[ \mathbf{d}^{(k+1)} = \mathbf{r}^{(k+1)} - \sum_{i=0}^k \left( \frac{\left\langle \mathbf{r}^{(k+1)} \,\mid\, \mathbf{d}^{(i)} \right\rangle_A}{\left\langle \mathbf{d}^{(i)} \,\mid\, \mathbf{d}^{(i)} \right\rangle_A} \right) \mathbf{d}^{(i)} , \]
where every term \( \quad \left( \frac{\left\langle \mathbf{r}^{(k+1)} \,\mid\, \mathbf{d}^{(i)} \right\rangle_A}{\left\langle \mathbf{d}^{(i)} \,\mid\, \mathbf{d}^{(i)} \right\rangle_A} \right) \mathbf{d}^{(i)} \quad \) is the projection of the residual vector r(k+1) onto the descent direction vector d(i). Fortunately, all coefficients in this sum are zero except the last one because
\[ \left\langle \mathbf{r}^{(k+1)} \,\mid\, \mathbf{d}^{(i)} \right\rangle_A = 0 \qquad \mbox{for} \quad i = 0, 1, 2, \ldots , k . \]
Also, from the previous theorem, we obtain that
\[ \left\langle \mathbf{r}^{(k+1)} \,\mid\, \mathbf{d}^{(k+1)} \right\rangle = \left\langle \mathbf{r}^{(k+1)} \,\mid\, \mathbf{r}^{(k+1)} \right\rangle = \left\| \mathbf{r}^{(k+1)} \right\|_2^2 . \]
Thus, ⟨ r(k)d(k) ⟩ > 0 as long as r(k) ≠ 0. This implies that d(k) ≠ 0.
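This collapse is exactly what was observed in Example 11, where r2 . A . d0 evaluated to zero. The sketch below (added for illustration, reusing the data of Example 11) confirms that the direction built from the full Gram--Schmidt sum coincides with the one built from the single surviving term:
A = {{873, -2162, 462}, {-2162, 5361, -1146}, {462, -1146, 245}};
b = {-2065, 5122, -1095};
r0 = b; d0 = r0;
a0 = (r0 . d0)/(d0 . A . d0); r1 = r0 - a0*A . d0;
b0 = (r1 . A . d0)/(d0 . A . d0); d1 = r1 - b0*d0;
a1 = (r1 . d1)/(d1 . A . d1); r2 = r1 - a1*A . d1;
(* full Gram-Schmidt sum over all previous search directions *)
d2full = r2 - ((r2 . A . d0)/(d0 . A . d0))*d0 - ((r2 . A . d1)/(d1 . A . d1))*d1;
(* only the projection onto the last direction survives *)
d2short = r2 - ((r2 . A . d1)/(d1 . A . d1))*d1;
d2full == d2short
True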
Theorem 5: Let A ∈ ℝn×n be symmetric and positive definite. After m steps of the conjugate gradient algorithm (with r(k) ≠ 0 at each previous step), we have \begin{align*} \mbox{span}\left\{ \mathbf{d}^{(0)} , \mathbf{d}^{(1)} , \ldots , \mathbf{d}^{(m)} \right\} &= \mbox{span}\left\{ \mathbf{r}^{(0)} , \mathbf{r}^{(1)} , \ldots , \mathbf{r}^{(m)} \right\} \\ &= \mbox{span}\left\{ \mathbf{r}^{(0)} , {\bf A}\,\mathbf{r}^{(0)} , \ldots , {\bf A}^m \mathbf{r}^{(0)} \right\} . \end{align*}
Proof by induction on m. When m = 0, the proposition is trivial because d(0) = r(0). Assume that the assertion is valid up to m = k. By the residual update equation r(m) = r(m−1) − αm−1A d(m−1), we have \[ \mathbf{r}^{(k+1)} = \mathbf{r}^{(k)} - \alpha_k {\bf A}\,\mathbf{d}^{(k)} , \] and the inductive hypothesis ensures that d(k) ∈ span{r(0), … , Akr(0)}.
   
Example 16:
   ■
End of Example 16
Theorem 6: Let x(k+1) = x(k) + αkd(k) be obtained by an exact line search that minimizes ϕ: \[ \phi \left( {\bf x}^{(k+1)} \right) = \min_{\alpha \in \mathbb{R}} \,\phi \left( {\bf x}^{(k)} + \alpha\,{\bf d}^{(k)} \right) . \] Then the step size αk is given by Eq.(9).
Let φ(α) = ϕ(x(k) + α d(k)). We can find αk as the value of the parameter α for which ϕ(x(k) + α d(k)) is minimized. Differentiating with respect to α and setting the derivative to zero at the minimizer yields \[ 0 = \varphi' (\alpha) = \left\langle \mathbf{d}^{(k)} \mid {\bf A} \left( \mathbf{x}^{(k)} + \alpha\,\mathbf{d}^{(k)} \right) - \mathbf{b} \right\rangle = - \left\langle \mathbf{d}^{(k)} \mid \mathbf{r}^{(k)} \right\rangle + \alpha \left\langle \mathbf{d}^{(k)} \mid {\bf A}\,\mathbf{d}^{(k)} \right\rangle , \] so that \( \alpha_k = \left\langle \mathbf{d}^{(k)} \mid \mathbf{r}^{(k)} \right\rangle / \left\langle \mathbf{d}^{(k)} \mid {\bf A}\,\mathbf{d}^{(k)} \right\rangle \), which is formula (9).
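This one-dimensional minimization can also be checked symbolically (a small sketch added here; it assumes the quadratic functional ϕ(x) = ½ ⟨x ∣ A x⟩ − ⟨b ∣ x⟩ and borrows the data of Example 9):
A = {{15, 9, 8, -6, -4}, {9, 19, -3, -7, -3}, {8, -3, 19, 8, -10}, {-6, -7, 8, 16, -4}, {-4, -3, -10, -4, 15}};
b = {13, -5, 41, 48, 19};
phi[x_] := x . A . x/2 - b . x;                    (* assumed quadratic functional *)
x0 = {0, 0, 0, 0, 0}; r0 = b - A . x0; d0 = r0;
sol = Solve[D[phi[x0 + alpha*d0], alpha] == 0, alpha];
(alpha /. First[sol]) == (r0 . d0)/(d0 . A . d0)   (* exact line search minimizer vs formula (9) *)
True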
   
Example 17:
   ■
End of Example 17
Corollary 2: If the method maintains conjugacy of all search direction vectors, then the error e(m) satisfies the following optimality property: \[ \left\| \mathbf{e}^{(m)} \right\|_A = \min_{\mathbf{d} \in S_m} \left\| \mathbf{e}^{(0)} - \mathbf{d} \right\|_A , \] where Sm = span{d(0), d(1), … , d(m-1)}.

Convergence of CG Algorithm

Theorem 7: Let [d(0), d(1), … , d(n-1)] be an A-orthogonal list of nonzero vectors associated with the positive definite matrix A, and let x(0) be arbitrary. Define the step size αk according to formula (9) for k = 0, 1, … , n−1. Then, assuming exact arithmetic, A x(n) = b.
Since, for each k = 0, 1, 2, … , n−1, x(k+1) = x(k) + αkd(k), we have \begin{align*} {\bf A}\,\mathbf{x}^{(n)} &= {\bf A}\,\mathbf{x}^{(n-1)} + \alpha_{n-1}{\bf A}\,\mathbf{d}^{(n-1)} \\ &= \left( {\bf A}\,\mathbf{x}^{(n-2)} + \alpha_{n-2}{\bf A}\,\mathbf{d}^{(n-2)} \right) + \alpha_{n-1}{\bf A}\,\mathbf{d}^{(n-1)} \\ &\;\;\vdots \\ &= {\bf A}\,\mathbf{x}^{(0)} + \alpha_0 {\bf A}\,\mathbf{d}^{(0)} + \alpha_1 {\bf A}\,\mathbf{d}^{(1)} + \cdots + \alpha_{n-1} {\bf A}\,\mathbf{d}^{(n-1)} . \end{align*} Subtracting b from both sides yields \[ {\bf A}\, \mathbf{x}^{(n)} - \mathbf{b} = {\bf A}\, \mathbf{x}^{(0)} - \mathbf{b} + \alpha_0 {\bf A}\, \mathbf{d}^{(0)} + \alpha_1 {\bf A}\, \mathbf{d}^{(1)} + \cdots + \alpha_{n-1} {\bf A}\,\mathbf{d}^{(n-1)} . \] We now take the inner product of both sides with the vector d(k) and use the properties of inner products and the fact that A is symmetric to obtain \begin{align*} \left\langle {\bf A}\,\mathbf{x}^{(n)} - \mathbf{b} \mid \mathbf{d}^{(k)} \right\rangle &= \left\langle {\bf A}\,\mathbf{x}^{(0)} - \mathbf{b} \mid \mathbf{d}^{(k)} \right\rangle + \alpha_0 \left\langle {\bf A}\,\mathbf{d}^{(0)} \mid \mathbf{d}^{(k)} \right\rangle + \cdots + \alpha_{n-1} \left\langle {\bf A}\,\mathbf{d}^{(n-1)} \mid \mathbf{d}^{(k)} \right\rangle \\ &= \left\langle {\bf A}\,\mathbf{x}^{(0)} - \mathbf{b} \mid \mathbf{d}^{(k)} \right\rangle + \alpha_{k} \left\langle {\bf A}\,\mathbf{d}^{(k)} \mid \mathbf{d}^{(k)} \right\rangle \end{align*} because the direction vectors are mutually A-conjugate. On the other hand, the step size is chosen so that αk ⟨ d(k) ∣ A d(k) ⟩ = ⟨ d(k) ∣ r(k) ⟩ = ⟨ d(k) ∣ b − A x(k) ⟩. Writing \begin{align*} \alpha_k \left\langle \mathbf{d}^{(k)} \,\mid\,{\bf A}\, \mathbf{d}^{(k)} \right\rangle &= \left\langle \mathbf{d}^{(k)} \,\mid\,\mathbf{b} - {\bf A}\, \mathbf{x}^{(0)} + {\bf A}\, \mathbf{x}^{(0)} - {\bf A}\, \mathbf{x}^{(1)} + \cdots + {\bf A}\, \mathbf{x}^{(k-1)} - {\bf A}\, \mathbf{x}^{(k)} \right\rangle \\ &= \left\langle \mathbf{d}^{(k)} \,\mid\,{\bf b} - {\bf A}\, \mathbf{x}^{(0)} \right\rangle + \left\langle \mathbf{d}^{(k)} \,\mid\, {\bf A}\, \mathbf{x}^{(0)} - {\bf A}\, \mathbf{x}^{(1)} \right\rangle + \cdots + \left\langle \mathbf{d}^{(k)} \,\mid\, {\bf A}\, \mathbf{x}^{(k-1)} - {\bf A}\, \mathbf{x}^{(k)} \right\rangle ,
\end{align*} and noting that for any i \[ \mathbf{x}^{(i)} = \mathbf{x}^{(i-1)} + \alpha_{i-1} \mathbf{d}^{(i-1)} \qquad \Longrightarrow \qquad {\bf A}\, \mathbf{x}^{(i-1)} - {\bf A}\, \mathbf{x}^{(i)} = - \alpha_{i-1}{\bf A}\, \mathbf{d}^{(i-1)} , \] we obtain \[ \alpha_k \left\langle \mathbf{d}^{(k)} \,\mid\, {\bf A}\, \mathbf{d}^{(k)} \right\rangle = \left\langle \mathbf{d}^{(k)} \,\mid\, {\bf b} - {\bf A}\, \mathbf{x}^{(0)} \right\rangle - \alpha_0 \left\langle \mathbf{d}^{(k)} \,\mid\, {\bf A}\, \mathbf{d}^{(0)} \right\rangle - \cdots - \alpha_{k-1} \left\langle \mathbf{d}^{(k)} \,\mid\, {\bf A}\, \mathbf{d}^{(k-1)} \right\rangle . \] Because of the A-orthogonality, ⟨ d(k) ∣ A d(j) ⟩ = 0 for k ≠ j, so \[ \alpha_k \left\langle \mathbf{d}^{(k)} \,\mid\, {\bf A}\, \mathbf{d}^{(k)} \right\rangle = \left\langle \mathbf{d}^{(k)} \,\mid\, {\bf b} - {\bf A}\, \mathbf{x}^{(0)} \right\rangle . \] Substituting this into \[ \left\langle {\bf A}\,\mathbf{x}^{(n)} - {\bf b} \,\mid \,\mathbf{d}^{(k)} \right\rangle = \left\langle {\bf A}\,\mathbf{x}^{(0)} - {\bf b} \,\mid \,\mathbf{d}^{(k)} \right\rangle + \alpha_k \left\langle {\bf A}\,\mathbf{d}^{(k)} \,\mid \,\mathbf{d}^{(k)} \right\rangle , \] we obtain \begin{align*} \left\langle \mathbf{d}^{(k)} \,\mid\,{\bf A}\, \mathbf{x}^{(n)} - {\bf b} \right\rangle &= \left\langle \mathbf{d}^{(k)} \,\mid\,{\bf A}\, \mathbf{x}^{(0)} - {\bf b} \right\rangle + \left\langle \mathbf{d}^{(k)} \,\mid\,{\bf b} - {\bf A}\, \mathbf{x}^{(0)} \right\rangle \\ &= \left\langle \mathbf{d}^{(k)} \,\mid\,{\bf A}\, \mathbf{x}^{(0)} - {\bf b} \right\rangle - \left\langle \mathbf{d}^{(k)} \,\mid\,{\bf A}\, \mathbf{x}^{(0)} - {\bf b} \right\rangle = 0 . \end{align*} Hence the vector A x(n) − b is orthogonal to each member of the A-orthogonal list [d(0), d(1), … , d(n-1)]; since n nonzero A-orthogonal vectors are linearly independent, they span ℝn, and therefore A x(n) − b = 0.
   
Example 18: The linear system \[ \begin{cases} 4\,x+3\,y \phantom{+4\,x} &= 13 , \\ 3\,x + 4\,y -z &= 16 , \\ \phantom{4\,x} -y + 2\,z &= -5, \end{cases} \tag{18.1} \] has the exact solution x = (x, y, z) = (1, 3, −1).

To apply the conjugate gradient method, we pick an initial guess x(0) = (0, 1, 1). Since b = (13, 16, −5), the initial residual is r(0) = b - A x(0) = (10, 13, -6) because \[ {\bf A} = \begin{bmatrix} 4&3&0 \\ 3&4&-1 \\ 0&-1&2 \end{bmatrix} , \quad {\bf b} = \begin{bmatrix} 13 \\ 16 \\ -5 \end{bmatrix} , \quad {\bf r}^{(0)} = {\bf d}^{(0)} = \begin{bmatrix} 10 \\ 13 \\ -6 \end{bmatrix} . \]

A = {{4, 3, 0}, {3, 4, -1}, {0, -1, 2}}; b = {13, 16, -5}; x0 = {0, 1, 1}; r0 = b - A . x0
{10, 13, -6}
To perform the first iteration, we need to calculate two parameters (α₀, β₀) and the search vector d(1): \[ \alpha_0 = \frac{305}{2084} , \qquad \mathbf{r}^{(1)} = \mathbf{r}^{(0)} - \alpha_0 {\bf A}\, \mathbf{d}^{(0)} = \frac{1}{2084} \begin{bmatrix} -3255 \\ 252 \\ -4879 \end{bmatrix} . \]
d02 = Dot[r0, A . r0]
r02 = r0 . r0
2084
305
a0 = r02/d02; r1 = r0 - a0*A . r0
{-(3255/2084), 63/521, -(4879/2084)}
x1 = x0 + a0*r0
{1525/1042, 6049/2084, 127/1042}
\[ \mathbf{x}^{(1)} = \mathbf{x}^{(0)} + \alpha_0 \mathbf{d}^{(0)} = \frac{1}{2084} \begin{bmatrix} 3050 \\ 6049 \\ 254 \end{bmatrix} \approx \begin{bmatrix} 1.46353 \\ 2.90259 \\ 0.12188 \end{bmatrix} . \] Next, \[ \beta_0 = \frac{\left\langle \mathbf{r}^{(1)} \,\mid\, {\bf A}\,\mathbf{d}^{(0)} \right\rangle}{\left\langle \mathbf{d}^{(0)} \,\mid\, {\bf A}\,\mathbf{d}^{(0)} \right\rangle} = -\frac{56497}{2171528} \approx -0.02602 , \qquad \mathbf{d}^{(1)} = \mathbf{r}^{(1)} - \beta_0 \mathbf{d}^{(0)} = \begin{bmatrix} -706685/542882 \\ 997045/2171528 \\ -1355725/542882 \end{bmatrix} \approx \begin{bmatrix} -1.30173 \\ 0.45915 \\ -2.49727 \end{bmatrix} . \]
b0 = (r1 . A . r0)/(r0 . A . r0)
-(56497/2171528)
d1 = r1 - b0*r0
{-(706685/542882), 997045/2171528, -(1355725/542882)}

Second step: \[ \alpha_1 = \frac{\left\langle \mathbf{r}^{(1)} \mid \mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf A}\, \mathbf{d}^{(1)} \right\rangle} = \frac{1201426}{2846565} , \] \[ \mathbf{x}^{(2)} = \mathbf{x}^{(1)} + \alpha_1 \mathbf{d}^{(1)} = \frac{1}{18666} \begin{bmatrix} 17063 \\ 57797 \\ -17399 \end{bmatrix} \approx \begin{bmatrix} 0.914122 \\ 3.09638 \\ -0.932123 \end{bmatrix} , \]

a1 = (r1 . d1)/(d1 . A . d1)
1201426/2846565
x2 = x1 + a1*d1
{17063/18666, 57797/18666, -(17399/18666)}
\[ \mathbf{r}^{(2)} = \mathbf{r}^{(1)} - \alpha_1 {\bf A}\,\mathbf{d}^{(1)} = \frac{1}{18666} \begin{bmatrix} 1015 \\ -1120 \\ -735 \end{bmatrix} \approx \begin{bmatrix} \phantom{-}0.0543769 \\ -0.0600021 \\ -0.0393764 \end{bmatrix} . \]
r2 = r1 - a1*A.d1
{1015/18666, -(560/9333), -(245/6222)}
\[ \beta_1 = \frac{\left\langle \mathbf{r}^{(2)} \mid {\bf A}\,\mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf A}\,\mathbf{d}^{(1)} \right\rangle} = -\frac{5428820}{5313398229} \approx -0.00102172 . \]
b1 = (r2 . A . d1)/(d1 . A . d1)
-(5428820/5313398229)
\[ \mathbf{d}^{(2)} = \mathbf{r}^{(2)} - \beta_1 \mathbf{d}^{(1)} = \frac{1}{174209778} \begin{bmatrix} 9241295 \\ - 10371235 \\ -7304255 \end{bmatrix} . \]
d2 = r2 - b1*d1
{9241295/174209778, -(10371235/174209778), -(7304255/174209778)}

Step three: \[ \alpha_2 = \frac{\left \langle \mathbf{r}^{(2)} \mid \mathbf{d}^{(2)} \right\rangle}{\left \langle \mathbf{d}^{(2)} \mid {\bf A}\,\mathbf{d}^{(2)} \right\rangle} = \frac{9333}{5765} \approx 1.61891 , \]

a2 = (r2 . d2)/(d2 . A . d2)
9333/5765
\[ \mathbf{x}^{(3)} = \mathbf{x}^{(2)} + \alpha_2 \mathbf{d}^{(2)} = \left( 1, 3, -1 \right) . \]
x3 = x2 + a2*d2
{1, 3, -1}
There is no need to perform further calculations.    ■
End of Example 18
Theorem 8: Let A be an n-by-n symmetric and positive definite matrix. In exact arithmetic, the conjugate gradient method for solving the system A x = b converges after at most n steps. Moreover, the error ek = x − x(k) at the kth iteration (with k < n) is conjugate (A-orthogonal) to the search vectors d(j) for j = 0, 1, 2, … , k − 1 and \[ \left\| \mathbf{e}_k \right\|_A \leqslant \frac{2\,c^k}{1 + c^{2k}} \, \left\| \mathbf{e}_0 \right\|_A , \] with \( \displaystyle \quad c = \frac{\sqrt{\kappa (\mathbf{A})} -1}{\sqrt{\kappa (\mathbf{A})} +1} . \quad \) Here \( \displaystyle \quad \kappa (\mathbf{A}) = \| \mathbf{A}^{-1} \| \,\| \mathbf{A} \| \quad \) is the condition number, which depends on the choice of the matrix norm.
See, for instance, Quarteroni, A., Sacco, R., and Saleri, F., Numerical Mathematics, second edition, Texts in Applied Mathematics, Springer-Verlag, Berlin, 2007, Chapter 4; Saad, Y., Iterative Methods for Sparse Linear Systems, second edition, SIAM, Philadelphia, PA, 2003; or Salsa, S., Partial Differential Equations in Action: From Modelling to Theory, Springer, Milan, 2008.
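For illustration, the bound can be evaluated with Mathematica (a sketch added here, not from the original notebook) for the 3 × 3 matrix of Example 18; the 2-norm is used, and for a symmetric positive definite matrix the 2-norm condition number is simply the ratio of the largest to the smallest eigenvalue:
A = {{4, 3, 0}, {3, 4, -1}, {0, -1, 2}};
ev = Eigenvalues[N[A]];
kappa = Max[ev]/Min[ev];                 (* 2-norm condition number of an SPD matrix *)
c = (Sqrt[kappa] - 1)/(Sqrt[kappa] + 1);
Table[2 c^k/(1 + c^(2 k)), {k, 0, 3}]    (* upper bounds for ||e_k||_A / ||e_0||_A *)
The bounds decay roughly like ck, while the actual error in Example 18 vanishes already after three steps, well ahead of the estimate.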
   
Example 19:
   ■
End of Example 19
   
Example 20: We build a 6 × 6 symmetric positive definite matrix with the aid of Kronecker product. First, we choose some orthogonal matrices of dimensions 3 and 2: \[ {\bf B}_3 = \frac{1}{3} \begin{bmatrix} 2&-2&1 \\ 1&2&2 \\ -2&-1&2 \end{bmatrix} , \qquad {\bf C}_2 = \frac{1}{5} \begin{bmatrix} 3&4 \\ 4&-3 \end{bmatrix} . \]
B3 = (1/3)*{{2, -2, 1}, {1, 2, 2}, {-2, -1, 2}}
C2 = (1/5)*{{3, 4}, {4, -3}}
Define their Kronecker product
Q6 = KroneckerProduct[B3, C2]
{{2/5, 8/15, -(2/5), -(8/15), 1/5, 4/15}, {8/15, -(2/5), -(8/15), 2/5, 4/15, -(1/5)}, {1/5, 4/15, 2/5, 8/15, 2/5, 8/15}, {4/15, -(1/5), 8/ 15, -(2/5), 8/15, -(2/5)}, {-(2/5), -(8/15), -(1/5), -(4/15), 2/5, 8/15}, {-(8/15), 2/5, -(4/15), 1/5, 8/15, -(2/5)}}
\[ {\bf Q}_6 = \frac{1}{15} \begin{bmatrix} 6& 8& -6& -8& 3&4 \\ 8& -6& -8& 6& 4& -3 \\ 3& 4& 6& 8& 6&8 \\ 4&-3 & 8&-6 &8&-6 \\ -6&-8& -3& -4&6&8 \\ -8&6& -4& 3& 8&-6 \end{bmatrix} . \] Its transpose is \[ {\bf Q}_{66} = \frac{1}{15} \begin{bmatrix} 6& 8& 3& 4& -6&-8 \\ 8& -6& 4& -3& -8& 6 \\ -6& -8& 6& 8& -3&-4 \\ -8&6 & 8&-6 &-4&3 \\ 3 & 4& 6& 8&6&8 \\ 4& -3& 8& -6& 8&-6 \end{bmatrix} . \]
Q66 = Transpose[Q6]
{{2/5, 8/15, 1/5, 4/15, -(2/5), -(8/15)}, {8/15, -(2/5), 4/ 15, -(1/5), -(8/15), 2/5}, {-(2/5), -(8/15), 2/5, 8/ 15, -(1/5), -(4/15)}, {-(8/15), 2/5, 8/15, -(2/5), -(4/15), 1/ 5}, {1/5, 4/15, 2/5, 8/15, 2/5, 8/15}, {4/15, -(1/5), 8/15, -(2/5), 8/15, -(2/5)}}
Then we build the diagonal matrix \[ {\bf D}_6 = \begin{bmatrix} 15&0&0&0&0&0 \\ 0&30&0&0&0&0 \\ 0&0&15&0&0&0 \\ 0&0&0&30&0&0 \\ 0&0&0&0&15&0 \\ 0&0&0&0&0&45 \end{bmatrix} . \]
D6 = DiagonalMatrix[{15, 30, 15, 30, 15, 45}]
{{15, 0, 0, 0, 0, 0}, {0, 30, 0, 0, 0, 0}, {0, 0, 15, 0, 0, 0}, {0, 0, 0, 30, 0, 0}, {0, 0, 0, 0, 15, 0}, {0, 0, 0, 0, 0, 45}}
The required matrix becomes
A = Q66.D6.Q6*15
{{433, -156, 32, -24, -64, 48}, {-156, 342, -24, 18, 48, -36}, {32, -24, 385, -120, -32, 24}, {-24, 18, -120, 315, 24, -18}, {-64, 48, -32, 24, 433, -156}, {48, -36, 24, -18, -156, 342}}
\[ {\bf A} = \begin{bmatrix} 433& -156& 32& -24& -64& 48 \\ -156& 342& -24& 18& 48& -36 \\ 32& -24& 385& -120& -32& 24 \\ -24& 18& -120& 315& 24& -18 \\ -64& 48& -32& 24& 433& -156 \\ 48& -36& 24& -18& -156& 342 \end{bmatrix} . \tag{20.1} \] We check whether the matrix constructed in this way is positive definite
A = {{433, -156, 32, -24, -64, 48}, {-156, 342, -24, 18, 48, -36}, {32, -24, 385, -120, -32, 24}, {-24, 18, -120, 315, 24, -18}, {-64, 48, -32, 24, 433, -156}, {48, -36, 24, -18, -156, 342}}
PositiveDefiniteMatrixQ[A]
True
Then we check that matrix A is symmetric
Transpose[A] == A
True
Now we consider the linear system A x = b, where \[ \mathbf{b} = \begin{bmatrix} 89& 552& 643& 924& 1261& 1248 \end{bmatrix}^{\mathrm T} . \tag{20.2} \] This system has a unique solution \[ \mathbf{x}^{\ast} = \begin{bmatrix} 1& 2& 3& 4& 5& 6 \end{bmatrix}^{\mathrm T} . \tag{20.3} \] as Mathematica confirmed:
b = {89, 552, 643, 924, 1261, 1248}; LinearSolve[A, b]
{1, 2, 3, 4, 5, 6}
Now we are ready to apply the conjugate gradient method for solving the system of linear equations A x = b with matrix (20.1) and input data (20.2). We start with an initial guess x(0) = 0 and make some preliminary calculations:
x0 = {0, 0, 0, 0, 0, 0}; r0 = b; d0 = b;
Then we perform the first iteration.
a0 = (r0.r0)/(d0.A.d0)
21011/4993875
\[ \alpha_0 = \frac{21011}{4993875} \approx 0.00420735 . \] Then the first iterate becomes \[ \mathbf{x}^{(1)} = \mathbf{x}^{(0)} + \alpha_0 \mathbf{d}^{(0)} = \frac{1}{4993875} \begin{pmatrix} 1869979 \\ 11598072 \\ 13510073 \\ 19414164 \\ 26494871 \\ 26221728 \end{pmatrix} \approx \begin{pmatrix} 0.374455\\ 2.32246\\ 2.70533\\ 3.8876\\ 5.30547\\ 5.25078 \end{pmatrix} . \] We calculate three more quantities \begin{align*} \mathbf{r}^{(1)} &= \mathbf{r}^{(0)} - \alpha_0 {\bf A}\,\mathbf{d}^{(0)} , \\ \beta_0 &= \frac{\left\langle \mathbf{r}^{(1)} \mid {\bf A}\,\mathbf{d}^{(0)} \right\rangle}{\left\langle \mathbf{d}^{(0)} \mid {\bf A}\,\mathbf{d}^{(0)} \right\rangle} = - \frac{\left\langle \mathbf{r}^{(1)} \mid \mathbf{r}^{(1)} \right\rangle}{\left\langle \mathbf{r}^{(0)} \mid \mathbf{r}^{(0)} \right\rangle} , \\ \mathbf{d}^{(1)} &= \mathbf{r}^{(1)} - \beta_0 \mathbf{d}^{(0)} . \end{align*}
x1 = x0 + a0*d0; r1 = r0 - a0*A . d0;
b0 = (r1 . A . d0)/(d0 . A . d0)
-(9535968/98523605)
d1 = r1 - b0*d0;
In the meantime, we check formula (8a):
(r1 . A . d0)/(d0 . A . d0) == -(r1 . r1)/(r0 . r0)
True
For the second iterative step, we have \begin{align*} \alpha_1 &= \frac{\left\langle \mathbf{r}^{(1)} \mid \mathbf{d}^{(1)} \right\rangle}{\left\langle \mathbf{d}^{(1)} \mid {\bf A}\,\mathbf{d}^{(1)} \right\rangle} , \\ \mathbf{x}^{(2)} &= \mathbf{x}^{(1)} + \alpha_1 \mathbf{d}^{(1)} , \\ \mathbf{r}^{(2)} &= \mathbf{r}^{(1)} - \alpha_1 {\bf A}\,\mathbf{d}^{(1)} , \\ \beta_1 &= -\frac{\left\langle \mathbf{r}^{(2)} \mid\mathbf{r}^{(2)} \right\rangle}{\left\langle \mathbf{r}^{(1)} \mid\mathbf{r}^{(1)} \right\rangle} , \\ \mathbf{d}^{(2)} &= \mathbf{r}^{(2)} - \beta_1 \mathbf{d}^{(1)} . \end{align*}
a1 = (r1 . d1)/(d1 . A . d1);
x2 = x1 + a1*d1;
r2 = r1 - a1*A . d1;
b1 = -(r2 . r2)/(r1 . r1);
d2 = r2 - b1*d1;
The third step gives \[ \alpha_2 = \frac{\left\langle \mathbf{r}^{(2)} \mid \mathbf{d}^{(2)} \right\rangle}{\left\langle \mathbf{d}^{(2)} \mid {\bf A}\,\mathbf{d}^{(2)} \right\rangle} , \qquad \mathbf{x}^{(3)} = \mathbf{x}^{(2)} + \alpha_2 \mathbf{d}^{(2)} . \]
a2 = (r2 . d2)/(d2 . A . d2);
x3 = x2 + a2*d2
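Instead of displaying the unwieldy exact fractions produced along the way, we can package the recurrences in a short loop (a convenience sketch added here; the names xk, rk, dk, steps are ad hoc). Matrix (20.1) is an orthogonal similarity transform of the diagonal matrix 15 D6 and therefore has only three distinct eigenvalues, 225, 450, and 675, so in exact arithmetic the conjugate gradient iteration terminates after at most three steps:
A = {{433, -156, 32, -24, -64, 48}, {-156, 342, -24, 18, 48, -36}, {32, -24, 385, -120, -32, 24}, {-24, 18, -120, 315, 24, -18}, {-64, 48, -32, 24, 433, -156}, {48, -36, 24, -18, -156, 342}};
b = {89, 552, 643, 924, 1261, 1248};
xk = {0, 0, 0, 0, 0, 0}; rk = b; dk = rk; steps = 0;
While[rk . rk != 0,
 ak = (rk . dk)/(dk . A . dk);
 xk = xk + ak*dk;
 rknew = rk - ak*A . dk;
 bk = (rknew . A . dk)/(dk . A . dk);
 dk = rknew - bk*dk;
 rk = rknew; steps++];
{steps, xk}
{3, {1, 2, 3, 4, 5, 6}}
So x(3) coincides with the exact solution (20.3).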
   ■
End of Example 20

Preconditioning

When a square n-by-n matrix A is ill-conditioned, the conjugate gradient method is highly susceptible to rounding errors. So, although the exact answer should be obtained in n steps, this is not usually the case. As a direct method, the conjugate gradient method is not as good as Gaussian elimination with pivoting. The main use of the conjugate gradient method is as an iterative method applied to a better-conditioned system. In this case an acceptable approximate solution is often obtained in about √n steps.
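To make the idea concrete, here is a minimal sketch (not from the text) of the preconditioned conjugate gradient recurrences with the simplest Jacobi (diagonal) preconditioner M = diag(A), applied to the ill-conditioned matrix of Example 11; the sign convention for the coefficient bk follows the rest of this page (d = z − b d with b ≤ 0):
A = {{873, -2162, 462}, {-2162, 5361, -1146}, {462, -1146, 245}};
b = {-2065, 5122, -1095};
M = DiagonalMatrix[Diagonal[A]];       (* Jacobi preconditioner M = diag(A) *)
xk = {0, 0, 0}; rk = b; zk = LinearSolve[M, rk]; dk = zk;
While[rk . rk != 0,
 ak = (rk . zk)/(dk . A . dk);
 xk = xk + ak*dk;
 rknew = rk - ak*A . dk;
 zknew = LinearSolve[M, rknew];        (* preconditioning solve *)
 bk = -(rknew . zknew)/(rk . zk);
 dk = zknew - bk*dk;
 rk = rknew; zk = zknew];
xk
{1, 2, 3}
More effective preconditioners (for example, incomplete factorizations) are used in the same way: only the solve with M changes.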

 

  1. Burden, R.L. and Faires, J.D., Numerical Analysis, ninth edition, Brooks/Cole, Cengage, Boston, MA.
  2. Darve, E. and Wootters, M., Numerical Linear Algebra with Julia, SIAM, Philadelphia, 2021.
  3. Watkins, D.S., Fundamentals of Matrix Computations, Wiley; 3rd edition, 2010.
  4. White, R.E., Computational Linear Algebra with Applications and MATLAB Computations, CRC Press Boca Raton, FL, 2023. doi: 10.1201/9781003304128

 

  1. Barrett, R., Berry, M., Chan, T., Demmel, J., Donato, J., Dongarra, J., Eijkhout, V., Pozo, R., Romine, C., and van der Vorst, H., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, Pennsylvania, 1993.
  2. Burden, R.L. and Faires, J.D., Numerical Analysis, ninth edition, Brooks/Cole, Cengage, Boston, MA.
  3. Darve, E. and Wootters, M., Numerical Linear Algebra with Julia, SIAM, Philadelphia, 2021.
  4. Krylov, A. N. (1931). "О численном решении уравнения, которым в технических вопросах определяются частоты малых колебаний материальных систем" [On the Numerical Solution of Equation by Which are Determined in Technical Problems the Frequencies of Small Vibrations of Material Systems]. Izvestiia Akademii Nauk SSSR (in Russian). 7 (4): 491–539.
  5. Hestenes, Magnus R.; Stiefel, Eduard (December 1952). "Methods of Conjugate Gradients for Solving Linear Systems" (PDF). Journal of Research of the National Bureau of Standards. 49 (6): 409. doi:10.6028/jres.049.044
  6. Lanczos, C. (1950). "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators" (PDF). Journal of Research of the National Bureau of Standards. 45 (4): 255–282. doi:10.6028/jres.045.026
  7. Loehr, N., Advanced Linear Algebra, CRC Press, 2014.
  8. Kelley, C.T., Iterative Methods for Optimization, SIAM, Philadelphia, 1999.
  9. Shewchuk, J.R., An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, 1994, School of Computer Science, Carnegie Mellon University.
  10. Shi, X., Wei, Y., and Zhang, W., Convergence of General Nonstationary Iterative Methods for Solving Singular Linear Equations, Taylor & Francis Online, Volume 38, Issue 11,