This chapter is devoted to several iterative methods for solving large sparse linear systems \begin{equation} \label{EqPart4.1} {\bf A}\,{\bf x} = {\bf b} . \end{equation}
Iterative methods are well suited to sparse matrices. If A is dense, the best course of action is usually to factor A and solve the system by backsubstitution: the time spent factoring a dense A is roughly equivalent to the time spent solving the system iteratively, and once A is factored, the system can be backsolved quickly for multiple right-hand sides b. Compare this dense matrix with a sparse matrix of larger size that fills the same amount of memory. The triangular factors of a sparse A usually have many more nonzero elements than A itself. Factoring may be impossible due to limited memory and is time-consuming in any case; even the backsolving step may be slower than an iterative solution. Most iterative methods, on the other hand, are memory-efficient and run quickly with sparse matrices.
Except when the matrix A in Eq.\eqref{EqPart4.1} has a very special structure and fast direct methods apply, iterative methods are usually the method of choice for large sparse linear systems. Iterative methods fall into two categories: stationary methods and nonstationary methods. Since the linear matrix/vector equation \eqref{EqPart4.1} is not directly suitable for stationary iterative schemes, it requires a preconditioning (splitting) operation that transforms the given problem into a fixed point problem.
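As a concrete illustration of such a splitting, the classical Jacobi method takes S to be the diagonal of A and iterates on the resulting fixed point problem. The sketch below is a minimal implementation under that assumption (the function name and the example matrix are illustrative, not taken from this chapter):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Stationary (Jacobi) iteration for A x = b using the splitting
    A = S - M with S = diag(A):  x_{k+1} = S^{-1} (M x_k + b)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    d = np.diag(A)                 # diagonal of A as a vector
    M = np.diag(d) - A             # off-diagonal remainder, M = S - A
    for _ in range(max_iter):
        x_new = (M @ x + b) / d    # one fixed-point step
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant matrix, for which Jacobi converges
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
```

For a large sparse A the dense `np.diag` calls would be replaced by sparse storage, but the fixed point structure of the iteration is unchanged.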
Stationary Iterative Methods
Theorem 1: Given a vector b and a splitting A = S − M (that is, M = S − A) of matrix A, with A and S nonsingular, the iteration scheme \( {\bf S}\,{\bf x}^{(k+1)} = {\bf M}\,{\bf x}^{(k)} + {\bf b} \) converges if and only if the spectral radius \[ \rho \left( {\bf S}^{-1} {\bf M} \right) < 1 . \] Here ρ(X) is the largest modulus of the eigenvalues of matrix X.
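The convergence criterion of Theorem 1 can be checked numerically by computing the spectral radius of the iteration matrix S⁻¹M. A small sketch, assuming the Jacobi choice S = diag(A) (the matrix below is an illustrative example):

```python
import numpy as np

def spectral_radius(X):
    """Largest modulus of the eigenvalues of X."""
    return max(abs(np.linalg.eigvals(X)))

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
S = np.diag(np.diag(A))        # Jacobi choice of S
M = S - A                      # so that A = S - M
T = np.linalg.solve(S, M)      # iteration matrix S^{-1} M
rho = spectral_radius(T)       # rho < 1 here, so the scheme converges
```

For this diagonally dominant A the spectral radius comes out below one, consistent with the guaranteed convergence of Jacobi iteration on such matrices.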
Theorem 2: If H is a square matrix, then \( {\bf H}^k {\bf x} \to {\bf 0} \) as \( k \to \infty \) for every vector \( {\bf x} \in \mathbb{C}^{n \times 1} \) if and only if each eigenvalue λ of H is less than one in modulus.
Every square matrix H is similar to a block diagonal matrix \( {\bf S}^{-1} {\bf H}\, {\bf S} \) (the Jordan canonical form of H), each of whose diagonal blocks has the form \[ \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0&\lambda_i & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \cdots & \lambda_i\end{bmatrix} . \] Since \( {\bf H}^k = {\bf S} \left( {\bf S}^{-1} {\bf H}\, {\bf S} \right)^k {\bf S}^{-1} \), powers of H tend to zero exactly when powers of the Jordan form do; so we have only to prove the theorem for an arbitrary block of the form above.
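This reduction can be observed numerically: even though a Jordan block is not diagonal, its powers still drive every vector to zero whenever |λ| < 1. A sketch with an illustrative choice λ = 0.9:

```python
import numpy as np

lam = 0.9                        # eigenvalue with |lam| < 1 (illustrative)
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])  # a single 3x3 Jordan block

x = np.ones(3)
Jk = np.linalg.matrix_power(J, 500)   # J^k for large k
residual = np.linalg.norm(Jk @ x)     # despite the superdiagonal 1's,
                                      # |lam| < 1 forces J^k x -> 0
```

The off-diagonal ones only contribute polynomial factors in k, which are eventually overwhelmed by the geometric decay of \( \lambda^k \).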