Its realization involves splitting the main matrix $A$ into two parts, $A = S + (A - S)$, where $S$ must be an invertible square matrix. Multiplying both sides of the given equation by $S^{-1}$, we can reduce the given problem to a fixed-point problem,

$$x = (I - S^{-1}A)\,x + S^{-1}b \equiv g(x),$$

which is suitable for sequential iterations $x^{(k+1)} = g(x^{(k)})$. However, stationary iterative methods apply the same function $g(x)$ at every iteration, losing all information gathered throughout the iterations. We will now use a function $g(x)$ that changes from one iteration to the next, so that $g(x) = g_k(x)$ at every step $k$.
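As a concrete illustration, here is a minimal sketch of such a stationary fixed-point iteration, assuming the classical Jacobi choice $S = \operatorname{diag}(A)$; the function name `jacobi_fixed_point`, the tolerance, and the iteration cap are illustrative choices, not prescribed by the text.

```python
import numpy as np

def jacobi_fixed_point(A, b, x0=None, tol=1e-10, max_iter=500):
    """Stationary iteration x_{k+1} = g(x_k) with the splitting S = diag(A)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    S_inv = 1.0 / np.diag(A)           # S^{-1} for a diagonal splitting matrix S
    for _ in range(max_iter):
        # g(x) = (I - S^{-1}A) x + S^{-1}b, evaluated as x + S^{-1}(b - Ax)
        x_new = x + S_inv * (b - A @ x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Usage on a small diagonally dominant system
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi_fixed_point(A, b))  # approx. [1/6, 1/3]
```

For this diagonally dominant example the Jacobi iteration is guaranteed to converge; in general, convergence requires the spectral radius of $I - S^{-1}A$ to be less than one.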
In the method of steepest descent, also known as the gradient method, we choose the search directions to be the residual vectors $r^{(k)} = b - Ax^{(k)}$, which for symmetric $A$ point along the negative gradient of the quadratic functional $f(x) = \tfrac{1}{2}x^{T}Ax - b^{T}x$, together with the optimal step size $\alpha_k = \dfrac{(r^{(k)}, r^{(k)})}{(Ar^{(k)}, r^{(k)})}$, which minimizes $f$ along the search direction. Starting from $x^{(0)}$, we compute, for $k = 0, 1, 2, \dots$,

$$x^{(k+1)} = x^{(k)} + \alpha_k p^{(k)},$$

where the vector $p^{(k)}$ is the search direction and the scalar $\alpha_k$ is the step size. Note that this formula includes the basic stationary methods, since such methods operate with $\alpha_k \equiv 1$ and a rule for the direction $p^{(k)}$ that does not depend on $k$.
The simplest nonstationary scheme based on a varying search direction and step size is obtained by setting $p^{(k)} = r^{(k)}$, i.e., writing the update as $x^{(k+1)} = x^{(k)} + M_k^{-1} r^{(k)}$ with $M_k^{-1} = \alpha_k I$, where $I$ is the identity matrix. The resulting family of methods is called gradient descent.
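Below is a minimal sketch of the steepest-descent member of this family, assuming $A$ is symmetric positive definite so that the optimal step $\alpha_k$ above is well defined; the function name `steepest_descent`, the tolerance, and the iteration cap are illustrative.

```python
import numpy as np

def steepest_descent(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Gradient method with p_k = r_k and the line-search-optimal step alpha_k."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = b - A @ x                      # residual = negative gradient of f
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)     # optimal step: minimizes f(x + alpha r)
        x = x + alpha * r
        r = r - alpha * Ar             # update residual, reusing the product A r
    return x

# Usage on a small symmetric positive definite system
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
print(steepest_descent(A, b))  # approx. [1., 2.]
```

Note that the residual is updated as $r^{(k+1)} = r^{(k)} - \alpha_k A r^{(k)}$, which reuses the matrix-vector product $Ar^{(k)}$ already computed for the step size, so each iteration costs a single multiplication by $A$.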
Using a splitting with a nonsingular matrix $S$, we can rewrite the main equation $Ax = b$ in residual-correction form,

$$x^{(k+1)} = x^{(k)} + S^{-1} r^{(k)}, \qquad r^{(k)} = b - Ax^{(k)},$$

which is the general update above with the fixed choice $M_k^{-1} = S^{-1}$ and $\alpha_k \equiv 1$.
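As a final sketch, here is this residual-correction iteration under an assumed Gauss–Seidel choice of $S$ (the lower triangle of $A$, including the diagonal); the helper name `splitting_iteration` and the use of SciPy's triangular solver are illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_triangular

def splitting_iteration(A, b, x0=None, tol=1e-10, max_iter=500):
    """Residual-correction iteration x_{k+1} = x_k + S^{-1} r_k with S = tril(A)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    S = np.tril(A)                     # Gauss-Seidel splitting matrix
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        # Apply S^{-1} by solving the lower-triangular system S d = r
        x = x + solve_triangular(S, r, lower=True)
    return x

# Usage on the same diagonally dominant system as above
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(splitting_iteration(A, b))  # approx. [1/6, 1/3]
```

Because $S$ is triangular, applying $S^{-1}$ costs only a forward substitution rather than a full linear solve, which is the practical point of choosing a triangular splitting.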