Preface


This section discusses a completely different strategy to solve an initial value problem for a single first-order differential equation \( y' = f(x,y) , \quad y(x_0 )= y_0 . \) In order to calculate an approximate value of the solution φ(x_{n+1}) at the next mesh point x = x_{n+1}, the values of the calculated solution at some previous mesh points are used. Numerical methods that use information at more than the last mesh point are referred to as multistep methods. This section presents two types of multistep methods: Adams methods and backward differentiation methods. Different levels of accuracy can be achieved with each type of method.


Multistep Methods


The methods of Euler, Heun, Runge--Kutta, and Taylor are called single-step methods (or one-step methods) because they use only the information from one previous point to compute the successive point; that is, only the initial point (x0, y0) is used to compute (x1, y1) and, in general, yk is needed to compute yk+1. After several points have been found, it is feasible to use several prior points in the calculation. Methods that use information at more than the last mesh point are referred to as multistep methods. More precisely, a method is called a k-step method if the computation of the next approximate solution yn+1 is based on the k most recently computed approximations \( y_n, y_{n-1} , \ldots , y_{n-k+1} . \)

In this section, we discuss two types of multistep methods: Adams methods and backward differentiation methods. For simplicity, we assume throughout our exposition that the step length h is constant.

Multistep methods are not self-starting because they require several initial points, which must be determined by other means. A desirable feature of a multistep method is that the local truncation error can be estimated and a correction term included, which improves the accuracy of the answer at each step. Also, it is possible to determine whether the step size is small enough to obtain an accurate value for yn+1, yet large enough so that unnecessary and time-consuming calculations are avoided. Using the combination of a predictor and corrector requires only two slope function evaluations per step, compared to the six evaluations required by RKF45. The predictor-corrector forms are among the most efficient known integration methods in terms of speed and accuracy. As a class, the multistep methods are among the best, but the choice of the best predictor-corrector pair varies depending on the application.

 

I. Adams Method


To integrate the initial value problem \( y' = f(x,y) , \quad y(x_0 ) = y_0 , \) on the mesh interval \( \left[ x_n , x_{n+1} \right] , \) we rewrite the problem in integral form:
\[ y(x_{n+1}) = y(x_n ) + \int_{x_n}^{x_{n+1}} f(t, y(t)) \,{\text d}t . \]
The basic idea of an Adams method is to approximate f(t,y(t)) by a polynomial Pk(t) of degree k and to use the polynomial to evaluate the integral on the right side of the above integral equation. John Couch Adams (1819--1892), an English mathematician and astronomer, is most famous as codiscoverer, with Joseph Leverrier, of the planet Neptune in 1846. He was associated with Cambridge University for most of his life, as student (1839--1843), fellow, Lowndean Professor, and director of the Observatory. Adams was extremely skilled at computation; his procedure for numerical integration of differential equations appeared in 1883 in a book with Francis Bashforth on capillary action.

Francis Bashforth (1819--1912), English mathematician and Anglican priest, was a classmate of J.C. Adams at Cambridge. He was particularly interested in ballistics and invented the Bashforth chronograph for measuring the velocity of artillery projectiles.

The polynomial Pk(t) of degree k contains k+1 coefficients that are determined from previously calculated data points. For example, suppose that we wish to use a first degree polynomial \( P_1 (t) =at+b . \) Then we need only the two data points \( (x_n , y_n ) \quad\mbox{and} \quad (x_{n-1} , y_{n-1}) . \) For P1 to be an approximation to the slope function, we require that

\[ \begin{split} P_1 (x_n ) &= f\left( x_n , y_n \right) \qquad \Longrightarrow \qquad a\,x_n +b = f_n = f\left( x_n , y_n \right) , \\ P_1 (x_{n-1} ) &= f\left( x_{n-1} , y_{n-1} \right) \qquad \Longrightarrow \qquad a\,x_{n-1} +b = f_{n-1} = f\left( x_{n-1} , y_{n-1} \right) . \end{split}\]
Solving for a and b, we obtain
\[ a= \frac{1}{h} \left( f_n - f_{n-1} \right) , \qquad b = \frac{1}{h} \left( x_n f_{n-1} - x_{n-1} f_n \right) . \]
Replacing the slope function by the polynomial \( P_1 (t) =at+b , \) and evaluating the integral, we obtain
\[ p_{n+1} = y_n + \frac{3}{2}\, h\, f_n - \frac{h}{2}\, f_{n-1} , \qquad n=1,2,\ldots . \]
The above explicit formula for \( y_{n+1} \) in terms of yn and yn-1 is called the (two-step) Adams--Bashforth formula. It has a local truncation error proportional to h3. The value pn+1 serves as a predicted value of yn+1 that approximates the actual solution \( y = \phi (x) \) at the mesh point xn+1; it is then utilized in the Adams--Moulton corrector formula, which is our next topic. Forest Ray Moulton (1872--1952), an American astronomer and administrator of science, was for many years professor of astronomy at the University of Chicago. During World War I, he was in charge of the Ballistics Branch of the U.S. Army at the Aberdeen (MD) Proving Ground. In the course of calculating ballistic trajectories he devised substantial improvements in the Adams formula.
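As an illustration, the two-step Adams--Bashforth formula is easy to implement, used here on its own as an explicit method rather than as a predictor. The following Mathematica sketch applies it to the hypothetical test problem y' = x + y, y(0) = 1; the step size and the Heun (RK2) start-up step that supplies the second starting value are illustrative assumptions, not part of the method itself.

f[x_, y_] := x + y;    (* assumed test slope function *)
h = 0.1; n = 10;
x = Table[k*h, {k, 0, n}];
y = ConstantArray[0.0, n + 1];
y[[1]] = 1.0;    (* assumed initial value y(0) = 1 *)
(* one Heun (RK2) step generates the second starting value *)
y[[2]] = y[[1]] + (h/2)*(f[x[[1]], y[[1]]] + f[x[[2]], y[[1]] + h*f[x[[1]], y[[1]]]]);
(* two-step Adams--Bashforth update *)
Do[y[[k + 1]] = y[[k]] + (h/2)*(3*f[x[[k]], y[[k]]] - f[x[[k - 1]], y[[k - 1]]]), {k, 2, n}];
Transpose[{x, y}]

Since the exact solution of this test problem is \( y = 2\,e^x - x - 1 , \) the error of the computed table is easy to examine.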

Actually, the predictor formulas are based on Newton's backward difference interpolation formula:

\[ y_{n+m} = y_n + m\,\nabla y_n + \frac{m(m+1)}{2!} \, \nabla^2 y_n + \cdots + \frac{1}{k!} \,m^{\overline{k}} \,\nabla^k y_n + \cdots , \]
where \( \nabla y_n = y_n - y_{n-1} \) is the backward finite difference and \( m^{\overline{k}} = m (m+1) \cdots (m+k-1) \) is the k-th rising factorial of m. Since the derivative \( y' = f(x,y) \) is known at the mesh points, we can apply this interpolation to y' and integrate from xn to xn+1 to obtain
\[ y_{n+1} = y_n + h \left[ 1 + \frac{1}{2}\, \nabla + \frac{5}{12} \, \nabla^2 + \frac{3}{8}\, \nabla^3 + \frac{251}{720} \, \nabla^4 + \cdots \right] y'_n , \qquad n=1,2,\ldots . \]
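The coefficients 1, 1/2, 5/12, 3/8, 251/720 arise as the integrals \( \gamma_k = \int_0^1 \binom{s+k-1}{k} \,{\text d}s \) of the binomial coefficients that multiply the backward differences in Newton's formula. A one-line Mathematica check of this standard derivation:

(* reproduces the Adams--Bashforth coefficients 1, 1/2, 5/12, 3/8, 251/720 *)
Table[Integrate[Binomial[s + k - 1, k], {s, 0, 1}], {k, 0, 4}]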
We can also derive the Nyström midpoint predictor formula:
\[ p_{n+1} = y_{n-1} + 2h \, f_n , \qquad n=1,2,\ldots , \]
followed by the corrector step:
\[ y_{n+1} = y_{n} + \frac{h}{2} \left[ f \left( x_n , y_n \right) + f \left( x_{n+1} , p_{n+1} \right) \right] , \qquad n=1,2,\ldots . \]
It is quite interesting that although the Euler and Nyström methods have the widest stability interval on the negative real axis, their results are not as accurate as those of the Hamming method and the fourth-order, or even third-order, Adams methods.

 

II. Backward Differentiation Formula


Another type of multistep method uses a polynomial Pk(t) of degree k to approximate the actual solution \( y = \phi (x) \) of the considered initial value problem rather than its derivative \( y' = \phi' (x) , \) as in the Adams methods. We then differentiate Pk(t) and set \( P'_k (x_{n+1}) \) equal to \( f \left( x_{n+1} , y_{n+1} \right) \) to obtain an implicit formula for yn+1. These algorithms are called backward differentiation formulas. These methods became widely used in the 1970s because of the work of C. William Gear (born in 1935, London, UK) on so-called stiff differential equations, whose solutions are very difficult to approximate by the methods discussed so far. The British-American mathematician C. W. Gear is well known for his contributions to numerical analysis and computer science.

The simplest case uses a first degree polynomial \( P_1 (t) =at+b . \) The coefficients are chosen to match the computed value of the solution yn and the value yn+1 to be determined. Hence, a and b must satisfy
\[ \begin{split} P_1 (x_n ) &= y_n \qquad \Longrightarrow \qquad a\,x_n +b = y_n , \\ P_1 (x_{n+1} ) &= y_{n+1} \qquad \Longrightarrow \qquad a\,x_{n+1} +b = y_{n+1} . \end{split}\]
Since \( P'_1 (t) =a , \) the requirement that
\[ P'_{1} (x_{n+1}) = f \left( x_{n+1} , y_{n+1} \right) \]
is just \( a = f \left( x_{n+1} , y_{n+1} \right) , \) and replacing the unknown yn+1 in the slope function by its predicted value pn+1 gives \( a = f \left( x_{n+1} , p_{n+1} \right) . \) Another expression for a comes from subtracting the first equation from the second, which gives
\[ a= \left( y_{n+1} - y_n \right) /h .\]
Equating these two expressions for a, we get the first-order backward differentiation formula
\[ y_{n+1} = y_n + h\, f\left( x_{n+1} , p_{n+1} \right) , \]
which is just the backward Euler formula, with the implicit slope evaluated at the predicted value.
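A minimal Mathematica sketch of one step of this scheme, assuming (as an illustration) that a plain Euler step supplies the predicted value pn+1, and using the same hypothetical test equation as before:

f[x_, y_] := x + y;    (* assumed test slope function *)
h = 0.1;
BDF1Step[{x_, y_}] := Module[{p = y + h*f[x, y]},    (* Euler predictor for p_{n+1} *)
  {x + h, y + h*f[x + h, p]}];    (* corrector: y_{n+1} = y_n + h f(x_{n+1}, p_{n+1}) *)
NestList[BDF1Step, {0.0, 1.0}, 10]    (* ten steps from the assumed point (0, 1) *)

The fully implicit backward Euler method would instead solve \( y_{n+1} = y_n + h\,f \left( x_{n+1} , y_{n+1} \right) \) for yn+1 at every step (for instance, with FindRoot); evaluating the slope at the predicted value avoids that nonlinear solve at the cost of making the step explicit.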

 

III. Predictor-corrector Formula


If the midpoint formula is used as a predictor, whose value is then inserted into the trapezoidal corrector, we arrive at the predictor-corrector formula credited to Nyström:
\[ \begin{split} p_{n+1} &= y_{n-1} + 2h\, f(x_n , y_n ) , \\ y_{n+1} &= y_n + \frac{h}{2}\left[ f\left( x_{n+1} , p_{n+1} \right) + f\left( x_{n} , y_{n} \right) \right] . \end{split} \]
The Nyström predictor is based on the relation
\[ y(x_{n+1}) = y(x_{n-1}) + \int_{x_{n-1}}^{x_{n+1}} f(t, y(t))\,{\text d}t , \]
with the integral approximated by the midpoint rule.
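In Mathematica, the pair above can be sketched as follows; the test problem, step size, and Heun start-up step are again illustrative assumptions. Note that each step costs only the two slope evaluations mentioned earlier.

f[x_, y_] := x + y;    (* assumed test slope function *)
h = 0.1; n = 10;
x = Table[k*h, {k, 0, n}];
y = ConstantArray[0.0, n + 1];
y[[1]] = 1.0;    (* assumed initial value y(0) = 1 *)
(* Heun start-up step for the second starting value *)
y[[2]] = y[[1]] + (h/2)*(f[x[[1]], y[[1]]] + f[x[[2]], y[[1]] + h*f[x[[1]], y[[1]]]]);
Do[fn = f[x[[k]], y[[k]]];    (* reused below, so each step needs only two evaluations of f *)
  p = y[[k - 1]] + 2*h*fn;    (* midpoint predictor *)
  y[[k + 1]] = y[[k]] + (h/2)*(fn + f[x[[k + 1]], p]),    (* trapezoidal corrector *)
 {k, 2, n}];
Transpose[{x, y}]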

By using higher order polynomials and correspondingly more data points, we can obtain backward differentiation formulas of any order. For example, the second order backward differentiation formula is

\[ y_{n+1} = \frac{1}{3} \left[ 4\,y_n - y_{n-1} + 2\, h\, f\left( x_{n+1} , p_{n+1} \right) \right] , \qquad n= 1,2, \ldots . \]
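A Mathematica sketch of this second-order formula, where (as an assumption, since the predictor is left unspecified above) the two-step Adams--Bashforth formula supplies pn+1; the test problem and Heun start-up are as before:

f[x_, y_] := x + y;    (* assumed test slope function *)
h = 0.1; n = 10;
x = Table[k*h, {k, 0, n}];
y = ConstantArray[0.0, n + 1];
y[[1]] = 1.0;    (* assumed initial value y(0) = 1 *)
y[[2]] = y[[1]] + (h/2)*(f[x[[1]], y[[1]]] + f[x[[2]], y[[1]] + h*f[x[[1]], y[[1]]]]);    (* Heun start-up *)
Do[p = y[[k]] + (h/2)*(3*f[x[[k]], y[[k]]] - f[x[[k - 1]], y[[k - 1]]]);    (* Adams--Bashforth predictor *)
  y[[k + 1]] = (4*y[[k]] - y[[k - 1]] + 2*h*f[x[[k + 1]], p])/3,    (* second-order BDF corrector *)
 {k, 2, n}];
Transpose[{x, y}]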

 

 
