This section demonstrates applications of Picard's iteration scheme to the power series determination of solutions of initial value problems for second and higher order differential equations.
Picard's theorem on the existence and uniqueness of solutions of first order differential equations was discussed previously in the introductory section. Let us outline the main ideas in Picard's iteration procedure, starting with the first order initial value problem
\begin{equation} \label{EqPicard.1}
y' = f(x,y) , \qquad y\left( x_0 \right) = y_0 .
\end{equation}
Here \( y' = {\text d}y/{\text d}x \) is the derivative of the function y(x). It is convenient to use the derivative operator \( \texttt{D} = {\text d}/{\text d}x \) and rewrite Eq.\eqref{EqPicard.1} in operator form:
\[
\texttt{D}\, y = f(x,y) .
\]
The iteration procedure is a very basic mathematical technique that conceptually sets the tone for a large class of analytical and numerical methods. We are going to apply this scheme to determine series solutions for a wide class of differential equations.
Since the differential equation contains the unbounded derivative operator, one can hardly expect that an iteration procedure applied directly to the differential equation \eqref{EqPicard.1} will lead to an acceptable solution. Therefore, the first step is to get rid of this unbounded operator by integration:
\begin{equation} \label{EqPicard.2}
y(x) = y\left( x_0 \right) + \int_{x_0}^x f\left( s,y(s) \right)\,{\text d} s .
\end{equation}
There are some advantages in converting the given initial value problem \eqref{EqPicard.1} into an equivalent problem without derivatives. In the right-hand side integral you can recognize the inverse of the derivative operator \( \texttt{D} = {\text d}/{\text d}x , \) acting in the space of smooth functions vanishing at x0; the term y(x0) is added to satisfy the initial condition. Next, the form of Eq.\eqref{EqPicard.2} tells us that we can apply a fixed point theorem to the equation y = Φ(y) for the bounded integral operator \( \Phi (y) = y\left( x_0 \right) + \int_{x_0}^x f\left( s,y(s) \right)\,{\text d} s . \)
Eq.\eqref{EqPicard.2} is known as the Volterra integral equation of the second kind, which is suitable for the iteration procedure:
\begin{equation} \label{EqPicard.3}
\phi_{m+1} (x) = y\left( x_0 \right) + \int_{x_0}^x f\left( s,\phi_m (s) \right)\,{\text d} s , \qquad m=0,1,2,\ldots , \qquad \phi_0 = y\left( x_0 \right) .
\end{equation}
When the slope function f(x, y) is Lipschitz continuous with respect to the variable y, Picard's iteration \eqref{EqPicard.3} converges uniformly to the unique solution of the initial value problem \eqref{EqPicard.1} on some interval containing the initial point x0.
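Although the implementations below use Mathematica, the contraction underlying \eqref{EqPicard.3} can be watched numerically in any language. The following plain Python sketch (the model problem y' = y, y(0) = 1 is chosen here for illustration, not taken from the text) applies the integral operator on a uniform grid with the trapezoidal rule:

```python
import math

# Numerical Picard iteration for y' = f(x, y), y(x0) = y0:
# the integral operator is applied on a uniform grid with the
# cumulative trapezoidal rule, starting from phi_0 = y0.
def picard(f, x0, y0, x_end, n_points=201, iterations=25):
    h = (x_end - x0) / (n_points - 1)
    xs = [x0 + i * h for i in range(n_points)]
    ys = [y0] * n_points                 # phi_0 is the constant y(x0)
    for _ in range(iterations):
        fs = [f(x, y) for x, y in zip(xs, ys)]
        new, acc = [y0], 0.0
        for i in range(1, n_points):     # cumulative trapezoidal integral
            acc += 0.5 * h * (fs[i - 1] + fs[i])
            new.append(y0 + acc)
        ys = new
    return xs, ys

# Model problem y' = y, y(0) = 1, whose exact solution is exp(x);
# after the iteration converges, only the grid discretization error remains.
xs, ys = picard(lambda x, y: y, 0.0, 1.0, 1.0)
print(abs(ys[-1] - math.e))
```

After 25 iterations the Picard truncation error (of order x^25/25! on [0, 1]) is negligible, so the remaining discrepancy is the trapezoidal quadrature error of the grid.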
Picard's iteration scheme can be implemented in Mathematica in many ways. Some of them are presented below.
Clear[picardSeries, iterate];
iterate[initial_, flow_, psi_, n_, t_] :=
  initial +
   Expand@Apply[Subtract,
     Assuming[{Subscript[t, n + 1] > 0},
       Integrate[flow[Subscript[t, n + 1], psi],
        Subscript[t, n + 1]]] /. {{Subscript[t, n + 1] ->
        Subscript[t, n]}, {Subscript[t, n + 1] -> 0}}]
picardSeries[initial_, flow_, n_, var_] :=
  iterate[initial, flow,
    Fold[iterate[initial, flow, #1, #2, var] &, initial,
     Reverse[Range[2, n]]], 1, var] /. Subscript[var, 1] -> var
The iteration step is called iterate, and it keeps track of the iteration order n so that it can assign a separate integration variable name at each step. As an additional trick to speed things up, the automatic simplifications for definite integrals are avoided by doing the integral as an indefinite one first, then using Subtract to apply the integration limits.
Instead of a generic simplification, Expand is applied to the result in order to help the subsequent integration step recognize how to split up the integral over the current result. The last argument of iterate is the name intended as the base variable name for the integration, but it will get a subscript n attached to it.
Then iterate is called by picardSeries, using increasing subscripts starting with the outermost integration. This is implemented using Fold with a Reverse range of indices. The first input is the initial condition, followed by the function defining the flow (specifically, defined below as flow[t_, f_]). After the order n, I also specify the name of the independent variable var as the last argument.
The following example involving the Riccati equation illustrates this code in action.
This solution works equally well for vector initial value problems, i.e., the flow can be a vector function and the initial condition a vector. It can also be extended to higher order differential equations.
Example:
Let us consider the initial value problem
\[
y' = x^2 + y^2 , \qquad y(0) = 1 .
\]
Application of Picard's iteration scheme yields a sequence of functions
\begin{align*}
\phi_1 (x) &= 1 + \frac{x^3}{3} + \int_0^x 1^2 {\text d} s = 1 + x + \frac{x^3}{3} ,
\\
\phi_2 (x) &= 1 + \frac{x^3}{3} + \int_0^x \phi_1^2 {\text d} s = 1 + x + x^2 + \frac{2\,x^3}{3} + \frac{x^4}{6} + \frac{2\,x^5}{15} + \frac{x^7}{63} ,
\\
\phi_3 (x) &= 1 + \frac{x^3}{3} + \int_0^x \phi_2^2 {\text d} s = 1 + x + x^2 + \frac{4\,x^3}{3} + \frac{5\,x^4}{6} + \frac{8\,x^5}{15} + \frac{29\,x^6}{90} + \frac{47\,x^7}{315}
\\
& \quad + \frac{41\,x^8}{630} + \frac{299\,x^9}{11340} + \frac{4\,x^{10}}{525} + \frac{184\,x^{11}}{51975} + \frac{x^{12}}{2268} + \frac{4\,x^{13}}{12285} + \frac{x^{15}}{59535} .
\end{align*}
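These hand computations can be verified mechanically. The following sketch in plain Python (offered as an alternative to the Mathematica code above, with each iterate stored as a list of exact rational coefficients, index = degree) reproduces \( \phi_2 \) exactly:

```python
from fractions import Fraction as F

def poly_mul(p, q):
    # product of polynomials given as coefficient lists (index = degree)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += a * c
    return r

def poly_int(p):
    # antiderivative vanishing at x = 0
    return [F(0)] + [c / (k + 1) for k, c in enumerate(p)]

def poly_add(p, q):
    m = max(len(p), len(q))
    p = p + [F(0)] * (m - len(p))
    q = q + [F(0)] * (m - len(q))
    return [a + c for a, c in zip(p, q)]

# phi_{m+1}(x) = 1 + x^3/3 + \int_0^x phi_m(s)^2 ds,  with phi_0 = 1
base = [F(1), F(0), F(0), F(1, 3)]       # 1 + x^3/3
phi = [F(1)]
for _ in range(2):
    phi = poly_add(base, poly_int(poly_mul(phi, phi)))
# phi now holds the coefficients of phi_2 shown above
```

Running one more pass of the loop reproduces \( \phi_3 \) as well; the exact rational arithmetic avoids any floating-point drift in the higher-degree noise terms.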
The next iteration requires a computer algebra system because manual integration would take too much effort. Of course, Mathematica can handle it, but the analytical expressions grow like a snowball. We compare the last approximation with the true expansion:
So we observe that the n-th iterative term gives a correct n-th degree polynomial accompanied by noise terms of higher degree. With every iterative step we obtain a better polynomial approximation, contaminated by an inaccurate higher-degree noise polynomial. Now we compare Picard's approximation with the true solution.
and so on. Every iteration contains some correct terms and noise terms of higher degree. This is expected because the given Riccati equation is a variable coefficient equation.
Since every iteration step in \eqref{EqPicard.3} requires integration, it can be performed efficiently only when the slope function is a polynomial. Therefore, Picard's method loses its effectiveness even for relatively simple problems such as the pendulum equation. This criticism can be overcome by applying a well-known transformation technique or polynomial approximations (as shown by L.N. Trefethen using Chebyshev polynomials).
Second order differential equations
In this subsection, we specifically consider
second order ordinary differential equations (ODEs for short), i.e., equations of the form
\begin{equation} \label{EqPicard.4}
y'' = f\left( x, y, y' \right) , \qquad y\left( x_0 \right) = y_0 , \quad y' \left( x_0 \right) = y_1 .
\end{equation}
The theory of second order ODEs is an extensive and highly developed part of mathematics, not least because of its numerous applications in the physical sciences. For example, Newton told us, with impressive foresight, that the radial motion of the space shuttle is governed by the equation
None of these equations is suitable for practical Picard iteration because they involve nonpolynomial functions. Therefore, they can be solved with Picard's method only after transformation into polynomial form; this will be shown in the second part of the tutorial.
Integrating the differential equation \eqref{EqPicard.4} twice, we obtain
\begin{equation} \label{EqPicard.5}
y(x) = y\left( x_0 \right) + y' \left( x_0 \right) \left( x - x_0 \right) + \int_{x_0}^x \left( x - s \right) f\left( s, y(s), y' (s) \right) {\text d} s .
\end{equation}
Sometimes (at least in the case of linear equations) it is possible to integrate by parts and eliminate the derivative from under the integral sign in Eq.\eqref{EqPicard.5}.
Next, we illustrate the iteration procedure on several examples.
Example:
Consider the case of the Airy
equation that occurs in diffraction theory
with the initial conditions left undetermined. It is convenient to use some constants A and B that can be set to particular numerical values later. In particular, choosing A = 1, B = 0 (calling this case "Dirichlet"), and then A = 0, B = 1 (calling this case "Neumann"), we obtain two linearly independent solutions of the given Airy equation. Picard's iteration scheme simply carries the unknown constants A and B through the calculation. The solution is known and is expressed through Airy functions:
where \( \Gamma (2/3) = \int_0^{\infty} t^{-1/3} e^{-t} {\text d}t \approx 1.35412 , \)
and Ai(x) and Bi(x) are the Airy function of the first kind and second kind, respectively; they can be defined by the improper Riemann integrals:
aa + bb x - (4 aa x^3)/3 - (2 bb x^4)/3 + (16 aa x^6)/45 + (8 bb x^7)/63
 - (16 aa x^9)/405 - (32 bb x^10)/2835 + (32 aa x^12)/13365
 + (64 bb x^13)/110565 - (128 aa x^15)/1403325 - (32 bb x^16)/1658475
Now we perform Picard's iterations. For the Airy equation with "Dirichlet" initial conditions (A = 1 and B = 0),
the iteration scheme \eqref{EqPicard.5} takes the form
Each term from the recursive scheme above satisfies the initial conditions
\( \displaystyle \phi_m (0) = 1, \quad \phi'_m (0) = 0 . \) Using Mathematica, we find the first few terms:
and so on. As we see, each iteration brings us closer and closer to the true polynomial expansion without noise terms. This is typical when the differential equation is homogeneous. When the equation is inhomogeneous, as in the previous example for the Riccati equation, noise terms are usually present.
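The Dirichlet iterates can be cross-checked with exact rational arithmetic. The sketch below, in plain Python, assumes the standard Airy form \( y'' = x\,y \) (if the tutorial's equation carries a different constant factor, only the coefficients rescale); the kernel \( (x-s) \) in \eqref{EqPicard.5} amounts to two repeated integrations:

```python
from fractions import Fraction as F

def poly_mul(p, q):
    # product of polynomials given as coefficient lists (index = degree)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += a * c
    return r

def poly_int(p):
    # antiderivative vanishing at x = 0
    return [F(0)] + [c / (k + 1) for k, c in enumerate(p)]

# phi_{m+1}(x) = 1 + \int_0^x (x - s) s phi_m(s) ds  for y'' = x y,
# y(0) = 1, y'(0) = 0; the kernel (x - s) equals two repeated integrations.
phi = [F(1)]                                 # phi_0 = A = 1 ("Dirichlet")
for _ in range(3):
    g = poly_mul([F(0), F(1)], phi)          # s * phi_m(s)
    nxt = poly_int(poly_int(g))              # double antiderivative
    nxt[0] += F(1)                           # restore the initial value
    phi = nxt
# phi_3 = 1 + x^3/6 + x^6/180 + x^9/12960
```

As with the homogeneous Airy equation in the text, each iterate adds one more correct term of the series and produces no noise terms.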
The equation in this form is not suitable for Picard's iteration because it leads to non-polynomial integrands.
Now we perform a few iterations according to algorithm (A). We split the work into two parts: first we set y(0) = 1, y'(0) = 0, and then repeat with other initial conditions.
This second order differential equation is named after the American astrophysicist Jonathan Homer Lane (1819--1880) and the Swiss astrophysicist and meteorologist Jacob Robert Emden (1862--1940). It is a so-called singular differential equation, having a critical point at t = 0. The solutions of the Lane--Emden equation that are finite at the origin necessarily have dy/dt = 0 at t = 0. The Lane--Emden operator admits a factorization:
where the homogeneous Lane--Emden equation is known to have an explicit solution for n = 0, 1, 5. For this equation, we do not have general existence and uniqueness theorems, and there could be singular solutions. In particular, an explicit solution for n = 5 is
\[
y(t) = \left( 1 + \frac{t^2}{3} \right)^{-1/2} .
\]
Here ··· denotes the noise terms of order O(t⁸). So with three iterations, we obtain an accurate sixth-degree polynomial approximation of the actual solution.
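The closed-form solution for n = 5 is easy to verify numerically. Below is a quick check in plain Python, assuming the standard Lane--Emden form \( y'' + \frac{2}{t}\, y' + y^5 = 0 \) and approximating the derivatives by central differences:

```python
# Numerical check that y(t) = (1 + t^2/3)^(-1/2) satisfies the
# Lane--Emden equation with n = 5:  y'' + (2/t) y' + y^5 = 0.
def y(t):
    return (1.0 + t * t / 3.0) ** -0.5

def residual(t, h=1e-4):
    # first and second derivatives approximated by central differences
    d1 = (y(t + h) - y(t - h)) / (2.0 * h)
    d2 = (y(t + h) - 2.0 * y(t) + y(t - h)) / (h * h)
    return d2 + (2.0 / t) * d1 + y(t) ** 5

res = max(abs(residual(t)) for t in (0.5, 1.0, 2.0, 5.0))
print(res)   # close to zero, up to finite-difference error
```

The residual stays at the level of the finite-difference error, confirming the stated solution away from the singular point t = 0.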
■
N-th order differential equations
Consider an n-th order (nonlinear) differential equation subject to the initial conditions:
\begin{equation} \label{EqPicard.n1}
y^{(n)} = f\left( x, y, y' , \ldots , y^{(n-1)} \right) , \qquad y^{(k)} \left( x_0 \right) = \alpha_k , \quad k = 0, 1, \ldots , n-1 .
\end{equation}
As usual, \( y^{(k)} = \texttt{D}^k y = {\text d}^k y / {\text d} x^k \) is the k-th derivative of the function y(x), with \( y' = \texttt{D}\, y = {\text d} y / {\text d} x \) and \( y'' = \texttt{D}^2 y = {\text d}^2 y / {\text d} x^2 . \) It is assumed that the given initial value problem \eqref{EqPicard.n1} has a unique solution for the given n constants α0, α1, … , αn-1.
We integrate Eq.\eqref{EqPicard.n1} repeatedly, n times, to get
\begin{equation} \label{EqPicard.n2}
y(x) = p_0 (x) + \int_{x_0}^x \frac{\left( x - s \right)^{n-1}}{(n-1)!} \, f\left( s, y(s), y' (s), \ldots , y^{(n-1)} (s) \right) {\text d} s ,
\end{equation}
where
\[
p_0 (x) = \sum_{k=0}^{n-1} \alpha_k \frac{\left( x - x_0 \right)^k}{k!}
\]
is called the initial polynomial because it satisfies the given initial conditions in \eqref{EqPicard.n1}.
Equation \eqref{EqPicard.n2} is now solved by iteration, upon choosing a suitable starting function
\[
\phi_0 = p_0 (x)
\]
that satisfies the initial conditions, and then substituting φ0 into the right hand side of \eqref{EqPicard.n2}, which can be integrated to yield φ1. Repeating the iterative process, we get the general formula to obtain φm+1 from φm:
\begin{equation} \label{EqPicard.n3}
\phi_{m+1} (x) = p_0 (x) + \int_{x_0}^x \frac{\left( x - s \right)^{n-1}}{(n-1)!} \, f\left( s, \phi_m (s), \phi'_m (s), \ldots , \phi_m^{(n-1)} (s) \right) {\text d} s , \qquad m = 0, 1, 2, \ldots .
\end{equation}
The iteration process \eqref{EqPicard.n3} is terminated when the required degree of accuracy is
attained.
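The n-fold integral in \eqref{EqPicard.n2} rests on Cauchy's formula for repeated integration: a single integral with the kernel \( (x-s)^{n-1}/(n-1)! \) equals n successive antiderivatives. For polynomial integrands this is easy to confirm; here is a small plain Python sketch with exact rationals (the choices n = 3 and \( g(s) = s^4 \) are arbitrary):

```python
from fractions import Fraction as F
from math import factorial

def poly_int(p):
    # antiderivative vanishing at x = 0 (coefficient list, index = degree)
    return [F(0)] + [c / (k + 1) for k, c in enumerate(p)]

def repeated_int(p, n):
    # n-fold antiderivative vanishing at 0 together with its derivatives
    for _ in range(n):
        p = poly_int(p)
    return p

# Cauchy's formula predicts
#   \int_0^x (x-s)^{n-1}/(n-1)! s^k ds = k!/(k+n)! x^{k+n}.
n, k = 3, 4
g = [F(0)] * k + [F(1)]                  # g(s) = s^k
lhs = repeated_int(g, n)
print(lhs[k + n], F(factorial(k), factorial(k + n)))   # both 1/210
```

This identity is exactly what makes the polynomial Picard iteration for higher order equations a matter of term-by-term integration.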
Example:
The Blasius equation of boundary layer flow is a third-order nonlinear differential equation:
\[
2\,f''' + f\,f'' = 0, \qquad f(0) = f' (0) =0, \quad f'' (0) = b ,
\]
where
\[
b \approx 0.332057336215196298937180062010
\]
is the Blasius constant that ensures the required behavior of the first derivative at infinity: \( \lim_{\eta \to \infty} f' (\eta ) = 1. \) This constant was calculated to high accuracy by V.P. Varin in 2013. Now we convert the initial value problem for the Blasius equation into the integral equation
\[
f(x) = b\,\frac{x^2}{2} - \frac{1}{2} \int_0^x \frac{\left( x - s \right)^2}{2!} \,f(s)\,f'' (s)\,{\text d}s .
\tag{A}
\]
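Equation (A) can be iterated with exact rational polynomial arithmetic, since the kernel \( (x-s)^2 /2! \) corresponds to three repeated integrations. In the plain Python sketch below, the constant b is kept as an exact parameter and set to 1 for the check; with general b the expansion begins \( f = \frac{b}{2}\, x^2 - \frac{b^2}{240}\, x^5 + \frac{11\, b^3}{161280}\, x^8 - \cdots \):

```python
from fractions import Fraction as F

def poly_mul(p, q):
    # product of polynomials given as coefficient lists (index = degree)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += a * c
    return r

def poly_int(p):
    # antiderivative vanishing at x = 0
    return [F(0)] + [c / (k + 1) for k, c in enumerate(p)]

def poly_sub(p, q):
    m = max(len(p), len(q))
    p = p + [F(0)] * (m - len(p))
    q = q + [F(0)] * (m - len(q))
    return [a - c for a, c in zip(p, q)]

def d2(p):
    # second derivative, term by term
    return [k * (k - 1) * c for k, c in enumerate(p)][2:]

b = F(1)                        # exact placeholder for the Blasius constant
base = [F(0), F(0), b / 2]      # phi_0 = b x^2 / 2
phi = base
for _ in range(2):
    g = poly_mul(phi, d2(phi))  # f f''
    for _ in range(3):          # kernel (x-s)^2/2! = three integrations
        g = poly_int(g)
    phi = poly_sub(base, [c / 2 for c in g])
# phi_2 = x^2/2 - x^5/240 + 11 x^8/161280 - x^11/5702400  (with b = 1)
```

Substituting the numerical Blasius constant for b turns these exact coefficients into the quantitative polynomial approximation of f(η) near the wall.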
Therefore, we conclude that both algorithms give identical outputs. Now we compare Picard's polynomial approximations with the true Maclaurin expansion.
Robin, W.A., Solving differential equations using modified Picard iteration, International Journal of Mathematical Education in Science and Technology, 2010, Volume 41, Issue 5, pp. 649--665. https://doi.org/10.1080/00207391003675182
Yu, Lien-Tsai and Chen, Cha'o-Kuang, The solution of the Blasius equation by the differential transformation method, Mathematical and Computer Modelling, 1998, Volume 28, Issue 1, pp. 101--111. https://doi.org/10.1016/S0895-7177(98)00085-5