Historically, Picard's iteration scheme was the first method for solving nonlinear differential equations analytically, and it was discussed in the first part of the course (see the introductory section xv Picard). In this section, we extend the procedure to systems of first order differential equations written in normal form (that is, with the derivatives isolated), \( \dot{\bf x} = {\bf f}(t, {\bf x}) . \) Picard's iteration for a single differential equation \( {\text d}x/{\text d}t = f(t,x) \) was considered in detail in the first tutorial, so our main interest here is applying it to systems. Although the method is rarely used for actual evaluations, due to its slow convergence and the obstacles to performing explicit integrations, it is very important for understanding the material: working with Picard's iterations and their refinements helps develop computational skills. This section also gives some motivating examples of transforming differential equations with complicated slope functions into systems of ODEs with polynomial driving terms, which are suitable for Picard's iteration.
A system of n first order differential equations in normal form reads
\[
\frac{{\text d}x_i}{{\text d}t} = f_i \left( t, x_1 , x_2 , \ldots , x_n \right) , \qquad i = 1, 2, \ldots , n .
\]
This explicit notation is not concise and is difficult to work with. Hence, we introduce n-dimensional column vectors of the unknown variables and of the slope functions in order to rewrite the system above in compact form. So the system of differential equations can be written in vector form as
\[
\frac{{\text d}{\bf x}}{{\text d}t} = {\bf f}\left( t, {\bf x} \right) , \qquad \mbox{where} \quad
{\bf x}(t) = \begin{bmatrix} x_1 (t) \\ x_2 (t) \\ \vdots \\ x_n (t) \end{bmatrix} \quad\mbox{and}\quad
{\bf f}(t, {\bf x}) = \begin{bmatrix} f_1 (t, {\bf x}) \\ f_2 (t, {\bf x}) \\ \vdots \\ f_n (t, {\bf x}) \end{bmatrix}
\]
are n-column vectors. Note that the preceding system of equations contains
n dependent variables \( x_1 (t), x_2 (t) , \ldots ,
x_n (t) \) while the independent variable is denoted by t,
which may be associated with time. In engineering and physics, it is customary to follow Isaac Newton and denote a derivative with respect to the time variable t by a dot:
\( \dot{\bf x} = {\text d}{\bf x} / {\text d} t. \)
When the input function f satisfies some general conditions (usually Lipschitz continuity in the dependent variable \( {\bf x} \)), then
Picard's iteration converges to a solution of the Volterra integral equation
in some small neighborhood of the initial point. Thus, Picard's iteration is
an essential ingredient in proving the existence of solutions to initial value problems.
Note that Picard's iteration is not suitable for numerical calculations. The reason is not only slow convergence: more importantly, it is in general impossible to perform the explicit integration required to obtain the next iterate. In the case of a polynomial input function, the integration can be performed explicitly (especially with the aid of a computer algebra system), but the number of terms grows rapidly from iteration to iteration. While the resulting series eventually converges to the true solution, its interval of convergence is usually too small to justify keeping many iterations. In another section, we show how to bypass this obstacle.
If the initial position of the vector \( {\bf x} (t) \) is known, we get an initial value problem:
\begin{equation} \label{EqPicard.1}
\frac{{\text d}{\bf x}}{{\text d}t} = {\bf f}(t, {\bf x}) , \qquad {\bf x}(t_0 ) = {\bf x}_0 ,
\end{equation}
where \( {\bf x}_0 \) is the initial column vector.
Many brilliant mathematicians contributed to proving the existence of a solution to this initial value problem more than a hundred years ago. The proofs were based on what is now called Picard's iteration, named after the French mathematician Charles Émile Picard (1856--1941), whose theories did much to advance research in analysis, algebraic geometry, and mechanics.
The Picard procedure is actually a practical extension of the Banach fixed point theorem, which applies to contraction mappings. Since any differential equation involves the unbounded derivative operator \( \texttt{D} = {\text d}/{\text d}t , \) the fixed point theorem is not directly applicable. To bypass this obstacle, Picard suggested applying a bounded inverse operator to the first derivative \( \texttt{D} . \) Recall that the inverse \( \texttt{D}^{-1} , \) known in the mathematical literature as the antiderivative, is not a true operator because it assigns infinitely many outputs to every input function. To restrict its output to a single one, we consider the derivative operator on the set of functions satisfying a specified initial condition \( {\bf x}(t_0 ) = {\bf x}_0 \) (this set becomes a vector space only when the differential equation and the initial condition are both homogeneous). Denoting by L the derivative operator on this set, its inverse \( L^{-1} \) is a bounded integral operator.
The first step in deriving Picard's iterations is to rewrite the given initial value problem in the equivalent form (valid when the slope function f is continuous) of a Volterra integral equation of the second kind:
\begin{equation} \label{EqPicard.2}
{\bf x}(t) = {\bf x}_0 + \int_{t_0}^t {\bf f}\left( s, {\bf x}(s) \right) {\text d}s .
\end{equation}
This integral equation is obtained upon integrating both sides of the differential equation \( {\text d}{\bf x} / {\text d} t = {\bf f}(t, {\bf x}) \) and applying the initial condition. The equivalence follows from the Fundamental Theorem of Calculus. It suffices to find a continuous function \( {\bf x}(t) \) that satisfies the integral equation within the interval \( t_0 -h < t < t_0 +h \) for some small value h, since the right-hand side (the integral) is then continuously differentiable in t.
Assuming that the vector function \( {\bf f} (t, {\bf x}) \) satisfies the Lipschitz condition in x,
\[
\left\| {\bf f}(t, {\bf x}) - {\bf f}(t, {\bf y}) \right\| \le L \left\| {\bf x} - {\bf y} \right\| ,
\]
where the constant L is called the Lipschitz constant, the initial value problem \eqref{EqPicard.1} has a unique solution in some neighborhood of the initial position.
Now we apply the technique known as Picard's iteration to construct the required solution:
\begin{equation} \label{EqPicard.3}
{\bf x}_{m+1} (t) = {\bf x}_0 + \int_{t_0}^t {\bf f}\left( s, {\bf x}_m (s) \right) {\text d}s , \qquad m = 0, 1, 2, \ldots .
\end{equation}
The initial approximation is chosen to be the initial value (constant): \( {\bf x}_0 (t) \equiv {\bf x}_0 \) (the sign ≡ indicates that the function is identically constant).
When the input function f(t, x) satisfies some (sufficient)
conditions, it can be shown that this iterative sequence converges to
the true solution uniformly
on the interval \( [t_0 -h, t_0 +h ] \) when
h is small enough.
Actually, necessary and sufficient conditions for the existence of a solution to a vector initial value problem remain unknown.
Of course, we can differentiate the recursive integral relation to obtain an equivalent sequence of initial value problems
\begin{equation} \label{EqPicard.4}
\frac{{\text d}{\bf x}_{m+1}}{{\text d}t} = {\bf f}\left( t, {\bf x}_m (t) \right) , \qquad {\bf x}_{m+1} (t_0 ) = {\bf x}_0 , \qquad m = 0, 1, 2, \ldots .
\end{equation}
However, the recurrence \eqref{EqPicard.4} involves the derivative operator \( \texttt{D} = {\text d}/{\text d}t . \) Since \( \texttt{D} \) is an unbounded operator, it is not possible to prove convergence of the iteration procedure \eqref{EqPicard.4} directly. On the other hand, the convergence of \eqref{EqPicard.3}, which involves a bounded integral operator, can be established directly (see Tutorial I).
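When the slope function is polynomial (or linear, as in the test below), the recurrence \eqref{EqPicard.3} is easy to run in Mathematica. The following is a minimal sketch; the function name picard and the test system are our illustrative choices, not part of the theory above:

(* minimal sketch of Picard's iteration (EqPicard.3) for a vector IVP
   x'(t) = f(t, x), x(t0) = x0, starting from the constant vector x0 *)
picard[f_, x0_List, t0_, t_, m_Integer] :=
  Nest[Function[phi, x0 + Integrate[f[s, phi /. t -> s], {s, t0, t}]], x0, m]
(* test on x1' = x2, x2' = -x1 with x(0) = {0, 1}; four iterations reproduce
   the Maclaurin polynomials of {Sin[t], Cos[t]} *)
picard[Function[{s, x}, {x[[2]], -x[[1]]}], {0, 1}, 0, t, 4]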
Example 1:
Using Newton's laws of motion and gravitation, one can in theory work out how a number of objects (say, stars, planets, and moons) should move due to the gravitational pull they exert on each other. If there are only two bodies involved, the answer is straightforward: they move along curves that are easy to describe (ellipses, in the case of a planet orbiting a star). The problem of three or more bodies occupied the scientific community for a long time. Henri Poincaré submitted a prize-winning solution of the three-body problem on November 30, 1889, which unfortunately contained a mistake. He revised his competition entry, still received the prize, and later wrote more extensively about what the mistake had revealed to him: the existence of chaos.
Let xij and vij represent the i-th position coordinate and the i-th velocity component, respectively, of particle j = 1, 2, …, N with mass mj. Furthermore, let \( r_{jk} = \left[ \sum_{i=1}^3 \left( x_{ij} - x_{ik} \right)^2 \right]^{1/2} \) denote the separation distance between the centers of particles j and k, where k = 1, 2, …, N. In classical form, the N-body problem consists of 6N coupled autonomous first-order ODEs, namely
\begin{equation*} \label{EqNbody.1}
\begin{split}
\frac{{\text d} x_{ij}}{{\text d}t} = v_{ij} ,
\\
\frac{{\text d} v_{ij}}{{\text d}t} = \sum_{k\ne j} m_k \left( x_{ik} - x_{ij} \right) / r_{jk}^3 .
\end{split} \tag{1.1}
\end{equation*}
By defining the inverse separation distances to be \( s_{jk} = 1 / r_{jk} \ (j\ne k) , \) we derive the following polynomial
formulation of the N-body problem:
\begin{equation*} \label{EqNbody.2}
\begin{cases}
\frac{{\text d} x_{ij}}{{\text d}t} = v_{ij} ,
\\
\frac{{\text d} v_{ij}}{{\text d}t} = \sum_{k\ne j} m_k \left( x_{ik} - x_{ij} \right) s_{jk}^3 ,
\\
\frac{{\text d} s_{jk}}{{\text d}t} = - s_{jk}^3 \sum_{i=1}^3 \left( x_{ik} - x_{ij} \right) \left( v_{ik} - v_{ij} \right) .
\end{cases} \tag{1.2}
\end{equation*}
Even when symmetry is exploited ( \( s_{jk} = s_{kj} \) ), we have paid a price for the polynomial form: the number of equations has increased to n = 6N + N(N - 1)/2. The gain is that the system now readily admits a Maclaurin-series solution of arbitrary order.
Let xijl denote the l-th order coefficient of the Maclaurin series for xij. That is,
\( x_{ij}(t) = \sum_{l\ge 0} x_{ijl} t^l . \)
Let similar notation hold for vij and sjk.
In light of the theoretical considerations mentioned previously, the coefficients for the m-th term are obtained simply by integrating (1.2) with respect to t, to yield
\begin{equation*} \label{EqNbody.3}
x_{ijm} = v_{i,j,m-1}/m .
\tag{1.3}
\end{equation*}
Similarly, we have
\begin{equation*} \label{EqNbody.4}
v_{ijm} = \frac{1}{m} \sum_{k\ne j} m_k \sum_{l=0}^{m-1} \left( x_{ikl} - x_{ijl} \right) \left( s^3 \right)_{j,k,m-1-l} , \tag{1.4}
\end{equation*}
where
\begin{equation*} \label{EqNbody.5}
\left( s^2 \right)_{jkm} = \sum_{l=0}^m s_{jkl}\, s_{j,k,m-l} , \qquad
\left( s^3 \right)_{jkm} = \sum_{l=0}^m \left( s^2 \right)_{jkl} s_{j,k,m-l}
\tag{1.5}
\end{equation*}
are convolutions of the sequence s with itself.
Finally, by defining
\begin{equation*} \label{EqNbody.6}
a_{jkm} = \sum_{l=0}^m \sum_{i=1}^3 \left( x_{ijl} - x_{ikl} \right) \left( v_{i,j,m-l} - v_{i,k,m-l} \right) ,
\tag{1.6}
\end{equation*}
we obtain from the third equation of (1.2)
\begin{equation*} \label{EqNbody.7}
s_{jkm} = - \frac{1}{m} \sum_{l=0}^{m-1} \left( s^3 \right)_{jkl} a_{j,k,m-1-l} .
\tag{1.7}
\end{equation*}
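The Maclaurin coefficients of the polynomial system (1.2) can also be generated in Mathematica without coding the convolutions (1.3)--(1.7) by hand, using the built-in AsymptoticDSolveValue command. The sketch below sets up a planar two-body instance; the masses and the near-circular initial data are illustrative assumptions, not data from this example:

(* planar two-body problem in the polynomial form (1.2); s12 = 1/r12 *)
m1 = 1; m2 = 1/100;
eqs = {x1'[t] == u1[t], y1'[t] == v1[t],
       x2'[t] == u2[t], y2'[t] == v2[t],
       u1'[t] == m2 (x2[t] - x1[t]) s12[t]^3,
       v1'[t] == m2 (y2[t] - y1[t]) s12[t]^3,
       u2'[t] == m1 (x1[t] - x2[t]) s12[t]^3,
       v2'[t] == m1 (y1[t] - y2[t]) s12[t]^3,
       s12'[t] == -s12[t]^3 ((x2[t] - x1[t]) (u2[t] - u1[t]) +
                             (y2[t] - y1[t]) (v2[t] - v1[t]))};
ics = {x1[0] == 0, y1[0] == 0, u1[0] == 0, v1[0] == 0,
       x2[0] == 1, y2[0] == 0, u2[0] == 0, v2[0] == 1, s12[0] == 1};
AsymptoticDSolveValue[Join[eqs, ics], {x2[t], y2[t]}, {t, 0, 6}]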
The three-body problem with close encounters is notoriously ill-conditioned because it admits chaotic solutions that manifest extreme sensitivity to initial conditions. The following example demonstrates Picard's method for an ill-conditioned three-body problem comprising the earth, its moon, and a "spacecraft". The numerical values for this three-body problem are presented in the following table; the data are only approximately related to realistic values. For all bodies, the initial coordinate x3 and velocity v3 are assumed to be zero so that the orbits remain coplanar. Masses are scaled relative to the mass of the earth; distances are in earth radii. Velocity is normalized by the orbital velocity at the earth's surface, for which the orbital period is 2π. The integration time (T = 3200) is in excess of 500 orbital periods.
Picard's iteration for higher order differential equations
Now we turn our attention to applying Picard's iteration procedure to initial value problems for single higher order (linear or nonlinear) differential equations:
\begin{equation} \label{EqPicard.5}
x^{(n)} = f\left( t, x, x' , \ldots , x^{(n-1)} \right) , \qquad x(t_0 ) = x_0 , \quad x' (t_0 ) = x_1 , \quad \ldots , \quad x^{(n-1)} (t_0 ) = x_{n-1} ,
\end{equation}
where \( x^{(n)} (t) = \frac{{\text d}^n x}{{\text d}t^n} = \texttt{D}^n x(t) \) stands for the n-th derivative. There are usually two options to achieve this. First, we can transfer this single differential equation to an equivalent system of first order ODEs upon introducing new dependent variables:
\[
y_1 = x , \quad y_2 = \dot{x} , \quad \ldots , \quad y_n = x^{(n-1)} \qquad \Longrightarrow \qquad
\dot{y}_1 = y_2 , \quad \ldots , \quad \dot{y}_{n-1} = y_n , \quad \dot{y}_n = f\left( t, y_1 , y_2 , \ldots , y_n \right) .
\]
Rewriting this system of equations in a standard vector form
\( \dot{\bf y} = {\bf f}(t, {\bf y}) , \) we can apply Picard's iterations for column-vector y.
Another approach is based on an extension of Picard's idea to higher order equations. It rests on the observation that the initial value problem \eqref{EqPicard.5} is equivalent to the integral equation
\[
x(t) = \sum_{k=0}^{n-1} a_k \left( t - t_0 \right)^k + \frac{1}{(n-1)!} \int_{t_0}^t \left( t-s \right)^{n-1} f\left( s, x(s), x' (s), \ldots , x^{(n-1)} (s) \right) {\text d}s ,
\]
where the coefficients \( a_k = x_k / k! \) are determined by the first n initial values of x and its derivatives, so that the initial conditions in \eqref{EqPicard.5} are satisfied.
Since the convergence of the successive approximations to the true solution is
often slow, it could be beneficial to choose the initial approximation wisely.
All approaches are demonstrated in a series of examples.
Example 2:
Consider the initial value problem for the first order differential equation
If we start with the initial condition ϕ0 = 1, we almost immediately run into the problem of explicit integration. Indeed, we calculate the first few terms:
This example illustrates that Picard's scheme is not always practical because the iteration cannot be performed indefinitely: the iterates are not explicitly integrable. In order to apply Picard's method, the slope function should be a polynomial; otherwise, the results of integration cannot, in general, be expressed through elementary functions. However, in many practical cases this obstacle can be bypassed by introducing auxiliary dependent variables and converting the given equation into a system of differential equations.
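This phenomenon is easy to reproduce with any non-polynomial slope function. As a purely hypothetical illustration (the equation \( \dot{x} = \cos x , \) x(0) = 1, is a stand-in, not the equation treated in this example), Mathematica returns the third Picard iterate unevaluated because the integrand has no elementary antiderivative:

(* hypothetical stand-in: Picard iterates for x' = Cos[x], x(0) = 1 *)
ClearAll[phi];
phi[0] = 1;
phi[m_] := phi[m] = 1 + Integrate[Cos[phi[m - 1] /. t -> s], {s, 0, t}];
phi[1]   (* 1 + t Cos[1] *)
phi[2]   (* still elementary: a sine of a linear argument *)
phi[3]   (* Integrate returns unevaluated: no elementary antiderivative *)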
To apply Picard's method, we convert (see section) the given initial value problem into a system of differential equations by introducing auxiliary dependent variables:
The second component, ϕ2, remains the same at every iteration. This component is also used to generate the first component, ϕ1. As such, every subsequent iteration will be the same, and the true solution becomes
where the dot stands for the derivative with respect to the time variable t. We integrate the pendulum equation above twice and use the initial conditions provided. We may try to apply Picard's iteration scheme directly; however, already the second term ϕ2(t) is impossible to determine explicitly, even for such a powerful computer algebra system as Mathematica. We can bypass this obstacle by introducing auxiliary variables:
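For instance, assuming the standard pendulum equation \( \ddot{\theta} = - \sin\theta , \) adjoining the auxiliary variables u = sin θ and v = cos θ closes the system with polynomial right-hand sides, so every Picard iterate requires only polynomial integrations. A minimal sketch, in which the symbol names and the initial data θ(0) = a, θ'(0) = 0 are ours:

(* pendulum theta'' = -Sin[theta] recast as a polynomial system *)
polySys = {th'[t] == w[t], w'[t] == -u[t],
           u'[t] == w[t] v[t], v'[t] == -w[t] u[t]};
ics = {th[0] == a, w[0] == 0, u[0] == Sin[a], v[0] == Cos[a]};
(* the Maclaurin polynomial that Picard's iterates reproduce: *)
AsymptoticDSolveValue[Join[polySys, ics], th[t], {t, 0, 4}]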
To start the iteration, we have to choose the initial value \( {\bf \phi}_0 = {\bf x}_0 ; \) otherwise, the integral equation will not be equivalent to the given initial value problem. Then the next iteration is
E. Picard proved that the sequence of n-column vector functions ϕm(t) converges to the solution x(t) of the given IVP as m → ∞. From the explicit formula for ϕm(t), it follows that it is the m-th approximation of the exponential matrix function:
As we see, the single-equation iteration converges faster than the previous one. We can compare our single-equation iteration with Picard's scheme for the original system of first order equations:
Starting from the single differential equation \( \ddot{z} + 11\,\dot{z} + 10\, z =0, \) we transfer it to a system of first order equations using the standard substitution:
\[
\dot{z} = u, \qquad \dot{u} = -11\, u -10\, z .
\]
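For this linear system, the Picard iterates can be generated mechanically, and they coincide with the partial sums of the exponential series applied to the initial vector. A short sketch, where the initial vector {1, 0} is an illustrative assumption:

(* Picard iterates for {z, u}' = A . {z, u} match the truncated series of
   MatrixExp[A t] . x0 *)
A = {{0, 1}, {-10, -11}};
x0 = {1, 0};
iterate[phi_] := x0 + Integrate[A . (phi /. t -> s), {s, 0, t}];
Nest[iterate, x0, 4] // Expand
(* partial sum of the exponential series, for comparison: *)
Sum[MatrixPower[A, k] t^k/k!, {k, 0, 4}] . x0 // Expand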
Example 6:
The objective of this example is to demonstrate that Picard's method generates noise terms along with true approximations when applied to a differential equation with variable coefficients. These noise terms are eliminated by the next iteration, but new, less significant noise terms appear.
We consider the initial value problem
\[
y'' + 2x\,y' - 2\,y = 0 , \qquad y(0) = 1, \quad y' (0) = -1 ,
\tag{6.1}
\]
whose solution can be expressed in closed form:
\[
y(x) = e^{-x^2} + \sqrt{\pi}\, x\, \mbox{erf}(x) - x ,
\]
where erf(x) = \( \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \,{\text d}t \) is the error function.
Before applying Picard's iteration scheme, we find the Maclaurin approximation of the solution. We need this expansion to check the quality of Picard's iterations. This is where Mathematica is extremely helpful; you have two options: use either the Series or the AsymptoticDSolveValue command. We apply the latter:
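A minimal call, assuming the initial value problem (6.1) as stated above:

(* Maclaurin approximation of the solution of (6.1) through order 8 *)
AsymptoticDSolveValue[{y''[x] + 2 x y'[x] - 2 y[x] == 0, y[0] == 1, y'[0] == -1},
  y[x], {x, 0, 8}]
(* 1 - x + x^2 - x^4/6 + x^6/30 - x^8/168 *)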
The given differential equation has a Maclaurin power series solution that converges everywhere in ℂ because the coefficients of the equation are polynomials, hence entire functions:
\[
y(x) = 1 - x + \sum_{n\ge 2} c_n x^n .
\]
The linear term 1 − x is chosen to satisfy the initial conditions, and the coefficients cn are to be determined from the equation. Its derivatives are
\[
y' (x) = -1 + \sum_{n\ge 2} n\, c_n x^{n-1} , \qquad y'' (x) = \sum_{n\ge 2} n(n-1)\, c_n x^{n-2} .
\]
First, we transfer the given problem into a system of first order differential equations by introducing new variables: y1 = y(x) and y2 = y′(x), its derivative. Then we get
\[
y'_1 = y_2 , \qquad y'_2 = 2\, y_1 - 2x\, y_2 , \qquad y_1 (0) = 1 , \quad y_2 (0) = -1 .
\tag{6.4}
\]
The third approximation ϕ3 contains noise terms of order four. They will be eliminated by subsequent iterations, but new ones of higher degree appear at every iterative step. This is typical for Picard's iterations when the given differential equation has variable coefficients; noise terms do not appear for autonomous differential equations.
Now we calculate the first few iterations directly from the second order differential equation, without converting it to a system of first order equations. Integrating both sides of the given differential equation twice, we obtain
\[
\phi_{(m+1)} (x) = 1-x + \int_0^x \left( x- s \right) \left[ 2\,\phi_{(m)} (s) - 2s\,\phi'_{(m)} (s) \right] {\text d}s , \qquad m=0,1,2,\ldots .
\]
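This scheme is straightforward to run in Mathematica; the sketch below memoizes each iterate as a polynomial in x (the name phi is ours):

(* direct Picard iteration for the second order scheme above *)
ClearAll[phi];
phi[0] = 1 - x;
phi[m_] := phi[m] = Expand[1 - x +
    Integrate[(x - s) (2 (phi[m - 1] /. x -> s) -
        2 s (D[phi[m - 1], x] /. x -> s)), {s, 0, x}]];
phi[3]   (* compare with the Maclaurin expansion found above *)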
We can simplify this scheme further by eliminating the derivative of the unknown function under the integral sign. Integrating the term containing the derivative of ϕ(m) by parts, we reduce Picard's scheme to the following:
\[
\phi_{(m+1)} (x) = 1 - x + \int_0^x \left( 4x - 6s \right) \phi_{(m)} (s)\, {\text d}s , \qquad m = 0, 1, 2, \ldots .
\]
Note that Picard's iteration applied to the given second order differential equation (6.1) provides a better and more accurate approximation than the one obtained from the first order system (6.4). However, all these Picard approximations contain noise terms, caused by the variable coefficients of the differential equation. Recall that Picard's approximations for autonomous equations do not contain noise terms and provide direct approximations to the true solution.
Now we plot the true solution along with the third and sixth Picard approximations.
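A sketch of such a plot, reusing the simplified iteration scheme above together with the closed-form solution in terms of erf:

(* plot the closed-form solution against the third and sixth iterates *)
ClearAll[phi, exact];
exact[x_] := Exp[-x^2] + Sqrt[Pi] x Erf[x] - x;
phi[0] = 1 - x;
phi[m_] := phi[m] = Expand[1 - x +
    Integrate[(4 x - 6 s) (phi[m - 1] /. x -> s), {s, 0, x}]];
Plot[{exact[x], phi[3], phi[6]}, {x, 0, 2},
  PlotStyle -> {Black, Dashed, Dotted},
  PlotLegends -> {"exact", "\[Phi]3", "\[Phi]6"}]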
where d = 1 is the initial displacement and v = −1 is the initial velocity. For simplicity, we choose the following numerical values of the parameters: η = 3 and ε = 1. First, we convert the given problem into a system of first order differential equations. Of course, we delegate all these manipulations to Mathematica. To check the quality of Picard's iteration scheme, we compare the expansions obtained with the correct Maclaurin expansion:
The German mathematician Carl Gustav Jacob Jacobi (1804--1851) came from a Jewish family, but he was given the French-style name Jacques Simon at birth. His father, Simon Jacobi, was a banker, and the family was prosperous. Carl was the second son; the eldest, Moritz Jacobi, eventually became a famous physicist. In 1825, Carl obtained the degree of Doctor of Philosophy with a dissertation on the partial fraction decomposition of rational fractions, defended before a commission led by Enno Dirksen. He followed immediately with his Habilitation and, at the same time, converted to Christianity. Now qualified to teach university classes, the 21-year-old Jacobi lectured in 1825/26 on the theory of curves and surfaces at the University of Berlin (Prussia).
In 1827 he became a professor and in 1829, a tenured professor of mathematics at Königsberg University, and held the chair until 1842.
Jacobi was the first Jewish mathematician to be appointed professor at a German university.
Jacobi suffered a breakdown from overwork in 1843. He then visited Italy for a few months to regain his health.
On his return he moved to Berlin, where he lived as a royal pensioner until his death. One of Jacobi's greatest accomplishments was his theory of elliptic functions and their relation to the elliptic theta function.
The basic Jacobi elliptic functions are defined as the solution of the initial value problem
\[
\dot{x} = y\,z , \qquad \dot{y} = -x\,z , \qquad \dot{z} = -k^2 x\,y , \qquad x(0) = 0 , \quad y(0) = z(0) = 1 ,
\]
where dots stand for derivatives with respect to t, and k denotes a positive constant not exceeding 1. The parameter k is known as the modulus; its complementary modulus is \( \kappa = \sqrt{1-k^2} . \) The solutions of the initial value problem above are usually called the basic Jacobi elliptic functions and are denoted by
\[
x(t) = \mbox{sn}(t, k) , \qquad y(t) = \mbox{cn}(t, k) , \qquad z(t) = \mbox{dn}(t, k) .
\]
These functions were introduced by Jacobi in 1829. They are found in the description of the motion of a pendulum (see also pendulum (mathematics)), as well as in the design of electronic elliptic filters.
If we proceed successively, we find that the coefficients of the various powers of t ultimately become stable, i.e., they remain unchanged in successive iterations. This leads to the representation of the solutions as convergent power series, usually denoted by sn t (or sn(t, m) for m = k²), cn t, and dn t, and called the sine amplitude, cosine amplitude, and delta amplitude, respectively.
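The stabilization of the low-order coefficients is easy to watch in Mathematica. Below is a minimal sketch of Picard's iteration for the system above, compared against the built-in JacobiSN, whose second argument is the parameter m = k²:

(* Picard iteration for {sn, cn, dn}; low-order coefficients stabilize *)
iterate[{sn_, cn_, dn_}] := {
    Integrate[(cn /. t -> s) (dn /. t -> s), {s, 0, t}],
    1 - Integrate[(sn /. t -> s) (dn /. t -> s), {s, 0, t}],
    1 - k^2 Integrate[(sn /. t -> s) (cn /. t -> s), {s, 0, t}]};
Expand[Nest[iterate, {0, 1, 1}, 4][[1]]]   (* iterate for sn *)
Normal[Series[JacobiSN[t, k^2], {t, 0, 5}]]   (* built-in, for comparison *)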