Preface


This section addresses the problem of determining whether an initial value problem admits a singular solution, that is, whether its solution is unique.


Uniqueness of solutions of ODEs


We start with autonomous equations.

Theorem: Consider the first order differential equation \( {\text d}y/{\text d}x = f(y) , \) where f is continuous in a neighborhood of y0. Assume that the solutions through y0 are not unique, that is, assume that there are two solutions φ1 and φ2 such that φ1(0) = φ2(0) = y0 and that φ1 and φ2 differ in every neighborhood of 0. Then f(y0) = 0.

 

Theorem: Let f(y) be a continuous function on the closed interval [a,b] that has exactly one zero \( y^{\ast} \in (a,b) , \) namely, \( f(y^{\ast} ) =0 \) and \( f(y) \ne 0 \) for all other points \( y \in (a,b) . \) If the integral
\[ \int_y^{y^{\ast}} \frac{{\text d}s}{f(s)} \tag{1} \]
diverges, then the initial value problem for the autonomous differential equation
\[ y' = f(y) , \qquad y(x_0 ) = y^{\ast} \]
has the unique solution \( y (x) \equiv y^{\ast} . \) If the integral converges, then the initial value problem has multiple solutions.    ⧫
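The divergence test in this theorem is easy to probe numerically. The following Python sketch (Python is used here only for a quick check; the tutorial's own code is Mathematica) approximates integral (1) for two slope functions with \( y^{\ast} = 0 : \) for f(y) = 2√y the truncated integrals stabilize near 1 (convergence, hence multiple solutions), while for f(y) = y they grow like −ln ε (divergence, hence uniqueness).

```python
import math

def tail_integral(f, eps, y0=1.0, n=100_000):
    """Midpoint approximation of the integral of ds/f(s) over [eps, y0]."""
    h = (y0 - eps) / n
    return sum(h / f(eps + (i + 0.5) * h) for i in range(n))

# f(y) = 2 sqrt(y): the integral converges (exact value 1 - sqrt(eps)),
# so y' = 2 sqrt(y), y(x0) = 0 has more than one solution.
vals_sqrt = [tail_integral(lambda s: 2.0 * math.sqrt(s), 10.0**-k) for k in (2, 4, 6)]

# f(y) = y: the integral diverges like -ln(eps),
# so y' = y, y(x0) = 0 has only the trivial solution y = 0.
vals_lin = [tail_integral(lambda s: s, 10.0**-k) for k in (2, 4, 6)]

print(vals_sqrt)   # stabilizes near 1
print(vals_lin)    # keeps growing as eps -> 0
```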

 

Theorem: Suppose that f(x,y) is uniformly Lipschitz continuous in y (meaning the Lipschitz constant L in the inequality \( |f(x,y_1 ) - f(x, y_2 )| \le L\,|y_1 - y_2 | \) can be taken independent of x) and continuous in x. Then, for some positive value δ there exists a unique solution \( y = \phi (x) \) to the initial value problem
\[ y' = f(x,y) , \qquad y(x_0 ) = y_0 \tag{2} \]
on the interval \( \left[ x_0 -\delta , x_0 + \delta \right] . \)    ⧫
Proof: Suppose that the initial value problem \( y' = f(x,y), \quad y(x_0 ) = y_0 , \) with a Lipschitz slope function f(x,y), has two solutions y1 and y2. Let
\[ u(x) = \int_{x_0}^x \left\vert y_1 (s) - y_2 (s) \right\vert {\text d} s \ge 0 \quad \mbox{for} \quad x\ge x_0 . \]
Since these functions are solutions of the same differential equation, they also satisfy the equivalent integral equation
\[ y(x) = y_0 + \int_{x_0}^x f(s, y(s))\,{\text d}s . \]
Subtracting one integral equation from the other and applying the Lipschitz condition, we obtain
\[ \left\vert y_1 (x) - y_2 (x) \right\vert \le L \int_{x_0}^x \left\vert y_1 (s) - y_2 (s) \right\vert {\text d}s , \]
which we rewrite as \( u' (x) \le L\, u(x), \) or equivalently \( u' (x) - L\, u(x) \le 0 \) (note that \( u' (x) = |y_1 (x) - y_2 (x)| \)). Following Grönwall's argument, we multiply by the positive factor \( e^{-L |x - x_0 |} : \)
\[ \left[ u' (x) - L\, u(x) \right] e^{-L |x - x_0 |} \le 0. \]
However, the last inequality says that
\[ \frac{\text d}{{\text d}x} \left[ u(x) \,e^{-L |x - x_0 |} \right] \le 0 \qquad\mbox{for} \quad x\ge x_0 . \]
Integrating with respect to x yields
\[ u(x) \,e^{-L |x - x_0 |} - u(x_0 ) = \int_{x_0}^x \frac{\text d}{{\text d}s} \left[ u(s) \,e^{-L |s - x_0 |} \right] {\text d}s \le 0 . \]
Using u(x0) = 0, we get
\[ u(x) \,e^{-L |x - x_0 |} \le 0 \qquad\mbox{for} \quad x\ge x_0 . \]
Therefore \( u(x) \le 0 . \) Combined with \( u(x) \ge 0 , \) this leads to the conclusion that \( u(x) \equiv 0 , \) and hence \( y_1 (x) = y_2 (x) , \) for all \( x \ge x_0 . \)

A similar argument shows that \( u(x) \equiv 0 \) for all \( x \le x_0 . \)    ⧫
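The contraction underlying this proof is the same one that makes Picard's successive approximations converge. As a small illustration (a Python sketch, separate from the tutorial's Mathematica code), iterating \( \phi_{k+1}(x) = y_0 + \int_{x_0}^x f(s, \phi_k (s))\,{\text d}s \) for the Lipschitz slope f(x, y) = y reproduces the unique solution y = e^x:

```python
import math

# Picard iteration phi_{k+1}(x) = y0 + integral from x0 to x of f(s, phi_k(s)) ds,
# here for the Lipschitz slope f(x, y) = y with y(0) = 1 on [0, 0.5];
# the iterates converge to the unique solution y = e^x.
def picard(f, y0, x_end, n_pts=1001, iters=20):
    h = x_end / (n_pts - 1)
    xs = [i * h for i in range(n_pts)]
    phi = [y0] * n_pts                       # phi_0 is the constant y0
    for _ in range(iters):
        new = [y0]
        for i in range(1, n_pts):            # cumulative trapezoidal integral
            new.append(new[-1] + 0.5 * h * (f(xs[i - 1], phi[i - 1]) + f(xs[i], phi[i])))
        phi = new
    return xs, phi

xs, phi = picard(lambda s, y: y, 1.0, 0.5)
err = max(abs(p - math.exp(x)) for x, p in zip(xs, phi))
print(err)   # tiny: the iterates have converged to e^x
```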

Theorem: The initial value problem \[ {\text d}y/{\text d}x = f(x,y), \qquad y\left( x_0 \right) = y_0 \] has a unique solution on some interval containing x0 if the slope function f(x, y) is continuous in both x and y on some rectangle R centered at (x0, y0) and nonincreasing in y for each fixed x.    ⧫
Theorem: Let R be the region defined by the inequalities \( 0 \le x- x_0 < a, \ |s_k - y_k | < b_k , \quad k=0,1,2,\ldots , n-1 , \) where yk ≥ 0 for k > 0. Suppose the function \( f(x, s_0 , s_1 , \ldots , s_{n-1} ) \) in the initial value problem
\[ y^{(n)} = f\left( x,y,y' , \ldots , y^{(n-1)} \right) , \qquad y^{(k)} (0) = y_k , \ k=0,1,\ldots , n-1; \tag{3} \]
is nonnegative, continuous, and nondecreasing in x, and continuous and nondecreasing in sk for each k = 0,1,...,n-1 in the region R. If in addition,
\[ f\left( x, y_0 , y_1 , \ldots , y_{n-1} \right) \ne 0 \qquad \mbox{in $R$ for} \quad x > x_0 , \]
then the above initial value problem has at most one solution in R.    ⧫
Example: Consider the initial value problem
\[ y' = 2\,\sqrt{y}, \quad y(0)=0 , \]
where the slope function \( f(y) = 2\,\sqrt{y} \) is continuous on the infinite interval \( [0, \infty ) \) but not Lipschitz continuous near y = 0. According to Peano's theorem, this initial value problem has a solution. Indeed, we can apply Picard's iteration procedure to obtain the solution \( y(x) \equiv 0 . \) On the other hand, since the given differential equation is autonomous, we can separate variables and integrate:
\[ \frac{{\text d}y}{2\,\sqrt{y}} = {\text d} x \qquad \Longrightarrow \qquad \sqrt{y} = x-C , \]
where C is an arbitrary constant. Since we consider only the nonnegative branch of the square root function, the above formula is valid only when \( x \ge C . \) Therefore, we get a family of solutions (also called the general solution) depending on the parameter C:
\[ y = \begin{cases} \left( x-C \right)^2 , & \qquad x \ge C , \\ 0 , & \quad \mbox{for } x <C . \end{cases} \]
Using Mathematica, we plot some solutions
[Figure: the singular solution y = 0 (black) together with members of the family y = (x − C)² (red).]
q[x_, CC_] = Piecewise[{{(x - CC)^2, x >= CC}, {0, x < CC}}];
(* the singular solution y = 0, drawn as a thick black line *)
q2 = Plot[0, {x, -3.5, 3.5}, PlotStyle -> {Thick, Black}];
(* one red curve of the family for a given value of the constant CC *)
graph4[CC_] := Plot[Evaluate[q[x, CC]], {x, -3.5, 3.5}, AxesLabel -> {x, y},
  PlotRange -> {{-3.5, 3.5}, {-0.5, 6}}, AspectRatio -> 1,
  PlotStyle -> RGBColor[1, 0, 0]];
initlist = {0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, -1, -2, -3, -4};
graphlist = graph4 /@ initlist;   (* one curve per value of the constant *)
solgraph = Show[q2, graphlist]

In this sequence of commands, we first enter the family of solutions to the differential equation. Since the symbol C is reserved in Mathematica, we use CC instead. Then we define a subroutine for plotting a single solution and apply it across a list of values of the constant C. Finally, we display all the graphs together.

We can also check that the given initial value problem has multiple solutions by evaluating the integral (1):

\[ \int_y^0 \frac{{\text d}s}{2\,\sqrt{s}} = -\sqrt{y} , \]

which converges (it is finite for every y > 0).

On the other hand, the initial value problem

\[ y' = 4x\,\sqrt{y}, \quad y(0)=0 , \]
has at least two solutions, y = 0 and y = x⁴. Actually, the IVP has infinitely many solutions: for arbitrary constants \( a, b \ge 0 , \)
\[ y = \begin{cases} \left( x^2 - a^2 \right)^2 , & \ \mbox{ for} \quad x \ge a , \\ 0, & \ \mbox{ for} \quad -b \le x \le a , \\ \left( x^2 - b^2 \right)^2 , & \ \mbox{ for} \quad x \le -b. \end{cases} \]
Each such function is everywhere continuous and differentiable and satisfies all conditions of the given initial value problem.
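Such piecewise solutions are easy to sanity-check numerically. The following Python sketch (with the sample choice a = 1, b = 2, our own illustration) compares a central finite difference of the piecewise function glued from (x² − a²)², 0, and (x² − b²)² against the slope 4x√y:

```python
import math

# Piecewise solution of y' = 4 x sqrt(y), y(0) = 0, glued from
# (x^2 - a^2)^2 for x >= a, 0 in the middle, and (x^2 - b^2)^2 for x <= -b,
# with the sample constants a = 1, b = 2.
def y(x, a=1.0, b=2.0):
    if x >= a:
        return (x * x - a * a) ** 2
    if x <= -b:
        return (x * x - b * b) ** 2
    return 0.0

h = 1e-6
for x in (-3.0, -2.5, -1.0, 0.0, 0.5, 1.5, 2.0, 3.0):
    dydx = (y(x + h) - y(x - h)) / (2.0 * h)   # central difference
    assert abs(dydx - 4.0 * x * math.sqrt(y(x))) < 1e-4
print("ok")
```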

The initial value problem

\[ y' = y^{2/3}, \quad y(0)=0 , \]
has infinitely many solutions
\[ y = \begin{cases} \frac{1}{27}\left( x - a \right)^3 , & \ \mbox{ for} \quad x \ge a , \\ 0, & \ \mbox{ for} \quad b \le x \le a , \\ \frac{1}{27} \left( x - b \right)^3 , & \ \mbox{ for} \quad x \le b , \end{cases} \]
where \( b \le 0 \le a \) are arbitrary constants, in addition to the trivial solution.

A family of functions

\[ y = c^2 - \sqrt{x^4 + c^4} \]
solves the initial value problem
\[ y' = \begin{cases} \frac{4x^3 y}{x^4 + y^2} , & \ \mbox{ if} \quad x \ne 0 , \\ 0, & \ \mbox{ for} \quad x=0, \ y=0, \end{cases} \qquad y(0) = 0 , \]
for any constant c. The slope function does not satisfy the Lipschitz condition in any neighborhood of the origin.    ■
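Writing the family as y = c² − √(x⁴ + c⁴) (note the fourth power of c, which makes y(0) = 0), the claim is easy to verify numerically; the following Python sketch checks the equation at a few sample points for c = 1.5:

```python
import math

# The family y = c^2 - sqrt(x^4 + c^4) satisfies y' = 4 x^3 y / (x^4 + y^2)
# for x != 0; we verify with a central finite difference at c = 1.5.
def y(x, c):
    return c * c - math.sqrt(x**4 + c**4)

h, c = 1e-6, 1.5
for x in (-2.0, -0.7, 0.3, 1.0, 2.5):
    dydx = (y(x + h, c) - y(x - h, c)) / (2.0 * h)
    rhs = 4.0 * x**3 * y(x, c) / (x**4 + y(x, c)**2)
    assert abs(dydx - rhs) < 1e-5
print("ok")
```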
Example: The initial value problem
\[ y' = 3x\,y^{1/3}, \quad y(0)= a , \]
has a unique solution whenever 𝑎 ≠ 0. Indeed, the partial derivative \( \partial f/\partial y = x\,y^{-2/3} \) is continuous at all points of the xy-plane except on the x-axis (y = 0). When 𝑎 ≠ 0, we can certainly draw a rectangle containing (0, 𝑎) that does not intersect the x-axis. In any such rectangle the hypotheses of the existence and uniqueness theorem are satisfied, and therefore the initial value problem does indeed have a unique solution.

However, when 𝑎 = 0, the conditions of the uniqueness theorem do not hold. Since this theorem provides only sufficient conditions, we need to verify directly that the given problem has more than one solution. One of them is the trivial one, y = 0, and another is y = x³ (indeed, \( y' = 3x^2 = 3x \left( x^3 \right)^{1/3} \)).    ▣

Example: Consider the initial value problem
\[ y^{(n)} = \left( y^{(n-1)} \right)^{2/3} , \qquad y^{(k)} (0) = 0, \quad k=0,1,\ldots , n-1 . \]
It has a trivial solution y(x) = 0 and another one
\[ y(x) = \frac{2\, x^{n+2}}{9 \left( n+2 \right)!} , \]
which can be obtained by the substitution \( u = y^{(n-1)} : \) the equation \( u' = u^{2/3} \) has the nontrivial solution \( u = (x/3)^3 , \) and integrating n − 1 times recovers y(x).   ■
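With the factorial in place, the claim can be checked without any symbolic machinery, because differentiating the monomial simply shifts the factorial: \( y^{(k)} (x) = 2\,x^{n+2-k} / \left( 9 (n+2-k)! \right) . \) A quick Python sketch:

```python
import math

# For y(x) = 2 x^{n+2} / (9 (n+2)!)  (note the factorial),
# the k-th derivative is y^{(k)}(x) = 2 x^{n+2-k} / (9 (n+2-k)!).
def deriv(n, k, x):
    m = n + 2 - k
    return 2.0 * x**m / (9.0 * math.factorial(m))

for n in (1, 2, 3, 5):
    for x in (0.5, 1.0, 2.0):
        lhs = deriv(n, n, x)                      # y^{(n)} = x^2 / 9
        rhs = deriv(n, n - 1, x) ** (2.0 / 3.0)   # (y^{(n-1)})^{2/3} = (x^3/27)^{2/3}
        assert abs(lhs - rhs) < 1e-12
print("ok")
```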
Example: Consider the initial value problem
\[ y^{(n)} = \left( y^{(n-1)} \right)^{2/3} \exp \left\{ y^{(n-1)} \right\} , \qquad y^{(k)} (0) = 0, \quad k=0,1,\ldots , n-1 . \]
It has again a trivial solution y(x) = 0. Let \( u = y^{(n-1)} , \) then u(x) is defined from the equation
\[ x = \int_0^u t^{-2/3} e^{-t}\,{\text d}t . \]
As \( u \to \infty , \quad x\to r_0 = \int_0^{\infty} t^{-2/3} e^{-t}\,{\text d}t . \) Then another solution is given by
\[ y(x) = \int_0^x \frac{\left( x - t \right)^{n-2}}{(n-2)!} \, u(t)\,{\text d}t \qquad (n \ge 2), \]
defined on 0 ≤ x < r0 < ∞.

The function y(x) can be multiplied by an arbitrary continuous, nonnegative, nondecreasing function of x to give another example.   ■
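The threshold value r0 is in fact the Euler integral Γ(1/3) ≈ 2.679. A quick numerical confirmation in Python (the substitution t = s³ turns the integrand into 3e^{−s³} and removes the integrable singularity at t = 0):

```python
import math

# r0 = integral of t^{-2/3} e^{-t} dt over [0, infinity) = Gamma(1/3).
# Substituting t = s^3 gives t^{-2/3} e^{-t} dt = 3 e^{-s^3} ds.
def r0_numeric(s_max=4.0, n=200_000):
    h = s_max / n
    return sum(3.0 * h * math.exp(-((i + 0.5) * h) ** 3) for i in range(n))

print(r0_numeric(), math.gamma(1.0 / 3.0))   # both near 2.6789
```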

It should be noted that the Lipschitz condition is sufficient but not necessary for uniqueness. An example in which a unique solution exists even though the Lipschitz condition is not satisfied is given by the initial value problem

\[ y' = 1 + (4/3) \,y^{1/3} , \qquad y(0) = 0. \]
The slope function clearly violates the Lipschitz condition in every domain that includes the origin, since \( \partial f/\partial y = \frac{4}{9}\, y^{-2/3} \) is unbounded near y = 0. Nevertheless, the above initial value problem has a unique solution; note that f(0) = 1 ≠ 0, so the nonuniqueness mechanism of the theorem above (a zero of the slope function with a convergent integral) cannot occur.
Example: Following Dhar, we consider one-dimensional motion modeled by Newton's equation
\[ \ddot{x} = f(x) = - \frac{{\text d}\Pi}{{\text d}x} , \]
where \( \ddot{x} = {\text d}^2 x/{\text d}t^2 \) is the acceleration of a unit point mass. It is always assumed that the potential Π(x) and the force f(x) = - dΠ(x)/dx are continuous functions of x.

We consider the case when the potential function Π(x) has a single singularity at the origin, in the sense that \( \Pi'' (x) \) or one of the higher derivatives does not exist there. So we consider the initial value problem

\[ \ddot{x} = f(x) = - \frac{{\text d}\Pi}{{\text d}x} , \qquad x(0) =0, \quad \dot{x}(0) = v_0 . \]
Multiplying both sides of the differential equation by the velocity \( \dot{x} \) and integrating, we obtain
\[ \frac{\text d}{{\text d}t} \left( \frac{1}{2} \,\dot{x}^2 + \Pi (x) \right) = 0 \qquad \Longrightarrow \qquad \frac{{\text d}x}{{\text d}t} = \pm \sqrt{2 \left( E - \Pi (x) \right)} = g(x) , \]
where \( E = v_0^2 /2 \) is the total energy of the system, a constant whose value follows from Π(0) = 0. The sign of the square root depends on the sign of the initial velocity v0.

The first order differential equation \( \dot{x} = g(x) \) is equivalent to Newton's equation \( \ddot{x} = f(x) \) only if the velocity is not zero because we multiplied by \( \dot{x} . \) Therefore, we need to consider two cases depending on whether the initial velocity is zero or not.

  1. v0 ≠ 0. Then E ≠ 0 and the reciprocal 1/g(x) is finite and continuous in an interval containing the origin. Therefore, the first order differential equation \( \dot{x} = g(x) \) can be integrated to give the unique solution \( t = \int_0^x {\text d}\xi/g(\xi ) . \) So the second order equation of motion \( \ddot{x} = f(x) \) has a unique solution irrespective of the type of singularity that Π(x) may have.
  2. v0 = 0. Then E = 0 and the first order equation becomes
    \[ \frac{{\text d}x}{{\text d}t} = \left[ - 2\,\Pi (x) \right]^{1/2} . \]
    It has a trivial solution x(t) ≡ 0 for all t, and two more solutions, which will exist provided the improper integrals
    \[ t = J_{\pm} (x) \equiv \int_0^{\pm x} \frac{{\text d}\xi}{\left[ - 2\, \Pi (\xi ) \right]^{1/2}} \]
    are real and finite. When these integrals converge, the second order differential equation of motion has additional solutions, provided that f(0) = 0.

Summarizing, the initial value problem for Newton's equation of motion has a unique solution if and only if either v0 ≠ 0, or v0 = 0 and either f(0) ≠ 0 or both integrals J±(x) diverge.   ■
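As a concrete illustration (our own choice of potential, in the spirit of Dhar's examples), take Π(x) = −|x|^{4/3}. Then f(x) = (4/3)|x|^{1/3} sgn x is continuous with f(0) = 0, and the integrals J±(x) converge, so besides the rest state x(t) ≡ 0 the particle admits the nontrivial escape x(t) = (√2 t/3)³. A Python sketch verifies this solution against the equation of motion:

```python
import math

# Model potential Pi(x) = -|x|^(4/3): f(x) = (4/3)|x|^(1/3) sgn(x) is
# continuous with f(0) = 0, and J converges. Besides x(t) = 0, the IVP
# x'' = f(x), x(0) = 0, x'(0) = 0 has the nontrivial solution below.
def x(t):
    return (math.sqrt(2.0) * t / 3.0) ** 3

h = 1e-4
for t in (0.5, 1.0, 2.0, 3.0):
    acc = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2   # second central difference
    assert abs(acc - (4.0 / 3.0) * x(t) ** (1.0 / 3.0)) < 1e-6
print("ok")
```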

Review of Definitions of Various Means

For a given finite real number α, the α-th power mean mα of positive scalars 𝑎 and b is defined as

\[ m_{\alpha} = \left( \frac{a^{\alpha} + b^{\alpha}}{2} \right)^{1/\alpha} . \]
  1. When α = -1,
    \begin{equation} m_{-1} = \frac{2\,a\,b}{a+b} \qquad (\mbox{Harmonic mean}), \label{harmonic} \end{equation}
  2. when α = ½,
    \begin{equation} m_{1/2} = \left( \frac{\sqrt{a} + \sqrt{b}}{2} \right)^2 . \end{equation}
  3. when α = 1,
    \begin{equation} m_{1} = \frac{a+b}{2} \qquad (\mbox{Arithmetic mean}), \label{arithmetic} \end{equation}
  4. when α → 0,
    \begin{equation} m_{0} = \lim_{\alpha \to 0} m_{\alpha} = \sqrt{a\cdot b} \qquad (\mbox{Geometric mean}), \label{geometric} \end{equation}
  5. Heronian mean:
    \begin{equation} m = \frac{a + \sqrt{a\,b} + b}{3} \qquad (\mbox{Heronian mean}) . \label{heronian} \end{equation}
  6. Logarithmic mean:
    \begin{equation} L = \frac{a - b}{\ln a - \ln b} \qquad (\mbox{Logarithmic mean}) . \label{logarithmic} \end{equation}
  7. Contra-harmonic mean:
    \begin{equation} C = \frac{a^2 + b^2}{a + b} \qquad (\mbox{Contra-harmonic mean}) . \label{contraharmonic} \end{equation}
  8. Centroidal mean:
    \begin{equation} T = \frac{2 \left( a^2 + ab + b^2 \right)}{3 \left( a + b \right)} \qquad (\mbox{Centroidal mean}) . \label{centroidal} \end{equation}
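These means are easy to compare side by side. The following Python sketch evaluates all of them for the sample pair a = 1, b = 2, for which the values line up in increasing order H ≤ G ≤ L ≤ m₁∕₂ ≤ Heronian ≤ A ≤ centroidal ≤ contra-harmonic:

```python
import math

def means(a, b):
    """Return the means discussed above for positive a != b, as one tuple."""
    H = 2.0 * a * b / (a + b)                              # harmonic, m_{-1}
    G = math.sqrt(a * b)                                   # geometric, m_0
    L = (a - b) / (math.log(a) - math.log(b))              # logarithmic
    m_half = ((math.sqrt(a) + math.sqrt(b)) / 2.0) ** 2    # m_{1/2}
    N = (a + math.sqrt(a * b) + b) / 3.0                   # Heronian
    A = (a + b) / 2.0                                      # arithmetic, m_1
    T = 2.0 * (a * a + a * b + b * b) / (3.0 * (a + b))    # centroidal
    C = (a * a + b * b) / (a + b)                          # contra-harmonic
    return H, G, L, m_half, N, A, T, C

vals = means(1.0, 2.0)
print([round(v, 4) for v in vals])
assert all(vals[i] <= vals[i + 1] for i in range(len(vals) - 1))
```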
  1. Dhar, A., Nonuniqueness in the solutions of Newton's equation of motion, American Journal of Physics, 1993, Vol. 61, No. 1, pp. 58--61; doi: 10.1119/1.17411
  2. Hales, A.W. and Sells, G.R., Multiple solutions of a differential equation, The American Mathematical Monthly, 1966, Vol. 73, No. 6, pp. 672--673.
  3. Petrovski, I.G., Ordinary Differential Equations, Dover, NY, 1973.
  4. Wend, D.V.V., Uniqueness of solutions of ordinary differential equations, The American Mathematical Monthly, 1967, Vol. 74, No. 8, pp. 948--950.

 
