where f(x, y) is a holomorphic function of two variables, and y' = dy/dx is the derivative of the unknown function y(x). Recall that an infinitely differentiable function is holomorphic if it is locally represented by its own Taylor series (with a positive radius of convergence). According to the famous Cauchy--Kovalevskaya theorem, the solution to the equation \eqref{EqCK} is also an analytic function, and it is the unique function satisfying the initial condition y(x0) = y0.
Linear Equations
We start with a linear differential equation of the first order
Recall that the product or ratio of two holomorphic functions is again a holomorphic function, provided that the denominator is never zero in some neighborhood of the fixed point.
Assuming that the leading coefficient 𝑎(x) ≠ 0 in some neighborhood of the center x0, we can divide by it to reduce the given differential equation \eqref{EqFirst.1} to the normalized form:
converge in some interval |x - x0| < r, r > 0.
Linear equations possess a remarkable property: the corresponding initial value problems have no singular solutions. Also, in a domain where the functions q(x) and g(x) are continuous, the corresponding IVP has a unique solution. In the case of holomorphic coefficients,
a solution to Eq.\eqref{EqFirst.2} has also a power series expansion
⊛
We illustrate this approach with the following example, in which we find a series solution \( y(x) = \sum_{n\ge 0} c_n x^n \) for the initial value problem
Consider an initial value problem for the first order Riccati differential equation:
\[
y'(x) = g(x) + f(y), \qquad y(x_0 ) = y_0 ,
\]
for polynomials g(x) and f(y) of degree at most two. Since a Riccati equation has no singular solutions, the corresponding initial value problem always has a unique solution.
Here x0 and y0 are given real numbers. Recall that a function y(x) is a solution of the given differential equation if it is continuous, has a derivative, and satisfies the differential equation identically.
Since the slope function for the Riccati equation is a polynomial, its solution is a holomorphic function in some neighborhood of the initial point where the initial condition is specified; so it is natural to seek it in the form of a power series:
It is natural to ask whether the above series converges, in what domain, and
how to find its coefficients. It is known that a power series converges within
a circle \( \left\vert x- x_0 \right\vert < R , \) for some nonnegative
number R. If R = 0, the power series diverges everywhere except at the center. The radius of
convergence is the distance to the nearest singular point of the sum function.
Generally speaking, it is very hard (or even impossible) to determine the
radius of convergence because you have to determine (or estimate) all coefficients
or at least to know how they grow. Since the solution function y(x) is unknown, our task is to find a way to determine the coefficients cn of the power series, which is usually called the Taylor series. These coefficients are obtained upon calculating the derivatives:
At first glance, it seems that Taylor's series is an ideal tool for solving
initial value problems because we need to know only infinitesimal behavior of
the solution y(x) at one point---the center of the series. In
principle, we can find all derivatives of the slope function at the initial
point. However, it requires differentiation of the composition function
f(x, y(x)) with respect to independent variable x,
for which the corresponding formula is
known---it is the famous Faà di Bruno's formula. Unfortunately, the number of terms
in this formula grows exponentially (except for some simple slope functions), and it becomes intractable. Another problem with the Taylor series representation is the determination of its radius of convergence. Although we know that it equals the distance to the nearest singular point, finding this point can be a very challenging problem. Moreover, nonlinear differential equations may have not only fixed singular points but also movable singular points.
Another option to find a Taylor series representation is to substitute the power series \( y(x) = \sum_{n\ge 0} c_n \left( x - x_0 \right)^n \) into the given equation \( y' = f(x,y) \) and equate the coefficients of like power terms.
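Termwise differentiation followed by an index shift puts the derivative back into powers of \( x - x_0 , \) which is the identity one uses when equating coefficients:

\[
y'(x) = \sum_{n\ge 1} n\, c_n \left( x - x_0 \right)^{n-1}
      = \sum_{n\ge 0} (n+1)\, c_{n+1} \left( x - x_0 \right)^{n} .
\]

Matching the coefficient of \( \left( x - x_0 \right)^n \) on both sides then expresses \( (n+1)\, c_{n+1} \) through the previously found coefficients \( c_0 , c_1 , \ldots , c_n . \)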
We demonstrate how the power series method works for solving initial value problems in the following batch of examples.
Example:
We start with the standard Riccati equation, whose solution was found in a previous section, and perform all operations in the hope that the calculations become transparent.
So we consider the initial value problem
We first explore the option of determining the Taylor coefficients in the solution \( y(x) = \sum_{n\ge 0} c_n \, x^n . \) The initial term in its power series is obviously zero to match the initial condition y(0) = 0. The first coefficient is
\[
c_1 = y' (0) = f(0,0) = 0^2 + 0^2 =0
\]
because the slope function is \( f(x,y) = x^2 + y^2 . \) Now we calculate the second coefficient:
Since initially the solution and its derivative are both zero, we get c4 = 0. The next coefficients are very hard to obtain manually, and the application of a computer algebra system is advisable. However, the complexity of the operations grows like a snowball.
Now we explore the second option and substitute the power series
\( y(x) = \sum_{n\ge 1} c_n \,x^n \) into the differential equation. Since its first derivative is
we see that the solution blows up at
x = 1.219762884 where the denominator is zero.
It is impossible to determine this singularity from the power series
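The coefficient recurrence just described is, however, easy to automate. Matching the coefficient of x^n on both sides of y' = x² + y² with y(0) = 0 gives (n+1)c_{n+1} = [n = 2] + Σ_{k=0}^{n} c_k c_{n−k}. The sketch below (Python with exact rational arithmetic; the function name `taylor_coeffs` is ours) reproduces the coefficients found above:

```python
from fractions import Fraction

def taylor_coeffs(n_max):
    """Maclaurin coefficients c_0, ..., c_{n_max} of the solution of
    y' = x^2 + y^2,  y(0) = 0.
    Matching x^n on both sides gives
    (n+1) c_{n+1} = [n == 2] + sum_{k=0}^{n} c_k c_{n-k}."""
    c = [Fraction(0)]                      # c_0 = y(0) = 0
    for n in range(n_max):
        conv = sum(c[k] * c[n - k] for k in range(n + 1))
        c.append((Fraction(1 if n == 2 else 0) + conv) / (n + 1))
    return c

c = taylor_coeffs(11)
# The only nonzero coefficients up to x^11 are
# c_3 = 1/3,  c_7 = 1/63,  c_11 = 2/2079, i.e.
# y = x^3/3 + x^7/63 + 2 x^11 / 2079 + ...
```

As the comment notes, the series is extremely sparse at first; this is consistent with the hand computation, but no finite number of terms reveals the singularity.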
True solution along with the polynomial approximation.
Mathematica code
We first apply the Taylor series method to determine the coefficients in the expression above. Assuming that the solution can be obtained as a Taylor series
This function blows up at x = 1.416439082. So we see that solutions under different initial conditions blow up at different points. This indicates that the solutions have a movable singularity whose location depends on the value of the initial condition.
■
Example:
We attack the same initial value problem
\( y' = 8\, x^2 + \frac{1}{2}\,y^2 , \quad y(0) =1 , \) using a different approach.
We make a Bernoulli substitution
\[
y(x) = u^r v^s
\]
and hope to be able to choose the functions u(x), v(x) and the parameters r, s so as to transform the given Riccati equation into two separable equations, one for the function u(x) alone and another one for v(x).
with some arbitrary constants C1, C2. As usual, Jν and Yν denote the Bessel functions of the first and second kind, respectively.
We find the Taylor series solution \( v(x) = \sum_{n\ge 0} c_n x^n \) of the differential equation \( v'' + 4\,x^2 v = 0 . \) Its substitution into the differential equation yields
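The resulting recurrence can be cross-checked with a short script. Matching the coefficient of x^m after the substitution gives (m+2)(m+1)c_{m+2} + 4c_{m−2} = 0 for m ≥ 2, together with c_2 = c_3 = 0, while c_0 = v(0) and c_1 = v'(0) remain free. A Python sketch with exact rationals (the function name `v_coeffs` is ours):

```python
from fractions import Fraction

def v_coeffs(c0, c1, n_max):
    """Maclaurin coefficients c_0, ..., c_{n_max} of a solution of
    v'' + 4 x^2 v = 0.
    Matching x^m:  (m+2)(m+1) c_{m+2} + 4 c_{m-2} = 0  for m >= 2,
    with c_2 = c_3 = 0;  c_0 = v(0) and c_1 = v'(0) are free."""
    c = [Fraction(c0), Fraction(c1), Fraction(0), Fraction(0)]
    for m in range(2, n_max - 1):
        c.append(Fraction(-4) * c[m - 2] / ((m + 2) * (m + 1)))
    return c[:n_max + 1]

even = v_coeffs(1, 0, 8)   # v = 1 - x^4/3 + x^8/42 - ...
odd = v_coeffs(0, 1, 9)    # v = x - x^5/5 + x^9/90 - ...
```

Only every fourth coefficient survives in each family, which matches the structure of the two independent solutions generated by c_0 and c_1.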
where c0 = v(0) and c1 = v'(0) are the initial values that play the role of arbitrary constants.
⊛
Although we found the general solution as a ratio of two power series, it is far from the required series representation because we need to identify the value of the constant k = c1/c0 and then divide the two power series. Instead, we use another approach and introduce the function
with first five correct terms. Note that the solution to the given initial value problem has a singularity at x = 1.219762884. Therefore, we expect that ϕ2(x) gives better approximation.
Third estimate: With \( \displaystyle p = -x \left( -1 + \frac{x}{2} + \frac{8}{3}\, x^3 - 2\,x^4 + \frac{2}{5}\, x^5 \right) , \) we integrate again and obtain
Its solution exists in the interval (-∞, 4/e) ≈ (-∞, 1.47152)
because it is the implicit solution of the equation
\( y\,\ln |y| + x/4 = 0 , \) which is expressed through the
Lambert function. It has the following power series expansion:
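The implicit relation can also be verified numerically. Since d(y ln y)/dy = ln y + 1 > 0 for y > 1/e, the equation \( y\,\ln |y| + x/4 = 0 \) has a unique root y ∈ [1/e, 1] for each x ∈ [0, 4/e], and plain bisection finds it. For instance, at x = 2 ln 2 the exact root is y = 1/2, because (1/2) ln(1/2) = −(ln 2)/2. A Python sketch restricted to this branch (the helper name `implicit_y` is ours):

```python
import math

def implicit_y(x, tol=1e-12):
    """Solve y*ln(y) + x/4 = 0 for y in [1/e, 1] by bisection.
    On this interval y*ln(y) is increasing, so the root is unique;
    a real root on this branch exists only for 0 <= x <= 4/e."""
    lo, hi = 1.0 / math.e, 1.0
    f = lambda y: y * math.log(y) + x / 4.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid          # root lies above mid
        else:
            hi = mid          # root lies at or below mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

At the endpoint x = 4/e the root collapses to y = 1/e, which is exactly where the Lambert function branch point sits.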
This problem can be solved in a slightly different way. First, we define
the differential operator (nonlinear because the given equation is
nonlinear)
L[x_, y_] = y'[x] - x - (y[x])^3
Out[1]= -x - y[x]^3 + Derivative[1][y][x]
Then we set n to be 13, the number of terms in the power series:
n = 13
Out[2]= 13
The next cell creates a sum of n terms and effectively turns it into a series by declaring that the terms beyond n are indefinite. O[x]^(n+1) indicates that we know nothing about terms of
order n+1 and beyond. So we define the (finite) series
Substitute this expression into the differential operator
LHS = Collect[L[x, y], x] + O[x]^n
Equate the coefficients to 0. Recall that the operator && means
And.
sys = LogicalExpand[LHS == 0]
Find the coefficients a[i] in the power series of y[x] using
Reverse[Table[a[i],{i,0,n}]].
The latter makes the form of the
coefficients agree with what you would find by hand: the next term is
expressed through the previous terms.
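The same coefficients can be cross-checked outside Mathematica. Assuming, for illustration, the initial condition y(0) = 0 (the excerpt above leaves it implicit), matching the coefficient of x^n in y' = x + y³ gives (n+1)c_{n+1} = [n = 1] + (y³)_n, where (y³)_n is the n-th coefficient of the cube of the series. A Python sketch (exact rationals; the function names are ours):

```python
from fractions import Fraction

def cube_coeff(c, n):
    """Coefficient of x^n in (sum_k c_k x^k)^3, by double convolution."""
    return sum(c[i] * c[j] * c[n - i - j]
               for i in range(n + 1)
               for j in range(n + 1 - i))

def series_x_plus_y_cubed(n_max, y0=0):
    """Maclaurin coefficients of the solution of y' = x + y^3, y(0) = y0.
    Matching x^n:  (n+1) c_{n+1} = [n == 1] + (y^3)_n."""
    c = [Fraction(y0)]
    for n in range(n_max):
        rhs = Fraction(1 if n == 1 else 0) + cube_coeff(c, n)
        c.append(rhs / (n + 1))
    return c

c = series_x_plus_y_cubed(12)
# With y(0) = 0:  y = x^2/2 + x^7/56 + x^12/896 + ...
```

The rapidly thinning pattern of nonzero terms is typical: the cubic nonlinearity only starts contributing at order x⁶.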
As you see, in the blink of an eye, Mathematica provided ten terms of the Maclaurin series for the differential equation subject to appropriate initial conditions. However, we also present a harder way to find such a series through a sequence of Mathematica commands, so that the reader will learn more about this CAS.
For simplicity, set a equal to the desired x0 value. It is assumed that the power series solution in x - x0 has a positive radius of convergence.
Set n equal to the highest power term desired in the power series.
Set yInitial equal to the value of y when x equals x0.
c[1] represents the coefficient in front of the x¹ term
c[2] represents the coefficient in front of the x² term
etc.
Note: when x0 is not equal to 0, we suggest reducing n to something below 5 in order to prevent a significantly long run time.
In[1]:= Clear[yInitial];
Clear[f];
Clear[c];
We demonstrate application of Mathematica in a simple Riccati equation \( y' = x^2 + y^2 . \)
Now set up our Taylor series as a symbolic expansion using derivatives of `y` evaluated at the origin. We use an order of 15, but that is something one would probably make an argument to a function if automation is needed.
yy = Series[y[x], {x, 0, 15}];
Next apply the differential operator and add the initial conditions. Then find a solution that makes all powers of x vanish.
Polynomial approximation (in red) and the actual solution (in blue).
Mathematica code
Taylor Series or Differential Transform Algorithm
The Taylor series method introduces the solution by an infinite series
\( y(x) = \sum_{n\ge 0} c_n \left( x - x_0 \right)^n . \) The coefficients \( c_n = \frac{y^{(n)} (x_0 )}{n!} \) are expressed through the derivatives of the unknown function evaluated at the center, which in turn are determined recursively from the given initial value problem. The initial term follows from the initial condition c0 = y0.
The first coefficient can also be found easily:
Practical implementation of this formula may be a challenging problem even for a simple slope function such as a polynomial because it requires application of the chain rule. It becomes intractable in the general case because the number of terms grows exponentially (see Faà di Bruno's formula). Therefore, this algorithm is mostly used to find a polynomial approximation by utilizing a computer algebra system, such as Mathematica, which can help a lot with manipulations and simplifications, at least for the first few terms.
Frobenius method
Although the method of solving linear differential equations with regular singular points was developed by L. Fuchs in 1866, G. Frobenius presented its simplification six years later. It is customary to name the method after the latter author.
The Frobenius method is most effective for linear differential equations, and it applies to some limited classes of nonlinear equations for which the slope function admits a power series expansion, for example, when the slope function contains a square or the reciprocal of the unknown function. To demonstrate an application of the Frobenius method, we consider a simple initial value problem
\[
y' = y^2 , \qquad y(0) =1,
\]
for which the explicit solution is known: \( y(x) = (1-x)^{-1} . \) Substituting the power series \( y(x) = 1 + \sum_{n\ge 1} c_n x^n \) into the given differential equation and using the convolution formula, we obtain
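For this model problem the recurrence can be verified directly: matching x^n in y' = y² gives (n+1)c_{n+1} = Σ_{k=0}^{n} c_k c_{n−k} with c_0 = 1, and induction yields c_n = 1 for all n, reproducing the geometric series 1/(1−x) = Σ xⁿ. A quick Python check (exact rationals; the function name is ours):

```python
from fractions import Fraction

def frobenius_y_squared(n_max):
    """Coefficients of the series solution of y' = y^2, y(0) = 1.
    Matching x^n:  (n+1) c_{n+1} = sum_{k=0}^{n} c_k c_{n-k}."""
    c = [Fraction(1)]                       # c_0 = y(0) = 1
    for n in range(n_max):
        c.append(sum(c[k] * c[n - k] for k in range(n + 1)) / (n + 1))
    return c

# Every coefficient equals 1:  y = 1 + x + x^2 + ... = 1/(1 - x)
```

If c_0 = … = c_n = 1, the convolution sum equals n + 1, so c_{n+1} = 1 as well, which is the induction step behind the comment above.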
In all problems, y' denotes the derivative of function y(x) with respect to independent variable x and \( \dot{y} = {\text d} y/{\text d}t \) denotes the derivative with respect to time variable.
Using ADM, solve the initial value problem: \( \left( 1 - xy \right) y' = y^2 , \qquad y(0) =1, \qquad (xy \ne 1) . \)
Using ADM, solve the initial value problem: \( \left( x - y \right) y' = y , \qquad y(0) =1 , \qquad (y \ne x) . \)