Preface


A scalar Riccati differential equation is an equation of the form \( y' + p(x)\,y = g(x)\,y^{2} + f(x) , \quad \) where p, g, and f are given real-valued functions, continuous on some finite interval, and g(x) ≠ 0. The Riccati equation is used in different areas of physics, engineering, and mathematics, such as quantum mechanics, thermodynamics, and control theory. However, most applications use the vector Riccati differential equation, with matrices in place of the functions above---the corresponding dynamic systems are discussed in the second course APMA0340.

There are numerous online video tutorials covering this subject; one of the best is at this link. They describe a sequence of steps that converts the Riccati equation to either a Bernoulli equation or a linear differential equation. These steps require knowing a particular solution in advance, applying non-negativity constraints judiciously to certain terms, and then carefully substituting identities obtained from that particular solution to reach the general solution.


Riccati Equations


Jacopo Riccati

Jacopo Francesco Riccati (1676--1754) was a Venetian mathematician and jurist. In 1724, he considered the particular differential equation

\[ u' (x) = n\,x^{m+n-1} - x^{-n} u^2 (x) , \]
where m and n are real constants. Later, this equation was generalized to the first order non-linear equation, which now bears Riccati's name:

\begin{equation} y' + p(x)\,y = g(x)\,y^2 + f(x) , \label{EqRiccati.1} \end{equation}
where p, g, and f are given real-valued functions on a finite interval [𝑎, b]. According to G.N. Watson, the designation of Eq.(1) as “Riccati’s equation” was made by d’Alembert in 1768 (Histoire de l'Académie royale des sciences et des Belles Lettres de Berlin, XIX, 1768; published 1770, page 242). Actually, differential equation (1) was known to the mathematical community somewhat earlier, primarily to Daniel Bernoulli (1700--1782). In fact, Riccati refused to dispute priority for discovering equation (1), saying that he had no intention of challenging the Bernoulli family.

Jacopo Riccati was educated first at the Jesuit school for the nobility in Brescia, and in 1693 he entered the University of Padua to study law. He received a doctorate in law in 1696. Encouraged by Stefano degli Angeli to pursue mathematics, he studied mathematical analysis. Riccati received various academic offers, but declined them in order to devote his full attention to the study of mathematical analysis on his own. Peter the Great invited him to Russia as president of the St. Petersburg Academy of Sciences. He was also invited to Vienna as an imperial councilor and was offered a professorship at the University of Padua. He declined all these offers. He was often consulted by the Senate of Venice on the construction of canals and dikes along rivers.

When f(x) = 0, we get a Bernoulli equation. If g(x) = 0, the equation becomes a first order linear ordinary differential equation. Therefore, we exclude these trivial cases from our consideration. It is well known that solutions to the general Riccati equation are not available in closed form; therefore, it is desirable to simplify the Riccati equation by substitution. The first observation given below shows that the coefficient g(x) can always be normalized to 1, provided that g(x) ≠ 0.

Lemma 1: Upon substitution y = u/g into the Riccati equation y′ + p(x)y = g(x)y² + f(x), g(x) ≠ 0 on some interval, the new dependent variable u becomes a solution of the following Riccati equation \[ u' + \left( p - \frac{g'}{g} \right) u = u^2 + g\,f , \qquad g(x) \ne 0 . \]
   
Substitution of y = u/g into the Riccati equation yields \[ \frac{1}{g}\, u' - \frac{g'}{g^2}\, u + \frac{p}{g}\, u = g \left( \frac{1}{g}\,u \right)^2 + f . \] Multiplying both sides of the latter by g(x), we obtain \[ u' - \frac{g'}{g}\, u + p\, u = u^2 + g(x)\,f(x) . \]
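This identity can also be cross-checked symbolically in Python with SymPy (an independent verification sketch; the sample coefficients p, g, f below are our own illustrative choices, not taken from the text):

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
# Sample coefficients with g(x) != 0 (arbitrary illustrative choices)
p, g, f = sp.sin(x), sp.exp(x) + 2, sp.cos(x)

y = u / g                                   # substitution y = u/g
riccati_residual = sp.diff(y, x) + p*y - g*y**2 - f
lemma_residual = sp.diff(u, x) + (p - sp.diff(g, x)/g)*u - u**2 - g*f

# The Riccati residual equals the Lemma 1 residual divided by g,
# so the two equations vanish simultaneously.
check = sp.simplify(riccati_residual - lemma_residual/g)
print(check)   # 0
```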
On the other hand, the linear term can be dropped by making the coefficient p(x) = 0. This is achieved by the substitution
\[ y = w\,\exp \left\{ -\int p(x)\,{\text d} x \right\} . \]
Under this substitution, the function w(x) satisfies
\begin{equation} \label{EqRiccati.2} w' = A(x) + B(x)\,w^2 , \end{equation}
where
\[ A(x) = f(x)\,\exp \left\{ \int p(x)\,{\text d} x \right\} , \quad B(x) = g(x) \,\exp \left\{ -\int p(x)\,{\text d} x \right\} . \]
Putting the above into Wolfram code, we first define the functions p, g, f as symbolic string placeholders (note that Integrate and D treat the strings as constants, so this only displays the structure of the formulas)
p[x_] := "p(x)"
g[x_] := "g(x)"
f[x_] := "f(x)"
Define the substitution for y
y[x_] := w[x] Exp[-Integrate[p[x], x]]
Define A(x) and B(x)
a[x_] := f[x] Exp[Integrate[p[x], x]]
b[x_] := g[x] Exp[-Integrate[p[x], x]]
New Riccati equation for w
wPrime = a[x] + b[x] w[x]^2;
Display the results
{y[x], a[x], b[x], wPrime};
TableForm[FullSimplify[%]]
We verify the above formulas in Wolfram code by choosing specific functions:
Clear[p, g, f, y, w, x]
p[x_] := x
g[x_] := Exp[x]
f[x_] := x
Substitution
y[x_] = w[x] Exp[-Integrate[p[x], x]]
E^(-x^2/2) w[x]
Coefficients A and B
a[x_] := f[x] Exp[Integrate[p[x], x]]
b[x_] := g[x] Exp[-Integrate[p[x], x]]
Reduction to normal form
A[x_] := f[x]*g[x] - p[x]^2/4 - D[p[x], x]/2 - (3/4) (D[g[x], x]/g[x])^2 + p[x] D[g[x], x]/(2 g[x]) + (1/2) D[g[x], {x, 2}]/g[x]
Normal form of the Riccati equation
normalForm = D[w[x], x] == A[x] + w[x]^2
Derivative[1][w][x] == -(3/4) + x/2 + E^x x - x^2/4 + w[x]^2
FullSimplify[%]
3 + x (-2 - 4 E^x + x) + 4 Derivative[1][w][x] == 4 w[x]^2
The Riccati equation \eqref{EqRiccati.1} can be reduced to the normal form with another substitution
\[ y = \frac{1}{g(x)}\, w + \frac{p(x)}{2 g(x)} - \frac{g' (x)}{2\,g^2 (x)} , \quad g(x) \ne 0 . \]
Define the functions p, g, f as functions of x
p[x_] := "p(x)"
g[x_] := "g(x)"
Define the substitution for y in terms of w
y[x_] := w[x]/g[x] + p[x]/(2 g[x]) - D[g[x], x]/(2 g[x]^2)
Display the resulting expression for y
y[x]
("p(x)")/(2 "g(x)") + w[x]/"g(x)"
Then the new unknown function w(x) satisfies the Riccati equation in normal form
\begin{equation} \label{EqRiccati.3n} w' = A(x) + w^2 , \end{equation}
where
\[ A(x) = f\,g - \frac{p^2}{4} - \frac{p'}{2} - \frac{3}{4} \left( \frac{g'}{g} \right)^2 + \frac{p\,g'}{2\,g} + \frac{1}{2}\,\frac{g''}{g} . \]
Therefore, every Riccati equation can be reduced to the equation in normal form (3), with coefficient B(x) = 1 of the quadratic term. Many other transformations reducing the Riccati equation to the special form (3) are known; consult the text by Polyanin & Zaitsev.

Continuing to use Wolfram code to verify the above, we first define the functions p, g, f, and their derivatives as functions of x (with string placeholders the derivative terms vanish, so only the structure of the formula is displayed):

p[x_] := "p(x)"
g[x_] := "g(x)"
f[x_] := "f(x)"
pPrime[x_] := D[p[x], x]
gPrime[x_] := D[g[x], x]
gDoublePrime[x_] := D[g[x], {x, 2}]
Define A(x) according to the given formula
a[x_] := f[x] g[x] - p[x]^2/4 - pPrime[x]/2 - (3/4) (gPrime[x]/g[x])^2 + p[x] gPrime[x]/(2 g[x]) + gDoublePrime[x]/(2 g[x])
Define the new Riccati equation for w in normal form
wPrime = a[x] + w[x]^2;
Display the resulting expression for A (x) and the Riccati equation in normal form
TableForm[{a[x], wPrime}]
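Because the string placeholders differentiate to zero, the Mathematica check above does not exercise the derivative terms of A(x). A fully symbolic cross-check can be done in Python with SymPy (an independent sketch; the sample coefficients below are our own choices):

```python
import sympy as sp

x = sp.symbols('x')
w = sp.Function('w')(x)
# Sample coefficients with g(x) != 0 (arbitrary illustrative choices)
p, g, f = sp.sin(x), sp.exp(x) + 2, sp.cos(x)
gp = sp.diff(g, x)

# Substitution reducing the Riccati equation to normal form
y = w/g + p/(2*g) - gp/(2*g**2)
A = (f*g - p**2/4 - sp.diff(p, x)/2 - sp.Rational(3, 4)*(gp/g)**2
     + p*gp/(2*g) + sp.diff(g, x, 2)/(2*g))

riccati_residual = sp.diff(y, x) + p*y - g*y**2 - f
normal_residual = sp.diff(w, x) - A - w**2       # residual of w' = A + w^2

# The two residuals differ by the factor 1/g, so they vanish together.
check = sp.simplify(riccati_residual - normal_residual/g)
print(check)   # 0
```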
   
Example 1: We consider the Riccati equation \[ y' + y = e^x y^2 + 4\,e^{-x} . \tag{1.1} \] The substitution \[ y = e^{-x} u \qquad \Longrightarrow \qquad u' = u^2 +4 \] reduces this equation (1.1) to a separable equation \[ \frac{{\text d}u}{u^2 + 4} = {\text d}x \qquad \Longrightarrow \qquad \int \frac{{\text d}u}{u^2 + 4} = \frac{1}{2}\,\arctan \left( \frac{u}{2} \right) = x + C . \]
Integrate[1/(u^2 + 4), u]
1/2 ArcTan[u/2]
Hence, the general solution of the Riccati equation (1.1) becomes \[ \arctan \left( \frac{e^x y}{2} \right) = 2x + C . \]    ■
End of Example 1

One of the strengths of the Riccati equation is that it is a unifying link between linear and nonlinear differential equations. As such, the Riccati equation has much in common with linear equations; for example, it has no singular solution. Except in special cases, the Riccati equation cannot be solved analytically using elementary functions or quadratures, and the most common ways to obtain its solution are to represent it as a series or to apply a numerical solver. Moreover, the Riccati equation can be reduced to a second order linear differential equation by Euler's substitution

\begin{equation} \label{EqRiccati.3} y(x) = - \frac{u'}{g(x)\,u(x)} , \qquad u(x) \ne 0 . \end{equation}
Then the function u(x) satisfies the second order linear differential equation:
\begin{equation} \label{EqRiccati.4} u'' + a(x)\, u' (x) + b(x)\, u =0, \end{equation}
where
\[ a(x) = p(x) - \frac{g' (x)}{g(x)} , \quad b(x) = g(x)\,f(x) . \]
Similarly, the transformation
\[ u(x) = \exp \left\{ - \int y(x)\,g(x)\,{\text d} x \right\} \]
reduces the Riccati equation (1) to the second order linear differential equation
\[ u''(x) + \left( p(x) - \frac{g' (x)}{g(x)} \right) u' (x) + f(x)\,g(x)\, u(x) = 0 . \]
Clear[p, g, f, u, y, w];
riccatiEq = y'[x] + p[x] y[x] == g[x] y[x]^2 + f[x];
eulerSubstitution = y -> Function[x, -u'[x]/(g[x] u[x])];
linearEq = u''[x] + (p[x] - D[g[x], x]/g[x]) u'[x] + f[x] g[x] u[x] == 0;
Simplify[(-g[x] u[x]) (Subtract @@ riccatiEq /. eulerSubstitution) - Subtract @@ linearEq]
0
The zero output confirms that multiplying the substituted Riccati equation by −g(x)u(x) reproduces the linear equation \eqref{EqRiccati.4}.
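The same reduction can be cross-checked in Python with SymPy (an independent verification sketch; the sample coefficients are our own choices):

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
# Sample coefficients with g(x) != 0 (arbitrary illustrative choices)
p, g, f = sp.sin(x), sp.exp(x) + 2, sp.cos(x)

y = -sp.diff(u, x)/(g*u)                    # Euler's substitution
riccati_residual = sp.diff(y, x) + p*y - g*y**2 - f
linear_lhs = (sp.diff(u, x, 2)
              + (p - sp.diff(g, x)/g)*sp.diff(u, x) + g*f*u)

# Multiplying the substituted Riccati residual by -g*u yields
# the left-hand side of the linear equation.
check = sp.simplify(-g*u*riccati_residual - linear_lhs)
print(check)   # 0
```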
   
Example 2: We consider the Riccati equation \[ y' + 3\,x^2 y(x) = x\, y^2 + \sinh x , \tag{2.1} \] where y′ = dy/dx is the derivative of unknown function y(x). Application of Euler's substitution \( \displaystyle \quad y = - u'/(x\,u) \ \) yields \[ - \frac{u''}{x\,u} + \frac{u'}{x^2 u} + \frac{(u' )^2}{x\,u^2} - 3x\,\frac{u'}{u} = x \left( \frac{u'}{x\,u} \right)^2 + \sinh x . \] Canceling square terms, we simplify this equation as \[ - \frac{u''}{x\,u} + \frac{u'}{x^2 u} - 3x\,\frac{u'}{u} = \sinh x \] or upon multiplication by x u, we get a linear equation \[ - u'' + u' \left( \frac{1}{x} - 3x^2 \right) = x\,u\,\sinh x . \tag{2.2} \]

We can make another substitution \[ w = \exp \left\{ - \int x\,y(x)\,{\text d} x \right\} . \] Now we calculate the expression with some coefficients 𝑎 and b: \[ w'' + a\,w' + b\,w = \left\{ -y -x\,y' + x^2 y^2 -a\,x\,y + b \right\} \exp \left\{ - \int x\,y(x)\,{\text d} x \right\} . \] Assuming that y(x) is a solution of the Riccati equation (2.1), we get \[ \exp \left\{ \int x\,y(x)\,{\text d} x \right\} \left[ w'' + a\,w' + b\,w \right] = -y -x \left( -3\,x^2 y + x\,y^2 + \sinh x \right) + x^2 y^2 -a\,x\,y + b . \] As you can see, the quadratic term y² is cancelled, and the expression simplifies to \[ \exp \left\{ \int x\,y(x)\,{\text d} x \right\} \left[ w'' + a\,w' + b\,w \right] = x\,y \left[ 3\,x^2 - \frac{1}{x} - a \right] -x\,\sinh x + b . \] This expression vanishes if \[ a = 3\,x^2 - \frac{1}{x} \qquad \mbox{and} \qquad b = x\,\sinh x . \]

Clear[a, b, g, p, u, x, y, w];
riccatiEq = y'[x] + 3 x^2 y[x] == x y[x]^2 + Sinh[x];
eulerSubstitution = y -> Function[x, -u'[x]/(x u[x])];
linearEq = -u''[x] + u'[x] (1/x - 3 x^2) == x u[x] Sinh[x];
Simplify[(x u[x]) (Subtract @@ riccatiEq /. eulerSubstitution) - Subtract @@ linearEq]
0
The zero output confirms that Euler's substitution transforms Eq.(2.1) into the linear equation (2.2).
w[x_] := Exp[-Integrate[x y[x], x]]
a[x_] := 3 x^2 - 1/x
b[x_] := x Sinh[x]
FullSimplify[(w''[x] + a[x] w'[x] + b[x] w[x])/w[x] + x (y'[x] + 3 x^2 y[x] - x y[x]^2 - Sinh[x])]
0
Then the function w(x) is a solution of the linear differential equation \( \displaystyle \quad w'' + a\,w' + b\,w = 0 , \ \) which is exactly the same as Eq.(2.2).    ■
End of Example 2
Conversely, every linear homogeneous differential equation with variable coefficients \( u'' + a\,u' + b\,u = 0 \) can be reduced to the Riccati equation \eqref{EqRiccati.1} by substitution
\[ u(x) = \exp \left\{ \int y(x)\,{\text d} x \right\} . \]
Indeed, substituting its derivatives
\[ u'(x) = y(x)\,\exp \left\{ \int y(x)\,{\text d} x \right\} , \quad u''(x) = y(x)^2 \,\exp \left\{ \int y(x)\,{\text d} x \right\} + y'(x)\,\exp \left\{ \int y(x)\,{\text d} x \right\} , \]
into the linear differential equation \( u'' + a\,u' + b\,u = 0 , \ \) we obtain (upon cancellation of common exponential term) the Riccati equation
\[ y^2 (x) + y' (x) + a(x)\, y(x) + b(x) = 0 . \]
Clear[a, b, u, y];
u[x_] := Exp[Integrate[y[x], x]]
riccatiResidual = y[x]^2 + y'[x] + a[x] y[x] + b[x];
Simplify[(u''[x] + a[x] u'[x] + b[x] u[x])/u[x] - riccatiResidual]
0
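The converse reduction is easy to confirm with SymPy as well (an independent sketch; a and b are left as arbitrary symbolic functions):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
a, b = sp.Function('a')(x), sp.Function('b')(x)

u = sp.exp(sp.Integral(y, x))     # substitution u = exp(∫ y dx)
linear_lhs = sp.diff(u, x, 2) + a*sp.diff(u, x) + b*u

# Dividing by the common exponential factor leaves the Riccati equation.
check = sp.simplify(linear_lhs/u - (y**2 + sp.diff(y, x) + a*y + b))
print(check)   # 0
```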
   
Example 3: Let us consider a linear differential equation of second order with variable coefficients \[ u'' + \left( x^2 + 1 \right) u' + \left( \cosh x \right) u = 0 , \] where u′′ and u′ denote the second and first derivatives of the function u(x), respectively. Using the substitution \[ u(x) = \exp \left\{ \int y(x)\,{\text d} x \right\} , \] we calculate the derivatives \[ u' = y(x)\,\exp \left\{ \int y(x)\,{\text d} x \right\} , \quad u''(x) = y(x)^2 \,\exp \left\{ \int y(x)\,{\text d} x \right\} + y'(x)\,\exp \left\{ \int y(x)\,{\text d} x \right\} . \] Cancellation of the exponential term yields \[ \exp \left\{ - \int y(x)\,{\text d} x \right\} \left[ u'' + \left( x^2 + 1 \right) u' + \left( \cosh x \right) u \right] = y^2 + y' + \left( x^2 + 1 \right) y + \cosh x = 0 . \] Therefore, the function y(x) must satisfy the Riccati equation \[ y^2 + y' + \left( x^2 + 1 \right) y + \cosh x = 0 . \]
linearDEq = u''[x] + (x^2 + 1) u'[x] + Cosh[x] u[x] == 0
u[x_] := Exp[Integrate[y[x], x]]
uPrime[x_] := y[x] Exp[Integrate[y[x], x]]
riccatiEq = y[x]^2 + y'[x] + (x^2 + 1) y[x] + Cosh[x] == 0; FullSimplify[%]
Cosh[x] + y[x] (1 + x^2 + y[x]) + Derivative[1][y][x] == 0
   ■
End of Example 3
The same Euler substitution \eqref{EqRiccati.3} reduces the Riccati equation written in normal form w′ = A(x) + B(x)w² to the self-adjoint second order differential equation
\[ \frac{\text d}{{\text d}x} \left[ r(x)\, u' \right] + A(x)\, u(x) = 0 , \qquad w = - \frac{u'}{B(x)\,u} , \]
where r(x) = 1/B(x).    
Example 4: We consider the Riccati equation in normal form: \[ y' = \cosh x + \left( x^2 +1 \right) y^2 . \tag{4.1} \] Using Euler's substitution \( \displaystyle \quad y = - u'/\left( \left( x^2 + 1 \right) u \right) , \ \) we get \[ y' = \frac{\text d}{{\text d}x} \left( - \frac{u'}{\left( x^2 + 1 \right) u} \right) = - \frac{u''}{\left( x^2 + 1 \right) u} + \frac{2x}{\left( x^2 + 1 \right)^2} \cdot \frac{u'}{u} + \frac{1}{x^2 +1} \left( \frac{u'}{u} \right)^2 = \cosh x + \left( x^2 +1 \right) \left( - \frac{u'}{\left( x^2 + 1 \right) u} \right)^2 . \] Cancellation of the u′-squared terms simplifies the latter: \[ - \frac{u''}{\left( x^2 + 1 \right) u} + \frac{2x}{\left( x^2 + 1 \right)^2} \cdot \frac{u'}{u} = \cosh x . \] Multiplying both sides of the latter by −u, we get \[ \frac{\text d}{{\text d}x} \left( \frac{1}{x^2 +1} \cdot \frac{{\text d}u}{{\text d}x} \right) + u\,\cosh x = 0 , \] which is of self-adjoint form.
Clear[u, y];
riccatiEq = y'[x] == Cosh[x] + (x^2 + 1) y[x]^2;
eulerSub = y -> Function[x, -u'[x]/((x^2 + 1) u[x])];
selfAdjoint = D[u'[x]/(x^2 + 1), x] + Cosh[x] u[x];
Simplify[-u[x] (Subtract @@ riccatiEq /. eulerSub) - selfAdjoint]
0
Multiplying the substituted equation by −u and simplifying indeed recovers the same equation written in self-adjoint form, \( \displaystyle \quad \frac{\text d}{{\text d}x} \left( \frac{1}{x^2 +1}\, \frac{{\text d}u}{{\text d}x} \right) + u\,\cosh x = 0 . \)    ■
End of Example 4

Since the Riccati equation \eqref{EqRiccati.1} is of the first order, its general solution contains one arbitrary constant. However, the general solution to the second order homogeneous equation \eqref{EqRiccati.4} contains two arbitrary constants. Using Euler's substitution \eqref{EqRiccati.3}, we obtain the general solution of the Riccati equation

\begin{equation} \label{EqRiccati.5} y(x) = \frac{-1}{g(x)} \,\frac{c_1 y'_1 (x) + c_2 y'_2 (x)}{c_1 y_1 (x) + c_2 y_2 (x)} \end{equation}
that contains two arbitrary constants c₁ and c₂, not both zero. Here y₁ and y₂ are two linearly independent solutions of the second order differential equation \eqref{EqRiccati.4}. However, the expression above actually depends only on the ratio of these two arbitrary constants.
Theorem 1: If g(x) ≠ 0 and all coefficients p(x), g(x), and f(x) are continuous on the interval α ≤ x ≤ β, every solution of the Riccati equation may be written in the form (5), where c₁ and c₂ are arbitrary constants, not both zero, and where y₁ and y₂ are linearly independent solutions of the second order differential equation (4).

Conversely, if y₁ and y₂ are linearly independent solutions of the second order differential equation (4) and if c₁ and c₂ are arbitrary constants, not both zero, the function (5) is a solution of Eq.(1) on any interval in which c₁y₁(x) + c₂y₂(x) ≠ 0.
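Formula (5) can be illustrated on a concrete sample equation (our own choice): for y′ = y² + 4 we have p = 0, g = 1, f = 4, so Eq.(4) becomes u″ + 4u = 0 with independent solutions sin 2x and cos 2x. SymPy confirms that formula (5) then solves the Riccati equation for arbitrary c₁, c₂:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y1, y2 = sp.sin(2*x), sp.cos(2*x)    # solutions of u'' + 4u = 0

# Formula (5) with g = 1
y = -(c1*sp.diff(y1, x) + c2*sp.diff(y2, x))/(c1*y1 + c2*y2)

check = sp.simplify(sp.diff(y, x) - y**2 - 4)
print(check)   # 0
```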

   
Let y₁ be a particular solution to the Riccati equation (1). Then the substitution y = y₁ + z reduces the Riccati equation to a Bernoulli equation, whose general solution depends on one arbitrary constant: z = 1/(Cϕ + ψ) for some functions ϕ and ψ.

Conversely, let ϕ₁, ψ₁, ϕ, and ψ be arbitrary smooth functions and let \[ y = \frac{C\,\phi_1 + \psi_1}{C\,\phi + \psi} , \tag{E.1} \] where C is an arbitrary constant, satisfy a certain first order differential equation. We solve Eq.(E.1) for the constant C: \[ C = \frac{\psi_1 - y\,\psi}{y\,\phi - \phi_1} . \] Differentiating this relation, we confirm that y satisfies a differential equation with a quadratic right-hand side, i.e., a Riccati equation.

   
Example 5: Let us consider the Riccati equation \[ y' = \frac{{\text d}y}{{\text d} x} = y^2 + 4 . \tag{5.1} \] Using Euler's substitution y = −w′/w, we reduce it to the linear differential equation \[ -\frac{w''}{w} + \left( \frac{w'}{w} \right)^2 = \left( \frac{w'}{w} \right)^2 + 4 \qquad \iff \qquad w'' + 4\, w = 0. \] The general solution of the latter is \( \displaystyle \quad w = c_1 \sin (2x) + c_2 \cos (2x) , \ \) where c₁ and c₂ are arbitrary constants. The null solution (c₁ = c₂ = 0) is excluded because w must not vanish. Euler's substitution then yields the general solution of Eq.(5.1): \[ y = - \frac{2 c_1 \cos (2x) - 2 c_2 \sin (2x)}{c_1 \sin (2x) + c_2 \cos (2x)} , \qquad |c_1 | + |c_2 | > 0. \] The choice c₁ = 0 leads to the particular solution y(x) = 2 tan(2x).
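A quick SymPy check of the particular solution (our own verification sketch):

```python
import sympy as sp

x = sp.symbols('x')
y = 2*sp.tan(2*x)     # particular solution obtained with c1 = 0

# Residual of y' = y^2 + 4
check = sp.simplify(sp.diff(y, x) - y**2 - 4)
print(check)   # 0
```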

   ■
End of Example 5

It is sometimes possible to find a solution of a Riccati equation by guessing. Without knowing a solution to the Riccati equation, there is no chance of finding its general solution explicitly. If one solution ϕ is known, then the substitution w = y − ϕ reduces the Riccati equation to a Bernoulli equation. Another substitution, y = ϕ + 1/v, reduces the Riccati equation to a linear equation for v.

Theorem 2: Let y₁(x) be a particular solution of the Riccati equation (1). Then the general solution of Eq.(1) is given by \[ y (x) = y_1 (x) + u(x) , \] where \[ u = \exp\left\{ \int \left( 2g\,y_1 - p \right) {\text d}x \right\} \left[ C - \int \left( g\, \exp \left\{ \int \left( 2g\,y_1 - p \right) {\text d} x \right\} \right) {\text d}x \right]^{-1} , \] and C is an arbitrary constant.
   
Let \[ y = y_1 (x) + u(x) , \] where u is a new dependent variable and y₁(x) is a particular solution of y′ + p(x) y(x) = g(x) y(x)² + f(x). Differentiating y(x) = y₁(x) + u(x), we obtain \[ \frac{{\text d}y}{{\text d}x} = \frac{{\text d}y_1}{{\text d}x} + \frac{{\text d}u}{{\text d}x} . \] Substituting this sum into the Riccati equation, we get \[ \underline{\frac{{\text d}y_1}{{\text d}x}} + \frac{{\text d}u}{{\text d}x} + \underline{p\,y_1} + p\,u = g \left( y_1 + u \right)^2 + f = \underline{g\,y_1^2} + g \left( 2 y_1 u + u^2 \right) + \underline{f(x)} . \] The underlined terms on the left and right sides cancel because y₁(x) is a particular solution satisfying the equation. As a result, we obtain the differential equation for the function u(x): \[ \frac{{\text d}u}{{\text d}x} + p\,u = g \left( u^2 + 2u\, y_1 \right) . \tag{E2.1} \] Clearly, this is a Bernoulli equation. We solve it by Bernoulli's method: u = v · w, where v is a solution of the linear part of Eq.(E2.1), \[ \frac{{\text d}v}{{\text d}x} + p\,v = 2g\, y_1 v , \tag{E2.2} \] and w is the general solution of the separable equation \[ v\, \frac{{\text d}w}{{\text d}x} = g\,v^2 w^2 . \tag{E2.3} \] Since the linear equation (E2.2) is homogeneous, we separate variables and integrate: \[ \frac{1}{v}\,{\text d}v = \left( 2g\,y_1 - p \right) {\text d} x \quad \Longrightarrow \quad \ln v = \int \left( 2g\,y_1 - p \right) {\text d} x . \] Since we need just one solution of Eq.(E2.2), we drop the constant of integration. Now we separate variables in Eq.(E2.3) and integrate: \[ \frac{{\text d}w}{w^2} = g\,v \,{\text d}x \quad \Longrightarrow \quad - \frac{1}{w} = - C + \int g\, \exp \left\{ \int \left( 2g\,y_1 - p \right) {\text d} x \right\} \,{\text d}x . \] Multiplication of the functions v and w yields the general solution formula of the theorem.
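Theorem 2 can be tested on a sample equation (our own choice): y′ − 2y/x = y² − x⁴, i.e., p = −2/x, g = 1, f = −x⁴, which has the particular solution y₁ = x². SymPy confirms that y₁ + u, with u given by the theorem, solves the equation:

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)
p, g, f = -2/x, sp.Integer(1), -x**4    # sample equation y' + p y = g y^2 + f
y1 = x**2                               # known particular solution

mu = sp.simplify(sp.exp(sp.integrate(2*g*y1 - p, x)))   # exp(∫(2 g y1 - p) dx)
u = mu/(C - sp.integrate(g*mu, x))                      # formula of Theorem 2
y = y1 + u

check = sp.simplify(sp.diff(y, x) + p*y - g*y**2 - f)
print(check)   # 0
```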
   
Example 6: Consider the Riccati equation

\[ y' = 2y/x+y^2 -x^4 . \tag{6.1} \]

We plot the separatrix and the direction field

field = StreamPlot[{1, 2*y/x + y^2 - x^4}, {x, -2, 2}, {y, -2, 2}, StreamColorFunction -> "Rainbow", StreamPoints -> 42, StreamScale -> {Full, All, 0.04}];
sol = NDSolve[{y'[x] == 2*y[x]/x + (y[x])^2 - x^4 , y[0.1] == 0.01}, y[x], {x, 0, 2}];
curve = Plot[Evaluate[y[x] /. sol], {x, 0, 2}, PlotRange -> {{0, 2}, {-2, 2}}, PlotStyle -> {Red, Thickness[0.01]}];
Show[field, curve]
         Direction field and separatrix (in red).       Mathematica code.

The given Riccati equation can be solved by the substitution \( y =x^2 +1/v(x) , \) where y₁ = x² is a particular solution of the given Riccati equation.
R[x_, y_] = (y'[x] - 2 y[x]/x - y[x]^2 + x^4 )
y1[x_] = x^2
R[x, y1]
Simplify[Expand[v[x]^2 R[x, Function[t, t^2 + 1/v[t]]]]]
DSolve[% == 0, v[x], x] (* solve linear equation for v *)
y2[x_] = Simplify[(y1[x] + 1/v[x]) /. %[[1]]]
Out[11]= x^4 - (2 y[x])/x - y[x]^2 + Derivative[1][y][x]
Out[12]= x^2
Out[13]= 0
Simplification yields the linear equation \( \displaystyle \quad v' + 2 \left( x^2 + \frac{1}{x} \right) v = -1 , \ \) whose general solution is \( \displaystyle \quad v(x) = - \frac{1}{2x^2} - \frac{C}{x^2}\, e^{-2x^3/3} . \ \) Substituting back, we obtain the general solution of the given Riccati equation \[ y = x^2 + \frac{1}{v(x)} = x^2\, \frac{2C - e^{2x^3/3}}{2C + e^{2x^3/3}} , \] where C is an arbitrary constant. The limit C → ∞ recovers the particular solution y₁ = x², while the choice C = 0 gives another particular solution, y = −x².
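The general solution obtained from the substitution y = x² + 1/v can be verified independently in Python with SymPy (our own verification sketch; the closed form below is stated explicitly for the check):

```python
import sympy as sp

x, C = sp.symbols('x C')
E = sp.exp(2*x**3/3)
y = x**2*(2*C - E)/(2*C + E)    # general solution of y' = 2y/x + y^2 - x^4

check = sp.simplify(sp.diff(y, x) - 2*y/x - y**2 + x**4)
print(check)   # 0
```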
   ■
End of Example 6

Riccati Equations in Normal Form


When the coefficients of the Riccati equation have certain special properties, it can be solved by quadrature. We do not pursue this topic in full and consider only one special case; more details can be found in the references.

Suppose that g(x) ≡ 1 and the other coefficients p(x) and f(x) of the Riccati equation \eqref{EqRiccati.1} are polynomials in the variable x. If the degree of the polynomial \( \displaystyle \quad \Delta = p^2 + 2\,p' - 4\,f \quad \) is odd, the Riccati equation cannot possess a polynomial solution. If the degree of Δ is even, the equation may possess only the following polynomial solutions

\[ y = \frac{1}{2} \left( p(x) \pm \left\lfloor \sqrt{\Delta} \right\rfloor \right) , \]
where \( \displaystyle \quad \left\lfloor \sqrt{\Delta} \right\rfloor \ \) denotes the polynomial part of the expansion of \( \displaystyle \quad \sqrt{\Delta} \ \) in decreasing powers of x (for example, \( \displaystyle \quad \left\lfloor \sqrt{x^2 - 3x + 7} \right\rfloor = x - 3/2 \) ).    
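The polynomial part in the example can be checked in Python with SymPy: the difference between √(x² − 3x + 7) and x − 3/2 must tend to zero as x → ∞ (our own verification sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
delta = x**2 - 3*x + 7
poly_part = x - sp.Rational(3, 2)

# The remainder sqrt(Δ) - ⌊sqrt(Δ)⌋ decays at infinity
remainder_limit = sp.limit(sp.sqrt(delta) - poly_part, x, sp.oo)
print(remainder_limit)   # 0
```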

Example 7: Consider the Riccati equation

\[ y' = y^2 -x . \tag{7.1} \]

      We plot the nullcline y² = x (in black) and the direction field for the Riccati equation \( y' = y^2 - x. \)
dfield = VectorPlot[{1, y^2 - x}, {x, -2, 2}, {y, -2, 2}, Axes -> True, VectorScale -> {Small, Automatic, None}, AxesLabel -> {"x", "dy/dx=y^2 - x"}];
l1 = Plot[Sqrt[x], {x, -2, 2}, PlotStyle -> {Thickness[0.01], Black}];
l2 = Plot[-Sqrt[x], {x, -2, 2}, PlotStyle -> {Thickness[0.01], Black}];
sol1 = NDSolve[{y'[x] == (y[x])^2 - x, y[-1.5] == -1.5}, y, {x, -2, 2}];
s1 = Plot[y[x] /. sol1, {x, -2, 2}, PlotStyle -> {Thickness[0.008], Blue}, PlotRange -> {-2, 2}]
sol2 = NDSolve[{y'[x] == (y[x])^2 - x, y[-2.0] == -1.6}, y, {x, -2, 2}];
s2 = Plot[y[x] /. sol2, {x, -2, 1.5}, PlotStyle -> {Thickness[0.008], Blue}, PlotRange -> {-2, 2}]
sol3 = NDSolve[{y'[x] == (y[x])^2 - x, y[0] == 0.6}, y, {x, -2, 2}];
s3 = Plot[y[x] /. sol3, {x, -2, 2}, PlotStyle -> {Thickness[0.008], Blue}, PlotRange -> {-2, 2}]
sol4 = NDSolve[{y'[x] == (y[x])^2 - x, y[-1.0] == -1.5}, y, {x, -2, 2}];
s4 = Plot[y[x] /. sol4, {x, -2, 2}, PlotStyle -> {Thickness[0.008], Blue}, PlotRange -> {-2, 2}]
Show[s1, s2, l1, l2, dfield, s3, s4, AspectRatio -> 1]
       The nullcline and phase portrait for the Riccati equation (7.1).            Mathematica code

Since the polynomial Δ = 4x for Eq.(7.1) is of first degree, this Riccati equation has no polynomial solution. However, Euler's substitution y = −u′/u reduces the given Riccati equation to a linear differential equation: \[ u'' - x\, u = 0 . \tag{7.2} \] This is the Airy equation, and its general solution is expressed through Airy functions (these functions are discussed in the APMA0340 course): \[ u(x) = c_1 \mbox{Ai}\left( x \right) + c_2 \mbox{Bi}\left( x \right) . \tag{7.3} \]

DSolve[u''[x] - x*u[x] == 0, u[x], x]
{{u[x] -> AiryAi[x] C[1] + AiryBi[x] C[2]}}
Mathematica is capable of solving the Riccati equation (7.1) directly, but it expresses the solution through Bessel functions of pure imaginary argument.
DSolve[y'[x] == (y[x])^2 - x, y[x], x]
The lengthy output is a ratio of Bessel functions of orders ±1/3, 2/3, and −4/3 in the argument \( \tfrac{2}{3}\, i\, x^{3/2} \).

Now we consider another Riccati equation \[ y' + \left( 2 x^2 -1 \right) y = y^2 + 4x . \tag{7.4} \] Then the polynomial \[ \Delta = \left( 2 x^2 -1 \right)^2 + 2 \left( 2x^2 - 1 \right)' - 4\cdot 4x = 4 x^4 - 4 x^2 - 8x +1 \] is of degree 4 (even). So we expect that the Riccati equation (7.4) may have a polynomial solution \[ y = a\,x^2 + b\,x + c \qquad \Longrightarrow \qquad y' = 2a\,x + b . \] Substituting into Eq.(7.4) and equating coefficients of like powers, we obtain the solution y = 2x² − 1, in agreement with the formula y = ½(p + ⌊√Δ⌋), since ⌊√Δ⌋ = 2x² − 1. Mathematica also knows the solution

DSolve[y'[x] + (2*x^2 - 1)*y[x] == (y[x])^2 + 4*x, y[x], x]
{{y[x] -> -1 + 2 x^2 + E^(-x + (2 x^3)/3)/( C[1] - Inactive[Integrate][ E^(-K[1] + (2 K[1]^3)/3), {K[1], 1, x}])}}
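A direct SymPy check that y = 2x² − 1 satisfies Eq.(7.4) (our own verification sketch):

```python
import sympy as sp

x = sp.symbols('x')
y = 2*x**2 - 1    # candidate polynomial solution of Eq.(7.4)

# Residual of y' + (2x^2 - 1) y = y^2 + 4x
check = sp.simplify(sp.diff(y, x) + (2*x**2 - 1)*y - y**2 - 4*x)
print(check)   # 0
```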
However, finding a polynomial solution may become problematic if the function f(x) = −x in Eq.(7.1) is replaced by another polynomial, say f(x) = x².    ■
End of Example 7

An equation of the first order and first degree \[ y' = A(x) + B(x)\, y^2 , \qquad B(x) \ne 0, \tag{2} \] is called the Riccati equation in normal form.

From the previous discussion, we know that with an appropriate substitution the coefficient B(x) can always be reduced to 1, which leads to the normal form

\[ y' = A(x) + y^2 . \tag{3} \]
Riccati himself was concerned with solutions to the so-called special equation with constant coefficients
\begin{equation} \label{EqRiccati.6} y' = a\,y^2 + b\,x^{n} , \end{equation}
where n, 𝑎, and b are real numbers. The special Riccati equation \eqref{EqRiccati.6} can be linearized by Euler's substitution \( y = -u' /(au) . \) This yields
\[ \frac{{\text d}^2 u}{{\text d} x^2} + \left( ab \right) x^n u = 0 . \]
Multiplying both sides of this linear equation by x², we obtain a non-standard Bessel equation, considered in the second course APMA0340:
\begin{equation*} %\label{EqRiccati.7} x^2 \frac{{\text d}^2 u}{{\text d} x^2} + \left( ab \right) x^{n+2} u = 0 , \qquad n\ne -2. \end{equation*}
When n + 2 = 0, Eq.\eqref{EqRiccati.6} admits a particular solution y = k/x. Substituting this function into the special Riccati equation, we obtain
\[ \frac{{\text d}}{{\text d} x} \left( \frac{k}{x} \right) = -\frac{k}{x^2} = a \left( \frac{k}{x} \right)^2 + \frac{b}{x^2} . \]
This leads to the quadratic equation in k:
\[ -k = a\,k^2 +b \qquad \iff \qquad k = \frac{-1\pm \sqrt{1- 4ab }}{2a} . \]
Solve[a*k^2 + k + b == 0, k]
{{k -> (-1 - Sqrt[1 - 4 a b])/(2 a)}, {k -> (-1 + Sqrt[1 - 4 a b])/( 2 a)}}
Then the general solution of Eq.\eqref{EqRiccati.6} for n = −2 becomes
\begin{equation} \label{EqRiccati.7} y = \frac{k}{x} - \frac{1}{\frac{ax}{2ak+1} + c\,x^{-2ak}} . \end{equation}
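For n = −2 the general solution can be verified on sample values (our own sketch; we take a = 1, b = −2, for which k = 1 satisfies −k = ak² + b):

```python
import sympy as sp

x, c = sp.symbols('x c')
a, b, k = 1, -2, 1        # sample values satisfying -k = a*k**2 + b

y = k/x - 1/(a*x/(2*a*k + 1) + c*x**(-2*a*k))
# Residual of y' = a y^2 + b x^(-2)
check = sp.simplify(sp.diff(y, x) - a*y**2 - b/x**2)
print(check)   # 0
```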
If n ≠ −2, the general solution of the linearized equation above can be expressed through Bessel functions:
\begin{equation} u(x) = \sqrt{x}\,\begin{cases} c_1 J_{1/2q} \left( \frac{\sqrt{ab}}{q} \, x^q \right) + c_2 Y_{1/2q} \left( \frac{\sqrt{ab}}{q} \, x^q \right) , & \quad \mbox{if } ab> 0, \\ c_1 I_{1/2q} \left( \frac{\sqrt{-ab}}{q} \, x^q \right) + c_2 K_{1/2q} \left( \frac{\sqrt{-ab}}{q} \, x^q \right) , & \quad \mbox{if } ab< 0, \end{cases} \label{EqRiccati.9} \end{equation}
where \( q= 1+ n/2 , \quad 2q = n+2 = \nu^{-1} , \ \) and Jν(t), Yν(t) are Bessel functions of the first and second kind, respectively, while Iν(t), Kν(t) are modified Bessel functions. Note that the general solution depends only on the ratio c₁/c₂ (if c₂ ≠ 0) or c₂/c₁ (if c₁ ≠ 0) of the two arbitrary constants c₁ and c₂.

Example 8: Consider the standard Riccati equation

\[ y' = x^2 + y^2 \tag{8.1} \]
Actually, Mathematica knows the solution:
DSolve[{y'[x] == x^2 + (y[x])^2, y[0] == a}, y[x], x]
{{y[x] -> (-a x^2 BesselJ[-(3/4), x^2/2] Gamma[1/4] + x^2 BesselJ[-(5/4), x^2/2] Gamma[3/4] + BesselJ[-(1/4), x^2/2] Gamma[3/4] - x^2 BesselJ[3/4, x^2/2] Gamma[3/ 4])/(x (a BesselJ[1/4, x^2/2] Gamma[1/4] - 2 BesselJ[-(1/4), x^2/2] Gamma[3/4]))}}
So the solution of Eq.(8.1) subject to the initial condition y(0) = 𝑎 is
\begin{equation*} %\label{EqRiccati.5a} y(x) = \frac{-a\,x^2 J_{-3/4} \left( x^2 /2 \right) \Gamma \left( \frac{1}{4} \right) + x^2 J_{-5/4} \left( x^2 /2 \right) \Gamma \left( \frac{3}{4} \right) + J_{-1/4} \left( x^2 /2 \right) \Gamma \left( \frac{3}{4} \right) - x^2 J_{3/4} \left( x^2 /2 \right) \Gamma \left( \frac{3}{4} \right)}{x \left[ a\, J_{1/4} \left( x^2 /2 \right) \Gamma \left( \frac{1}{4} \right) - 2\,J_{-1/4} \left( x^2 /2 \right) \Gamma \left( \frac{3}{4} \right) \right]} . \tag{8.2} \end{equation*}
Eq.(8.1) can be linearized with the aid of Euler's substitution \( y= -u' /u ; \) the differential equation for u has the general solution
\[ u(x) = \sqrt{x} \left[ C_1 J_{1/4} \left( x^2 /2 \right) + C_2 Y_{1/4} \left( x^2 /2 \right) \right] , \]
where C₁ and C₂ are arbitrary constants (not both zero). The solution of the Riccati equation subject to the homogeneous initial condition y(0) = 0 follows from Eq.(8.2) with 𝑎 = 0:
\begin{equation*} y(x) = x\,\frac{-Y_{-3/4} \left( \frac{x^2}{2} \right) + J_{-3/4} \left( \frac{x^2}{2} \right)}{Y_{1/4} \left( \frac{x^2}{2} \right) - J_{1/4} \left( \frac{x^2}{2} \right)} . %\label{Eq.riccati.7} \end{equation*}
In particular, the solution of the initial value problem \( y' = x^2 + y^2 , \quad y(0) = 0 \) admits two equivalent forms:
\begin{equation*} %\label{EqRiccati.6a} y(x) = x\,\frac{-Y_{-3/4} \left( \frac{x^2}{2} \right) + J_{-3/4} \left( \frac{x^2}{2} \right)}{Y_{1/4} \left( \frac{x^2}{2} \right) - J_{1/4} \left( \frac{x^2}{2} \right)} = x\,\frac{J_{3/4} \left( \frac{x^2}{2} \right)}{J_{-1/4} \left( \frac{x^2}{2} \right)} . \end{equation*}
This solution blows up at x ≈ 2.00315 where the denominator is zero.
sol = DSolveValue[{y'[x] == x^2 + (y[x])^2, y[0] == a}, y[x], x];
FindRoot[Denominator[sol /. a -> 0] == 0, {x, 2}]
{x -> 2.003147359426885}
The denominator of this solution has infinitely (countably) many roots, as is seen from its plot:
Plot[Evaluate[Denominator[sol /. a -> 0]], {x, 0, 10}, PlotTheme -> "Web"]
Denominator plot.
We plot a family of solutions of the standard Riccati equation under different initial conditions with the following Mathematica code (we name the parameter c because C is a protected built-in symbol):
soln = DSolve[{y'[x] == y[x]^2 + x^2, y[0] == c}, y[x], x]
hold = Part[soln, 1, 1, 2]
dansoln = FullSimplify[ hold /. {c -> 0} ]
danseries = FullSimplify[Series[dansoln, {x, 0, 15}]]
dansoln /. {x -> 0.5}
0.0417911
Plot[dansoln, {x, -5, 5}, PlotStyle -> {Thick, Blue}]
Now we test how Mathematica handles discontinuities (singular points) while solving initial value problems numerically. It turns out that the NDSolve command provides a very accurate approximation of the critical point:
x0 = Last[ Reap[NDSolve[{y'[x] == x^2 + (y[x])^2, y[0] == 0}, y, {x, 0, 3}, EvaluationMonitor :> Sow[x]]][[2, 1]]] // Quiet
2.0031471284132385
which agrees with the true blow-up point to about six decimal places.
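As an independent cross-check of this critical point, one can sidestep the blow-up with the substitution y = tan θ (an auxiliary device introduced here, not part of the Mathematica session above), which converts y' = x² + y² into the bounded equation θ' = x² cos²θ + sin²θ; the solution blows up exactly where θ reaches π/2. A classical RK4 march in Python then locates the singular point:

```python
import math

def theta_rhs(x, th):
    # substitution y = tan(theta) turns y' = x^2 + y^2 into the bounded ODE
    # theta' = x^2 cos(theta)^2 + sin(theta)^2; blow-up of y corresponds to
    # theta reaching pi/2
    c, s = math.cos(th), math.sin(th)
    return x * x * c * c + s * s

def blowup_x(h=1.0e-3):
    # classical RK4 march from y(0) = 0 (theta = 0) until theta crosses pi/2
    x, th = 0.0, 0.0
    x0, th0 = x, th
    while th < math.pi / 2:
        x0, th0 = x, th
        k1 = theta_rhs(x, th)
        k2 = theta_rhs(x + h / 2, th + h / 2 * k1)
        k3 = theta_rhs(x + h / 2, th + h / 2 * k2)
        k4 = theta_rhs(x + h, th + h * k3)
        th += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    # linear interpolation to theta = pi/2
    return x0 + h * (math.pi / 2 - th0) / (th - th0)

print(round(blowup_x(), 5))   # ≈ 2.00315
```

The result agrees with the FindRoot value above to several decimal places.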

Next, we plot the singular points for the family of solutions of the initial value problems

\[ y' = x^2 + y^2 , \qquad y(0) = a . \]
Each singular point x₀, at which the solution blows up, is paired with the corresponding initial condition, giving a set of points (𝑎, x₀) in the plane.
data = {#, Quiet[Last[ Reap[NDSolve[{y'[x] == x^2 + (y[x])^2, y[0] == #}, y, {x, 0, 10}, EvaluationMonitor :> Sow[x]]][[2, 1]]]]} & /@ Range[-3, 3, 0.05]
ListPlot[data]
ListPlot[{#, Quiet[Last[ Reap[NDSolve[{y'[x] == x^2 + (y[x])^2, y[0] == #}, y, {x, 0, 10}, EvaluationMonitor :> Sow[x]]][[2, 1]]]]} & /@ Range[-3, 3, 0.05]]
If # occupies the second place in each pair {...}, the order of the coordinates is swapped:
{Quiet[Last[ Reap[NDSolve[{y'[x] == x^2 + (y[x])^2, y[0] == #}, y, {x, 0, 10}, EvaluationMonitor :> Sow[x]]][[2, 1]]]], #} & /@ Range[-3, 3, 0.05]
ListPlot[{#} & /@ data, PlotStyle -> (If[#[[1]] < 0, Red, Green] & /@ data)]
     
Solution diagram.       Solution diagram with extra #.
ListLinePlot[{#, Quiet[Last[ Reap[NDSolve[{y'[x] == x^2 + (y[x])^2, y[0] == #}, y, {x, 0, 10}, EvaluationMonitor :> Sow[x]]][[2, 1]]]]} & /@ Range[-3, 3, 0.05], Filling -> Axis, FillingStyle -> Yellow]
Existence solution diagram.

 

The initial value problem \( y' = y^2 + x^2 , \quad y(0) =0 \) has two equivalent forms of its solution, the first of which is expressed through Bessel functions of the first kind only:
\begin{equation*} y(x) = x\,\frac{J_{3/4} \left( \frac{x^2}{2} \right)}{J_{-1/4} \left( \frac{x^2}{2} \right)} = x\,\frac{-Y_{-3/4} \left( \frac{x^2}{2} \right) + J_{-3/4} \left( \frac{x^2}{2} \right)}{Y_{1/4} \left( \frac{x^2}{2} \right) - J_{1/4} \left( \frac{x^2}{2} \right)}. \end{equation*}
To convince you that this is the same solution presented in another form, we do two things. First, we expand the above solution into a Maclaurin series and compare it with the true expansion. Second, we verify that our function satisfies the given Riccati equation and the initial condition; since the solution of such an initial value problem is unique, the two forms must coincide.

AsymptoticDSolveValue[{y'[x] == x^2 + (y[x])^2, y[0] == 0}, y[x], {x, 0, 16}]
\[ y(x) = \frac{1}{3}\, x^3 + \frac{1}{63}\, x^7 + \frac{2}{2079}\, x^{11} + \frac{13}{218295}\,x^{15} + O\left( x^{16} \right) \]
First, we define two solution-functions in Mathematica:
f[x_] = x*BesselJ[3/4, x^2 /2]/BesselJ[-1/4, x^2 /2];
g[x_] = x*(BesselJ[-3/4, x^2 /2] - BesselY[-3/4, x^2 /2])/(BesselY[1/4, x^2 /2] - BesselJ[1/4, x^2 /2]);
We check their power series:
N[Series[f[x], {x, 0, 16}]]
N[Series[g[x], {x, 0, 16}]]
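The Maclaurin coefficients above can also be generated without any Bessel functions: substituting y = Σₙ aₙxⁿ into y' = x² + y² and matching powers gives the recurrence (n + 1)aₙ₊₁ = [n = 2] + Σₖ₌₀ⁿ aₖaₙ₋ₖ with a₀ = 0. A short Python sketch with exact rational arithmetic reproduces the expansion:

```python
from fractions import Fraction

def riccati_series(N=16):
    # Taylor coefficients a_n of the solution of y' = x^2 + y^2, y(0) = 0,
    # from the recurrence (n + 1) a_{n+1} = [n == 2] + sum_k a_k a_{n-k}
    a = [Fraction(0)] * (N + 1)
    for n in range(N):
        s = sum(a[k] * a[n - k] for k in range(n + 1))
        if n == 2:
            s += 1
        a[n + 1] = s / (n + 1)
    return a

a = riccati_series()
print(a[3], a[7], a[11], a[15])   # 1/3 1/63 2/2079 13/218295
```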
Then we check whether these two functions are solutions of the Riccati equation (8.1):
FullSimplify[D[f[x], x] - x^2 - (f[x])^2]
FullSimplify[D[g[x], x] - x^2 - (g[x])^2]
Finally, we check the initial conditions
f[0]
Limit[g[x], x -> 0]

In order to determine where the solution blows up, we find the zero of the denominator:

FindRoot[BesselY[1/4, x^2 /2] - BesselJ[1/4, x^2 /2], {x, 2.0}]
{x -> 2.003147359426885}
FindRoot[BesselJ[-1/4, x^2 /2], {x, 2.0}]
{x -> 2.003147359426885}
Then we evaluate some numerical values:
N[f[0.75]-g[0.75]]
-2.77556*10^-17
which is numerically zero.

The solution of the Riccati equation \( \quad y' = y^2 + x^2 \) subject to the initial condition y(0) = 1 is

\[ y(x) = x\,\frac{\left( \pi - \Gamma^2 \left( \frac{3}{4} \right) \right) J_{-3/4} \left( \frac{x^2}{2} \right) - \Gamma^2 \left( \frac{3}{4} \right) Y_{-3/4} \left( \frac{x^2}{2} \right)}{\left( \pi - \Gamma^2 \left( \frac{3}{4} \right) \right) J_{1/4} \left( \frac{x^2}{2} \right) + \Gamma^2 \left( \frac{3}{4} \right) Y_{1/4} \left( \frac{x^2}{2} \right)} . \tag{8.3} \]
This solution blows up at x ≈ 0.969811 where the denominator is zero.
DSolve[{u'[x] == x^2 + (u[x])^2 , u[0] == 1}, u, x]
FindRoot[BesselJ[1/4, x^2/2] Gamma[1/4] - 2 BesselJ[-(1/4), x^2/2] Gamma[3/4] == 0, {x, 1}]
{x -> 0.969811}
The solution of the initial value problem
\[ y' = x^2 + y^2 , \qquad y(0) = -1 , \]
is
\[ y(x) = x\,\frac{\left( \pi + \Gamma^2 \left( \frac{3}{4} \right) \right) J_{-3/4} \left( \frac{x^2}{2} \right) - \Gamma^2 \left( \frac{3}{4} \right) Y_{-3/4} \left( \frac{x^2}{2} \right)}{\left( \pi + \Gamma^2 \left( \frac{3}{4} \right) \right) J_{1/4} \left( \frac{x^2}{2} \right) + \Gamma^2 \left( \frac{3}{4} \right) Y_{1/4} \left( \frac{x^2}{2} \right)} . \tag{8.4} \]
This solution blows up at x ≈ 4.1785 where the denominator is zero.
DSolve[{u'[x] == x^2 + (u[x])^2 , u[0] == -1}, u, x]
FindRoot[ BesselJ[1/4, x^2/2] Gamma[1/4] + 2 BesselJ[-(1/4), x^2/2] Gamma[3/4] == 0, {x, 4}]
{x -> 4.17851}
   ■
End of Example 8

 

We can find the limit of the solution to the special Riccati equation when x → 0:
\begin{align} \lim_{x\to 0} y(x) &= \frac{\sqrt{ab}}{8\,\Gamma^2 \left( \frac{1}{4} \right) \Gamma \left( \frac{5}{4} \right)} \left[ \sqrt{2}\, \Gamma \left( \frac{1}{4} \right) \left( \Gamma \left( -\frac{1}{4} \right) - 4\,\Gamma \left( \frac{3}{4} \right) \right) \Gamma \left( \frac{5}{4} \right) -2k\pi \left( \Gamma \left( \frac{1}{4} \right) + 4\, \Gamma \left( \frac{5}{4} \right)\right) \right] \notag \\ &= -\frac{\sqrt{ab}}{\pi} \,\Gamma^2 \left( \frac{3}{4} \right) \left( 1+ k\right) \approx -0.477989 \left( 1 + k \right) \sqrt{ab} . \label{Eq.riccati.6} \end{align}

Example 9: We consider a problem similar to the previous example that involves another standard Riccati equation

\[ y' = x^2 - y^2 . \tag{9.1} \]
Its solution is expressed as \( y= u' /u , \) where
\[ u(x) = \sqrt{x} \left[ C_1 I_{1/4} \left( x^2 /2 \right) + C_2 K_{1/4} \left( x^2 /2 \right) \right] \]
and C₁, C₂ are arbitrary constants (not both zero).

When we impose the homogeneous initial condition y(0) = 0, the solution becomes
\[ y(x) = x\,\frac{I_{-3/4} \left( \frac{x^2}{2} \right) \pi \sqrt{2} - 2\,K_{3/4} \left( \frac{x^2}{2} \right)}{\pi \sqrt{2}\,I_{1/4} \left( \frac{x^2}{2} \right) + 2\, K_{1/4} \left( \frac{x^2}{2} \right)} . \tag{9.2} \]
The initial value problem
\[ y' = x^2 - y^2 , \qquad y(0) =1 , \tag{9.3} \]
has the solution
\[ y(x) = x\,\frac{\left( \pi + \Gamma^2 \left( \frac{3}{4} \right) \sqrt{2} \right) I_{-3/4} \left( \frac{x^2}{2} \right) \pi - 2\, \Gamma^2 \left( \frac{3}{4} \right) K_{3/4} \left( \frac{x^2}{2} \right)}{\left( \pi + \Gamma^2 \left( \frac{3}{4} \right) \sqrt{2} \right) I_{1/4} \left( \frac{x^2}{2} \right) \pi + 2\,\Gamma^2 \left( \frac{3}{4} \right) K_{1/4} \left( \frac{x^2}{2} \right)} . \tag{9.4} \]
      We plot the separatrix (in red) for the Riccati equation \( y' = x^2 - y^2 \) that separates solutions approaching +∞ from solutions that go to -∞.
sp = StreamPlot[{1, x^2 - y^2}, {x, -2, 2}, {y, -2, 1}, StreamScale -> Medium, LabelStyle -> {FontSize -> 18, Black}];
curve = NDSolve[{y'[x] == x^2 - (y[x])^2, y[0] == -0.7134}, y, {x, -2, 2}];
cp = Plot[Evaluate[y[x] /. curve], {x, -2, 2}, PlotStyle -> {Thickness[0.015], Red}];
Show[sp, cp]
       Separatrix for the Riccati equation (9.1).            Mathematica code

If we consider a similar Riccati equation
\[ y' = y^2 - x^2 \tag{2.5} \]
its direction field is very similar. The solution of Eq.(2.5) subject to the homogeneous initial condition is
\[ y = x\, \frac{I_{-3/4} \left( \frac{x^2}{2} \right) \pi\sqrt{2} - 2\, K_{3/4} \left( \frac{x^2}{2} \right)}{\pi\sqrt{2}\,I_{1/4} \left( \frac{x^2}{2} \right) + 2\, K_{1/4} \left( \frac{x^2}{2} \right)} , \qquad y(0) = 0. \tag{9.6} \]
However, for another initial condition, its solution becomes
\[ y = x\, \frac{I_{-3/4} \left( \frac{x^2}{2} \right) \pi \left( \sqrt{2}\,\Gamma^2 \left( \frac{3}{4} \right) + \pi \right) - 2\, K_{3/4} \left( \frac{x^2}{2} \right) \Gamma \left( \frac{3}{4} \right)}{\pi \left( \sqrt{2}\,\Gamma^2 \left( \frac{3}{4} \right) + \pi \right) I_{1/4} \left( \frac{x^2}{2} \right) + 2\, K_{1/4} \left( \frac{x^2}{2} \right) \Gamma \left( \frac{3}{4} \right)} , \qquad y(0) = 1. \tag{9.7} \]
      We plot two nullclines (in black) for the Riccati equation \( y' = y^2 - x^2 \) and some typical solutions.
dfield = VectorPlot[{1, y^2 - x^2}, {x, -2, 2}, {y, -2, 2}, Axes -> True, VectorScale -> {Small, Automatic, None}, AxesLabel -> {"x", "dydx=y^2 - x^2"}]
l1 = Plot[x, {x, -2, 2}, PlotStyle -> {Thickness[0.01], Black}];
l2 = Plot[-x, {x, -2, 2}, PlotStyle -> {Thickness[0.01], Black}];
sol1 = NDSolve[{y'[x] == (y[x])^2 - x^2, y[-1.5] == -1.5}, y, {x, -2, 2}]
s1 = Plot[y[x] /. sol1, {x, -2, 2}, PlotStyle -> {Thickness[0.008], Blue}, PlotRange -> {-2, 2}]
sol2 = NDSolve[{y'[x] == (y[x])^2 - x^2, y[-1.0] == 1.6}, y, {x, -2, 2}]
s2 = Plot[y[x] /. sol2, {x, -2, 2}, PlotStyle -> {Thickness[0.008], Blue}, PlotRange -> {-2, 2}]
sol3 = NDSolve[{y'[x] == (y[x])^2 - x^2, y[0] == 0.6}, y, {x, -2, 2}]
s3 = Plot[y[x] /. sol3, {x, -2, 2}, PlotStyle -> {Thickness[0.008], Blue}, PlotRange -> {-2, 2}]
sol4 = NDSolve[{y'[x] == (y[x])^2 - x^2, y[-1.0] == -1.0}, y, {x, -2, 2}]
s4 = Plot[y[x] /. sol4, {x, -2, 2}, PlotStyle -> {Thickness[0.008], Blue}, PlotRange -> {-2, 2}]
Show[s1, s2, l1, l2, dfield, s3, s4, AspectRatio -> 1]
       Two nullclines for the Riccati equation (2.5).            Mathematica code

   ■
End of Example 9
Theorem 3 (Bernoulli--Liouville). The special Riccati equation \( y' = a\,y^2 + b\, x^{n} \) can be integrated in closed form via elementary functions if and only if
\begin{equation} \label{EqRiccati.11} n = \frac{4k-4}{1-2k} \qquad \mbox{for} \quad k \in \mathbb{Z} = \left\{ 0 , \pm 1, \pm 2, \ldots \right\} . \end{equation}
From Eq.(9), we know that the general solution of the special Riccati equation (7) is expressed through Bessel functions (of the first and second kind) of index ν = 1/(n + 2). From the corresponding section of the second course APMA0340, we know that Bessel functions reduce to elementary functions only when their index is a half-integer. Solving the equation 1/(n + 2) = k − 1/2 for n, we obtain the required formula.
Solve[1/(n + 2) == k-1/2, n]
{{n -> -((4 (-1 + k))/(-1 + 2 k))}}
   ▣
In the same volume of Acta Eruditorum that contained Riccati's famous equation, an article by the 24-year-old Daniel Bernoulli was published, in which he wrote that all three members of the Bernoulli family (Nicolas, Johann, and himself) had independently found conditions on the parameter n under which the special Riccati equation \eqref{EqRiccati.6} is integrable in elementary functions:
\[ \frac{n}{2n + 4} \quad \mbox{is an integer}. \tag{11a} \]
This formula is actually equivalent to Eq.(11). When k → ∞ in Eq.(11), we obtain the value n = −2, for which the special Riccati equation \eqref{EqRiccati.6} is also integrable in elementary functions. In 1841, J. Liouville proved that the special Riccati equation \eqref{EqRiccati.6} is not integrable in elementary functions for any other value of the parameter n. ▣    
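The equivalence of conditions (11) and (11a) can be checked directly: substituting n = (4k − 4)/(1 − 2k) gives n/(2n + 4) = 1 − k, an integer. A quick verification in Python with exact rational arithmetic (a sketch added here for illustration):

```python
from fractions import Fraction

def liouville_n(k):
    # Eq.(11): n = (4k - 4)/(1 - 2k)
    return Fraction(4 * k - 4, 1 - 2 * k)

# Daniel Bernoulli's condition (11a): n/(2n + 4) is an integer;
# with n from Eq.(11) one finds n/(2n + 4) = 1 - k exactly
for k in range(-20, 21):
    n = liouville_n(k)
    assert n / (2 * n + 4) == 1 - k
print("conditions (11) and (11a) agree")
```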
Example 10: We check the conclusion of Theorem 3 with a series of particular cases. In order to simplify our job, we employ Mathematica.

Setting k = 0 in Eq.(11), we get n = −4. Then the special Riccati equation becomes \[ y' = \frac{{\text d}y}{{\text d}x} = y^2 + \frac{4}{x^4} . \tag{10.1} \]

DSolve[y'[x] == (y[x])^2 + 4/x^4 , y[x], x];
Simplify[%]
{{y[x] -> (-((x + 2 C[1]) Cos[2/x]) + (-2 + x C[1]) Sin[2/x])/( x^2 (Cos[2/x] - C[1] Sin[2/x]))}}
So the general solution of Eq.(10.1) is \[ y = \frac{\left( x\,c -2 \right) \sin \left( \frac{2}{x} \right) - \left( x + 2c \right) \cos \left( \frac{2}{x} \right)}{x^2 \cos \left( \frac{2}{x} \right) - x^2 c\,\sin \left( \frac{2}{x} \right)} . \]
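As a sanity check of this closed form (a Python sketch added here; the constant c = 0.3 and the sample points are arbitrary choices), a central finite difference of the solution should reproduce y² + 4/x⁴:

```python
import math

def y(x, c=0.3):
    # general solution of y' = y^2 + 4/x^4 quoted above (c is the constant)
    s, co = math.sin(2 / x), math.cos(2 / x)
    return ((x * c - 2) * s - (x + 2 * c) * co) / (x * x * (co - c * s))

def residual(x, c=0.3, h=1.0e-6):
    # central-difference derivative minus the right-hand side of the ODE
    dy = (y(x + h, c) - y(x - h, c)) / (2 * h)
    return dy - (y(x, c) ** 2 + 4 / x ** 4)

for x in (0.8, 1.0, 2.0):
    assert abs(residual(x)) < 1e-5
print("closed form satisfies the ODE at the sample points")
```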
mysol[x_, a_, b_, n_] := Module[{k = 1 + n/2, w, y},
  w = Sqrt[x]*(C[1]*BesselJ[1/(2*k), 1/k*Sqrt[a*b]*x^k] + C[2]*BesselY[1/(2*k), 1/k*Sqrt[a*b]*x^k]);
  y = Simplify[-D[w, x]/(a*w)];
  y = y /. C[1] -> 1;
  y];
a = 1; b = 1; n = -4;
ode = y'[x] == a*y[x]^2 + b*x^n;
sol = mysol[x, a, b, n];
ode /. y -> Function[{x}, Evaluate[sol]]
sol = mysol[x, 1, 4, -4]

However, if you try to solve \[ y' = y^2 + \frac{1}{x^2} , \] Mathematica will provide you with a complex-valued answer: \[ y = \frac{{\bf j} \left( - x^{{\bf j}\sqrt{3}} - 3\,c \right) \sqrt{3} + 3\,x^{{\bf j}\sqrt{3}} + 3\,c}{2 \left( {\bf j}\,x^{{\bf j} \sqrt{3}} \sqrt{3} - 3\,c \right) x} , \] where c is an arbitrary constant and j is the unit imaginary vector on the complex plane ℂ, so j² = −1. The complex-valued solution is obtained from formula (8) because \( \displaystyle \quad k = - \frac{1}{2} + {\bf j} \,\frac{\sqrt{3}}{2} \quad \) is a solution of the quadratic equation k² + k + 1 = 0.

DSolve[y'[x] == (y[x])^2 + 1/x^2, y[x], x]
{{y[x] -> (-1 + I Sqrt[3] (-1 + (2 C[1])/(x^(I Sqrt[3]) + C[1])))/( 2 x)}}
When n = 0, we get a separable equation \[ y' = y^2 +1 \qquad \Longrightarrow \qquad y = \tan \left( x + c \right) , \] where c is an arbitrary constant.
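This elementary case is easily confirmed numerically as well (a minimal Python check; the value c = 0.4 is an arbitrary choice), since d/dx tan(x + c) = 1 + tan²(x + c):

```python
import math

def residual(x, c=0.4, h=1.0e-6):
    # central difference of tan(x + c) minus (1 + tan(x + c)^2)
    dy = (math.tan(x + c + h) - math.tan(x + c - h)) / (2 * h)
    return dy - (1 + math.tan(x + c) ** 2)

for x in (0.0, 0.5, 0.9):
    assert abs(residual(x)) < 1e-6
print("y = tan(x + c) solves y' = y^2 + 1")
```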

Now we set k = 2 in Eq.(11), so that n = −4/3, and we have \[ y' = \frac{{\text d}y}{{\text d}x} = y^2 + \frac{1}{x^{4/3}} . \tag{10.2} \] Its solution is \[ y = x^{-1/3} \,\frac{3\,\sin \left( 3\,x^{1/3} \right) - 3c\,\cos \left( 3\,x^{1/3} \right)}{\left( 3 x^{1/3} + c \right) \cos \left( 3\,x^{1/3} \right) + \left( 3 x^{1/3} c -1 \right) \sin \left( 3\,x^{1/3} \right)} . \]

DSolve[y'[x] == (y[x])^2 + 1/x^(4/3), y[x], x]
{{y[x] -> (-3 C[1] Cos[3 x^(1/3)] + 3 Sin[3 x^(1/3)])/( x^(1/3) ((3 x^(1/3) + C[1]) Cos[ 3 x^(1/3)] + (-1 + 3 x^(1/3) C[1]) Sin[3 x^(1/3)]))}}

If we set k = −1 in Eq.(11), then n = −8/3, and we have \[ y' = \frac{{\text d}y}{{\text d}x} = y^2 + \frac{1}{x^{8/3}} . \tag{10.3} \] Its solution is \[ y = x^{-4/3}\,\frac{\left( 3 - x^{2/3} + 3 x^{1/3} c \right) \cos \left( 3\,x^{-1/3} \right) - \left( 3 x^{1/3} -3c + x^{2/3} c \right) \sin \left( 3\,x^{-1/3} \right)}{\left( x^{1/3} -3c \right) \cos \left( 3\,x^{-1/3} \right) + \left( 3 + x^{1/3} c \right) \sin \left( 3\,x^{-1/3} \right)} . \]

DSolve[y'[x] == (y[x])^2 + 1/x^(8/3), y[x], x]
{{y[x] -> ((3 - x^(2/3) + 3 x^(1/3) C[1]) Cos[3/x^( 1/3)] - (3 x^(1/3) + (-3 + x^(2/3)) C[1]) Sin[3/x^(1/3)])/( x^(4/3) ((x^(1/3) - 3 C[1]) Cos[3/x^( 1/3)] + (3 + x^(1/3) C[1]) Sin[3/x^(1/3)]))}}

When k = 3, so n = −8/5, we have \[ y' = \frac{{\text d}y}{{\text d}x} = y^2 + \frac{1}{x^{8/5}} . \tag{10.4} \] Its solution is \[ y = 5\,\frac{\left( c - 5\,x^{1/5} \right) \cos \left( 5\,x^{1/5} \right) + \left( 1 + 5\,x^{1/5} c \right) \sin \left( 5\,x^{1/5} \right)}{\left( 15 x^{4/5} -3 x^{3/5} c +25 x\,c \right) \cos \left( 5\,x^{1/5} \right) + x^{3/5} \left( 25 x^{2/5} -3 - 15 x^{1/5} c \right) \sin \left( 5\,x^{1/5} \right)} . \]

DSolve[y'[x] == (y[x])^2 + 1/x^(8/5), y[x], x]
{{y[x] -> (5 ((-5 x^(1/5) + C[1]) Cos[ 5 x^(1/5)] + (1 + 5 x^(1/5) C[1]) Sin[5 x^(1/5)]))/((15 x^( 4/5) - 3 x^(3/5) C[1] + 25 x C[1]) Cos[5 x^(1/5)] + x^(3/5) (-3 + 25 x^(2/5) - 15 x^(1/5) C[1]) Sin[5 x^(1/5)])}}
If k = −2, so n = −12/5, we have \[ y' = \frac{{\text d}y}{{\text d}x} = y^2 + \frac{1}{x^{12/5}} . \tag{10.5} \] Its solution is \[ y = x^{-6/5}\,\frac{\left( 25\,c - 3 x^{1/5} \left( x^{2/5} -10 + 5 x^{1/5} c \right) \right) \cos \left( 5\,x^{-1/5} \right) + \left( 25 -15 x^{2/5} + 3 \left( x^{2/5} -10 \right) x^{1/5} c \right) \sin \left( 5\,x^{-1/5} \right)}{\left( 3 x^{2/5} -25 + 15 x^{1/5} c \right) \cos \left( 5\,x^{-1/5} \right) + \left( 15 x^{1/5} + 25 \,c -3 x^{2/5} c \right) \sin \left( 5\,x^{-1/5} \right)} . \]
DSolve[y'[x] == (y[x])^2 + 1/x^(12/5), y[x], x]
{{y[x] -> ((25 C[1] - 3 x^(1/5) (-10 + x^(2/5) + 5 x^(1/5) C[1])) Cos[ 5/x^(1/5)] + (25 - 15 x^(2/5) + 3 (-10 + x^(2/5)) x^(1/5) C[1]) Sin[5/x^(1/5)])/(x^( 6/5) ((-25 + 3 x^(2/5) + 15 x^(1/5) C[1]) Cos[5/x^( 1/5)] + (15 x^(1/5) + 25 C[1] - 3 x^(2/5) C[1]) Sin[5/x^( 1/5)]))}}

A similar situation is observed when one of the coefficients (𝑎 or b, or both, in Eq.(7)) is negative. For instance, the special Riccati equation \[ y' = \frac{{\text d}y}{{\text d}x} = y^2 - \frac{1}{x^{8/3}} \tag{10.6} \] has the general solution \[ y = x^{-4/3}\, \frac{\left( 3 x^{1/3} c + 3{\bf j} + x^{2/3} {\bf j} \right) \cosh \left( 3 x^{-1/3} \right) - \left( 3\,{\bf j} x^{1/3} + \left( 3 + x^{2/3} \right) c \right) \sinh \left( 3 x^{-1/3} \right) }{\left( 3\,{\bf j} + x^{1/3} c \right) \sinh \left( 3 x^{-1/3} \right) - \left( 3\,c + {\bf j}\,x^{1/3} \right) \cosh \left( 3 x^{-1/3} \right) } , \] where j is the unit imaginary vector on the complex plane ℂ, so j² = −1.

DSolve[y'[x] == (y[x])^2 - 1/x^(8/3), y[x], x]
{{y[x] -> ((I (3 + x^(2/3)) + 3 x^(1/3) C[1]) Cosh[3/x^( 1/3)] - (3 I x^(1/3) + (3 + x^(2/3)) C[1]) Sinh[3/x^( 1/3)])/(x^( 4/3) ((-I x^(1/3) - 3 C[1]) Cosh[3/x^( 1/3)] + (3 I + x^(1/3) C[1]) Sinh[3/x^(1/3)]))}}
   ■
End of Example 10

Example 11: We consider some modifications of the standard Riccati equation. In particular, we start with the following two:

\[ y' = x^2 +y^2 -xy \tag{5.1} \]

and

\[ y' = x^2 +y^2 +xy . \tag{5.2} \]

First, we plot phase portraits for these equations:

StreamPlot[{1, x^2 + y^2 - x*y}, {x, -2, 2}, {y, -2, 2}, StreamColorFunction -> "Rainbow", StreamPoints -> 42, StreamScale -> {Full, All, 0.04}]
StreamPlot[{1, x^2 + y^2 + x*y}, {x, -2, 2}, {y, -2, 2}, StreamColorFunction -> "Rainbow", StreamPoints -> 42, StreamScale -> {Full, All, 0.04}]
      
Phase portrait for y' = x² + y² −xy.        Phase portrait for y' = x² + y² + xy.

Next, we plot the existence diagram formed by pairs of points (b, 𝑎), where y(0) = b is the initial value and x = 𝑎 is the singular point at which the solution of the initial value problem
\[ y' = x^2 +y^2 \pm xy , \qquad y(0) = b, \tag{5.3} \]
blows up.
ListPlot[{#, Quiet[Last[ Reap[NDSolve[{y'[x] == x^2 + (y[x])^2 - x*y[x], y[0] == #}, y, {x, 0, 3}, EvaluationMonitor :> Sow[x]]][[2, 1]]]]} & /@ Range[-3, 3, 0.05]]
For Eq.(5.2), we use the Table command:
values = Range[-3, 3, 0.05]
solution = Table[ {i, Quiet[ Last[Reap[ NDSolve[{y'[x] == x^2 + (y[x])^2 + x*y[x], y[0] == i}, y, {x, 0, 3}, EvaluationMonitor :> Sow[x]]][[2, 1]]]]}, {i, values} ]
ListPlot[solution]
      
Existence diagram for y' = x² + y² −xy.        Existence diagram for y' = x² + y² + xy.
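The blow-up computation behind these diagrams can be cross-checked outside Mathematica. The Python sketch below (the substitution y = tan θ is an auxiliary device introduced here) estimates the blow-up point of Eq.(5.3) for either sign of the xy term; by the comparison theorem, the +xy term hastens blow-up while the −xy term delays it:

```python
import math

def blowup(b, s, h=1.0e-3, xmax=10.0):
    # y' = x^2 + y^2 + s*x*y, y(0) = b (s = +1 or -1).  With y = tan(theta)
    # the equation becomes the bounded ODE
    #   theta' = x^2 cos^2 + sin^2 + s*x*sin*cos,
    # and blow-up of y is the crossing of theta = pi/2; march with RK4.
    def f(x, t):
        c, sn = math.cos(t), math.sin(t)
        return x * x * c * c + sn * sn + s * x * sn * c
    x, th = 0.0, math.atan(b)
    while th < math.pi / 2 and x < xmax:
        k1 = f(x, th)
        k2 = f(x + h / 2, th + h / 2 * k1)
        k3 = f(x + h / 2, th + h / 2 * k2)
        k4 = f(x + h, th + h * k3)
        th += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return x

# the +xy term speeds up blow-up, the -xy term delays it
print(blowup(0.0, +1), blowup(0.0, -1))
```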

   ■
End of Example 11

Applications


Because the calculus of variations and optimal control theory serve as the main source for obtaining and interpreting the differential (matrix) Riccati equation, our first example must be devoted to this topic.

Classical calculus of variations originated in 1696 when the famous troublemaker and genius Johann Bernoulli (1667--1748) posed a mathematical challenge. It is known as the brachistochrone problem: what shape should a curve between two points at different elevations have so that a frictionless object sliding down it under gravity reaches the lower point in the shortest possible time? Although the brachistochrone problem is discussed in detail in a section of the fourth part of this tutorial, it serves as a motivation for the calculus of variations.

Brachistochrone compared to other curves.

Before we formulate optimization problems, it is convenient to introduce the following well-known notation: ℭm([t₀, t₁], ℝn) denotes the vector space of all continuous vector-valued functions having m continuous derivatives (in our case, m = 2); when f : [t₀, t₁] → ℝn is such a smooth function, we abbreviate this as f ∈ ℭm (for simplicity). However, for optimization problems, we use its subset ℳ ⊂ ℭm of functions satisfying the fixed boundary conditions

\[ f \left( t_0 \right) = \mathbf{a} , \qquad f \left( t_1 \right) = \mathbf{b} , \]
where a and b are two given points in ℝn. With this assumption, ℳ is not a vector space. We also use the notation ℭm0 for the vector space of all m times differentiable functions that vanish at the endpoints: f(t₀) = f(t₁) = 0.

In general, an optimization problem consists of maximizing or minimizing an integral expression known as an action functional

\begin{equation} \label{EqRiccati.12a} J(\mathbf{x}) = \int_{t_0}^{t_1} L \left( t, \mathbf{x}(t), \dot{\mathbf{x}}(t) \right) {\text d}t , \end{equation}
where L ∈ ℭ²([t₀, t₁], ℝn, ℝn) is a twice continuously differentiable function. Although the three variables (t, x, v) in L are independent, they are coupled together because they are evaluated at the same time in the action functional. In mechanics, L is referred to as the Lagrangian (L = K − Π, the difference of the kinetic and potential energies). When the row vector x is given in generalized coordinates as a contravariant vector in ℝn, x(t) = [q¹, q², … , qn], the action functional is written as
\[ J(\mathbf{x}) = \int_{t_0}^{t_1} L \left( t, q^1 (t), q^2 (t) , \ldots , q^n (t), \dot{q}^1 (t), \dot{q}^2 (t) , \ldots , \dot{q}^n (t) \right) {\text d}t \tag{12} \]
We say that a weak minimum is attained at a curve x*(·) ∈ ℳ if there exists δ > 0 such that for any x(·) ∈ ℳ ⊂ ℭ¹([t₀, t₁]) with ∥x(·) − x*(·)∥ < δ, x(t₀) = a and x(t₁) = b, the inequality J(x(·)) ≥ J(x*(·)) holds.

A function x*(·) ∈ ℳ is a global minimizer if J(x*) ≤ J(x) for all x ∈ ℳ. If ≤ is replaced by <, it is called a strict minimizer.

Let f : Ω → ℝ be a smooth function (having at least three continuous derivatives), where Ω ⊆ ℝn. For two given (contravariant) vectors x = [x¹, x², … , xn] and η = [η¹, η², … , ηn], Taylor's multivariable theorem implies, for ε near zero,

\begin{align*} f\left( \mathbf{x} + \varepsilon\,\eta \right) &= f\left( \mathbf{x} \right) + \varepsilon \left( f_{x^1}\left( \mathbf{x} \right) \eta^1 + f_{x^2}\left( \mathbf{x} \right) \eta^2 + \cdots + f_{x^n}\left( \mathbf{x} \right) \eta^n \right) \\ &\quad + \frac{\varepsilon^2}{2} \left( f_{x^1 x^1} \left( \mathbf{x} \right) \left( \eta^1 \right)^2 + 2 \,f_{x^1 x^2} \left( \mathbf{x} \right) \eta^1 \eta^2 + \cdots + f_{x^n x^n} \left( \mathbf{x} \right) \left( \eta^n \right)^2 \right) + O \left( \varepsilon^3 \right) \\ &= f\left( \mathbf{x} \right) + \varepsilon \,\nabla f \bullet \eta + \frac{\varepsilon^2}{2} \left\langle \eta \vert {\bf H} \vert \eta \right\rangle + \cdots , \end{align*}
where ∇ is the gradient operator, "•" is the scalar product, and H is the (symmetric) Hessian matrix of f,
\[ \mathbf{H} = \begin{bmatrix} \partial^2_{1,1}f & \partial^2_{1,2}f & \cdots & \partial^2_{1,n} f \\ \partial^2_{2,1} f & \partial^2_{2,2}f & \cdots & \partial^2_{2,n}f \\ \vdots & \vdots & \ddots & \vdots \\ \partial^2_{n,1}f & \partial^2_{n,2} f & \cdots & \partial^2_{n,n} f \end{bmatrix} . \]
As usual, we denote partial derivatives with subscripts: \( \displaystyle \quad \partial_i f = f_{x^i} = \frac{\partial f}{\partial x^i} \quad\mbox{and} \quad \partial_{i,j}f = f_{x^i x^j} = \frac{\partial^2 f}{\partial x^i \partial x^j} . \quad \) Since the Hessian is a symmetric matrix, it does not matter how you evaluate the inner product (whether you multiply by the Hessian matrix from the left or from the right before taking the inner product).

If x* is a critical point of f, then ∇f(x*) = 0 and we have

\[ f\left( \mathbf{x}^{\ast} + \varepsilon\,\eta \right) = f\left( \mathbf{x}^{\ast} \right) + \frac{\varepsilon^2}{2} \left\langle \eta \vert {\bf H} \vert \eta \right\rangle + O \left( \varepsilon^3 \right) . \]
In particular, if x* is a local minimum, then the inner product ⟨η|H|η⟩ > 0 for η ≠ 0 (in this case, we say that the matrix H is positive definite). In the two-dimensional case this condition is equivalent to
\[ f_{1,1}\left( \mathbf{x}^{\ast} \right) f_{2,2}\left( \mathbf{x}^{\ast} \right) - f_{1,2}^2 \left( \mathbf{x}^{\ast} \right) > 0 \quad\mbox{and} \quad f_{1,1}\left( \mathbf{x}^{\ast} \right) > 0 . \]
This condition reminds us of the second derivative test from multivariable calculus. It is natural to expect something similar in the calculus of variations.
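As a toy illustration (an example added here, not from the text): the function f(x, y) = x² + xy + y² has f₁₁ = 2, f₁₂ = 1, f₂₂ = 2 at the origin, so f₁₁f₂₂ − f₁₂² = 3 > 0 and f₁₁ > 0, and the origin is a local minimum. A minimal sketch of the test:

```python
def hessian_2d_test(f11, f12, f22):
    # second derivative test for a critical point of f(x, y)
    det = f11 * f22 - f12 ** 2   # determinant of the Hessian
    if det > 0:
        return "minimum" if f11 > 0 else "maximum"
    return "saddle" if det < 0 else "inconclusive"

# f(x, y) = x^2 + x*y + y^2 has f_xx = 2, f_xy = 1, f_yy = 2 at the origin
print(hessian_2d_test(2, 1, 2))   # minimum
```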

Let us consider the variational problem of minimization of the action functional

\[ J(\mathbf{x} ) = \int_{t_0}^{t_1} L \left( t, \mathbf{x}, \dot{\mathbf{x}} \right) {\text d} t \]
among differentiable functions subject to fixed boundary conditions x(t₀) = a and x(t₁) = b.

We consider an infinitesimal deformation of the extremal x* ∈ ℳ

\[ q^i = q^i_0 + \varepsilon\,\eta^i , \qquad \left[ \eta^1 , \ldots , \eta^n \right] \in ℭ^2_0 \left( \left[ t_0 , t_1 \right] , \mathbb{R}^n \right) , \]
where ε is a small parameter and η = [η¹, η², … , ηn] ∈ ℭ20([t₀, t₁], ℝn) is a test function vanishing at end points, η(t₀) = η(t₁) = 0. For velocities we have the corresponding relation
\[ \dot{q}^i = \dot{q}^i_0 + \varepsilon\,\dot{\eta}^i \qquad (i = 1, 2, \ldots , n) . \]
We consider a scalar function ϕ(ε) obtained by substituting the deformed curves into the action functional
\begin{equation} \label{EqRiccati.13} \phi (\varepsilon ) = \int_{t_0}^{t_1} L \left[ t, \mathbf{x}^{\ast} + \varepsilon\,\eta , \dot{\mathbf{x}}^{\ast} + \varepsilon\,\dot{\eta} \right] {\text d} t . \end{equation}
We are then back to a calculus problem: x* ∈ ℳ is a local minimizer of J(·) if and only if 0 is a local minimum of ϕ(·), because J(x*) = ϕ(0). Note that the theorem on differentiability of an integral with respect to a parameter and the assumption L ∈ ℭ2 imply that the function ϕ(ε) is twice differentiable in ε. Expanding ϕ(·) into a truncated Taylor series, we obtain
\[ \phi (\varepsilon ) = J\left( \mathbf{x}^{\ast} \right) + \varepsilon\,\delta J + \frac{\varepsilon^2}{2}\,\delta^2 J + \cdots , \]
where
\[ \delta J = \left. \frac{{\text d}}{{\text d} \varepsilon} \,\phi (\varepsilon ) \right\vert_{\varepsilon = 0} = \left. \frac{{\text d}}{{\text d} \varepsilon} \,J \left( \mathbf{x}^{\ast} + \varepsilon\,\eta \right) \right\vert_{\varepsilon = 0} \]
is called the first order variation of the action functional \eqref{EqRiccati.12a} and
\[ \delta^2 J = \left. \frac{{\text d}^2}{{\text d} \varepsilon^2} \,\phi (\varepsilon ) \right\vert_{\varepsilon = 0} = \left. \frac{{\text d}^2}{{\text d} \varepsilon^2} \,J \left( \mathbf{x}^{\ast} + \varepsilon\,\eta \right) \right\vert_{\varepsilon = 0} \]
is its second order variation. The action J is extremal if δJ = 0 for any deformation η ∈ ℭ20([t₀, t₁], ℝn) that keeps the endpoints fixed. So the optimality condition for an extremal curve x* of J is δJ = 0. In order to extract the conditions on the Lagrangian L(t, x, v) that guarantee that x* is an extremal of the action, we need the following lemma, proved by the German physiologist Emil Heinrich du Bois-Reymond (1818--1896) in 1879.
Lemma 2 (du Bois-Reymond): Let \( \displaystyle \quad a(t), \ b(t), b'(t) \in ℭ([t_0 , t_1 ], \mathbb{R}^n ) , \quad \) and let the condition \[ \int_{t_0}^{t_1} \left[ a(t)\,h(t) + b(t)\, \dot{h} (t) \right] {\text d}t = 0 \] hold for any test function h ∈ ℭ10([t₀, t₁], ℝn). Then the function b(t) satisfies the equation \[ \frac{\text d}{{\text d}t}\, b(t) = a(t) . \]
   
Let A(t) denote an arbitrary primitive of the function 𝑎(t), i.e., \( \displaystyle \quad A(t) = \int_{t_0}^t a( \tau )\,{\text d}\tau + K, \quad \) for some constant K. Integrating the first summand by parts and taking into account that h vanishes at the endpoints, we obtain \[ \int_{t_0}^{t_1} \left[ -A(t) + b(t) \right] \dot{h}(t)\,{\text d}t = 0 \tag{E.1} \] (for any choice of the constant K). We now choose h(t) such that \[ h(t) = \int_{t_0}^t \left[ -A(\tau ) + b(\tau ) \right] {\text d}\tau . \] Then h(t₀) = 0, and the condition h(t₁) = 0 is ensured by the choice of the constant K. Substituting this h(t) into Eq.(E.1), we get \[ \int_{t_0}^{t_1} \left[ -A(t) + b(t) \right]^2 {\text d} t = 0. \] Hence, A(t) = b(t), and differentiation gives b'(t) = 𝑎(t).

Changing the order of integration and differentiation with respect to parameter ε, we find

\begin{align*} \phi'(0) &= \int_{t_0}^{t_1} \left. \frac{\text d}{{\text d}\varepsilon} \,L\left( t, q_0^1 + \varepsilon\,\eta^1 , q_0^2 + \varepsilon\,\eta^2 , \ldots , \dot{q}_0^1 + \varepsilon\,\dot{\eta}^1 , \dot{q}_0^2 + \varepsilon\,\dot{\eta}^2 , \ldots \right) {\text d} t \right\vert_{\varepsilon = 0} \\ &= \int_{t_0}^{t_1} \left[ \frac{\partial L}{\partial q^1}\, \eta^1 + \frac{\partial L}{\partial q^2}\, \eta^2 + \cdots + \frac{\partial L}{\partial \dot{q}^1}\, \dot{\eta}^1 + \frac{\partial L}{\partial \dot{q}^2}\, \dot{\eta}^2 + \cdots \right] {\text d} t = 0 . \end{align*}
The latter can be written in succinct (variational) form:
\[ \left\langle L_v (t, \mathbf{x}, \dot{\mathbf{x}}) \mid \dot{\eta} \right\rangle + \left\langle L_x (t, \mathbf{x}, \dot{\mathbf{x}}) \mid \eta \right\rangle = 0, \qquad \forall \eta \in H_0^1 (t_0 , t_1 ) , \]
where H10([t₀, t₁], ℝn) is the closure of ℭ10([t₀, t₁], ℝn) in the norm of the Sobolev space H¹(t₀, t₁).

Since coordinates q¹, q², … , qn and their derivatives are independent, we apply the du Bois-Reymond lemma to each component to obtain a system of differential equations
\begin{equation} \label{EqRiccati.14} \frac{\text d}{{\text d}t}\,\frac{\partial}{\partial \dot{q}^i}\, L\left[ t, q^1 , q^2 , \ldots , q^n , \dot{q}^1 , \dot{q}^2 , \ldots , \dot{q}^n \right] = \frac{\partial}{\partial q^i}\, L\left[ t, q^1 , q^2 , \ldots , q^n , \dot{q}^1 , \dot{q}^2 , \ldots , \dot{q}^n \right] , \quad i = 1, 2, \ldots , n. \end{equation}
This system of equations is referred to as the Euler equations or Euler--Lagrange equations. Solutions of the system of n ordinary differential equations of the second order \eqref{EqRiccati.14} must belong to the set ℳ ⊂ ℭ2; therefore, solution curves should have fixed endpoints.
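To see the Euler--Lagrange equations emerge from an actual minimization, one can discretize the action and set its gradient to zero. The Python sketch below uses the simple Lagrangian L = ẏ²/2 − gy with fixed endpoints y(0) = y(1) = 0 (a hypothetical example chosen here, for which the Euler--Lagrange equation is ÿ = −g) and solves the resulting tridiagonal system with the Thomas algorithm:

```python
def minimize_action(N=100, g=2.0):
    # discretized action for L = ydot^2/2 - g*y with y(0) = y(1) = 0;
    # setting its gradient to zero gives the tridiagonal system
    #   (2*y_i - y_{i-1} - y_{i+1}) / h^2 = g,
    # the discrete form of the Euler-Lagrange equation y'' = -g
    h = 1.0 / N
    m = N - 1                     # number of interior nodes
    diag = [2.0] * m              # off-diagonal entries are all -1
    rhs = [g * h * h] * m
    for i in range(1, m):         # forward elimination (Thomas algorithm)
        factor = -1.0 / diag[i - 1]
        diag[i] -= factor * (-1.0)
        rhs[i] -= factor * rhs[i - 1]
    y = [0.0] * m
    y[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):  # back substitution
        y[i] = (rhs[i] + y[i + 1]) / diag[i]
    return [0.0] + y + [0.0]

y = minimize_action()
```

For g = 2 the exact extremal is y(t) = t − t²; since the second difference of a quadratic is exact, the discrete minimizer reproduces it at every grid node.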

When the Lagrangian is independent of t (the autonomous case), we have conservation of the Hamiltonian

\[ H \left( \mathbf{x} , \dot{\mathbf{x}} \right) = \sum_{i=1}^n L_{\dot{q}^i}\,\dot{q}^i - L \left( \mathbf{x} , \dot{\mathbf{x}} \right) , \]
which is an integral of the Euler equations \eqref{EqRiccati.14}.    
Example 12:

In Cartesian coordinates, we place the initial point at the origin and choose the vertical direction to be downward. Let y = y(x) be the equation of the brachistochrone curve. By the energy conservation law, the speed of the body at the point (x, y(x)) is the same as in free-fall motion from the height y(x), i.e., \( \displaystyle \quad v = \sqrt{2g\,y(x)} . \quad \) Therefore, integrating along the curve y = y(x), we obtain the time of motion \[ \int \frac{{\text d}s}{v} = \int_0^x \frac{\sqrt{1 + \left( \dot{y} (x) \right)^2}}{\sqrt{2g\,y(x)}} \,{\text d} x . \] Hence, the problem is to choose a function y = y(x) such that the conditions y(0) = 0 and y(x₀) = y₀ are satisfied and the functional above attains its minimum value.

The corresponding acting functional (dropping the immaterial constant factor \( 1/\sqrt{2g} \)) is \[ J\left( y(\cdot ) \right) = \int_{0}^{x_0} \frac{\sqrt{1 + \dot{y}^2}}{\sqrt{y}}\, {\text d} x , \tag{12.1} \] where \( \displaystyle \quad \dot{y} = {\text d}y/{\text d}x \quad \) is the derivative of the function y, and the integrand \( L(x, y, \dot{y}) = \sqrt{1 + \dot{y}^2}\big/\sqrt{y} \) is twice continuously differentiable for y > 0. Also, it is assumed that function y = y(x) in Eq.(12.1) has fixed endpoints: \[ y \left( 0 \right) = 0 , \qquad y \left( x_0 \right) = y_0 . \] Then \[ L_y = -\frac{\sqrt{1 + \dot{y}^2}}{2\sqrt{y^3}} , \qquad L_{\dot{y}} = \frac{\dot{y}}{\sqrt{1 + \dot{y}^2}\, \sqrt{y}} . \] With this at hand, we evaluate the derivative \[ \frac{\text d}{{\text d}x}\, L_{\dot{y}} = \frac{\ddot{y}}{\sqrt{\left( 1 + \dot{y}^2 \right)^3}\,\sqrt{y}} - \frac{\dot{y}^2}{2 \sqrt{1 + \dot{y}^2}\,\sqrt{y^3}} . \] Hence, the Euler equation \( L_y - \frac{\text d}{{\text d}x}\, L_{\dot{y}} = 0 \) becomes \[ - \frac{\ddot{y}}{\sqrt{\left( 1 + \dot{y}^2 \right)^3}\,\sqrt{y}} + \frac{\dot{y}^2}{2 \sqrt{1 + \dot{y}^2}\,\sqrt{y^3}} - \frac{\sqrt{1 + \dot{y}^2}}{2\sqrt{y^3}} = 0 \] or \[ 2y\,\ddot{y} + \dot{y}^2 + 1 = 0 . \] Since this equation does not contain the independent variable, we make the substitution \[ \dot{y} = p(y) , \qquad \ddot{y} = p\,\frac{{\text d}p}{{\text d}y} . \] This yields \[ 2yp\,\frac{{\text d}p}{{\text d}y} + p^2 + 1 = 0 \qquad \Longrightarrow \qquad \frac{2p}{p^2 + 1}\,{\text d}p = - \frac{{\text d}y}{y} . \] Integrating both sides, we obtain \[ \ln \left( 1 + p^2 \right) = \ln \frac{C}{y} \qquad \Longrightarrow \qquad p = \pm \sqrt{\frac{C}{y} -1} . \] Choosing the plus sign in the latter, we get \[ \frac{{\text d}y}{\sqrt{C/y -1}} = {\text d}x . \] Replacing y with C sin²t, we obtain \[ x = \int \frac{2C\,\sin t\,\cos t\,{\text d}t}{\sqrt{1/\sin^2 t -1}} = C\,\int 2\,\sin^2 t\,{\text d}t . 
\] Therefore, \[ \begin{cases} x &= D + C\,t - C\,\frac{\sin 2t}{2} = \frac{C \left( 2t - \sin 2t \right)}{2} +D , \\ y &= C\,\sin^2 t = \frac{C \left( 1 - \cos 2t \right)}{2} , \end{cases} \] which is a parametric equation of a cycloid.
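As a sanity check, one can verify numerically that the cycloid satisfies the Euler equation 2yÿ + ẏ² + 1 = 0. The plain-Python sketch below uses the parametric derivatives dy/dx = cot t and d²y/dx² = −1/(2C sin⁴t), which follow from the chain rule applied to the parametric form above.

```python
import math

# Sanity check that the cycloid
#   x = C (2t - sin 2t)/2 + D,   y = C (1 - cos 2t)/2 = C sin^2 t
# satisfies the Euler equation  2 y y'' + (y')^2 + 1 = 0.  Along the curve,
# dy/dx = cot t  and  d^2y/dx^2 = -1/(2 C sin^4 t).
C = 1.7                       # an arbitrary positive constant of integration
residuals = []
for k in range(1, 10):
    t = 0.3 * k               # parameter values away from the cusp t = 0
    y = C * math.sin(t) ** 2
    yp = math.cos(t) / math.sin(t)            # dy/dx
    ypp = -1.0 / (2 * C * math.sin(t) ** 4)   # d^2y/dx^2
    residuals.append(2 * y * ypp + yp**2 + 1)
assert max(abs(r) for r in residuals) < 1e-9
```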

Strictly speaking, our derivation of the extremal curve for the brachistochrone problem is not rigorous because of the singular point at y = 0. For a complete proof, see

Young, L.C., Lectures on the Calculus of Variations and Optimal Control Theory, Philadelphia, W.B. Saunders Co., 1969.    ■

End of Example 12
The sign of the second variation δ²J is related to the stability of solutions of the Euler--Lagrange equations \eqref{EqRiccati.14}. If δ²J > 0 for all deformations, then the second variation is called positive definite. In this case, we have a minimum; if δ²J < 0, we have a maximum; if δ²J = 0 for some deformations, we have a degenerate extremum. To ensure positivity of the second variation, we restrict ourselves to deformations that, together with their first derivatives, vanish at the endpoints. Then
\begin{equation} \label{EqRiccati.15} \delta^2 J = \int_{t_0}^{t_1} \sum_{i,j=1}^n \left[ \frac{\partial^2 L}{\partial q^i \,\partial q^j}\,\eta^i \eta^j + \frac{\partial^2 L}{\partial q^i \,\partial \dot{q}^j}\,\eta^i \dot{\eta}^j + \frac{\partial^2 L}{\partial \dot{q}^i \,\partial q^j}\,\dot{\eta}^i \,\eta^j + \frac{\partial^2 L}{\partial \dot{q}^i \,\partial \dot{q}^j}\,\dot{\eta}^i \dot{\eta}^j \right] {\text d}t . \end{equation}
This equation can be written in a concise form using inner products:
\[ \delta^2 J \left( \mathbf{x}^{\ast} \right) = \left\langle {\cal H}^{\ast} \left. \begin{pmatrix} \dot{\eta} \\ \eta \end{pmatrix} \right\vert \begin{pmatrix} \dot{\eta} \\ \eta \end{pmatrix} \right\rangle , \]
where ℋ∗ is the Hessian matrix evaluated at the minimizer
\[ {\cal H}^{\ast} = \begin{bmatrix} A^{\ast} & B^{\ast} \\ B^{\ast} & C^{\ast} \end{bmatrix} , \qquad A= \left[ \frac{\partial^2 L}{\partial \dot{q}^i \,\partial \dot{q}^j} \right] , \ B = \left[ \frac{\partial^2 L}{\partial q^i \,\partial \dot{q}^j} \right] , \ C = \left[ \frac{\partial^2 L}{\partial q^i \,\partial q^j} \right] . \]
Here ∗ is used to emphasize that it is evaluated at a particular function x solving the Euler--Lagrange equation. Integrating by parts, we get
\[ \int_{t_0}^{t_1} \frac{\partial^2 L}{\partial q^i \,\partial \dot{q}^j}\,\eta^i \dot{\eta}^j \, {\text d}t = \left. \frac{\partial^2 L}{\partial q^i \,\partial \dot{q}^j}\,\eta^i \,\eta^j \right\vert_{t=t_0}^{t_1} - \int_{t_0}^{t_1} \frac{\text d}{{\text d}t}\left( \frac{\partial^2 L}{\partial q^i \,\partial \dot{q}^j}\,\eta^i \right) \eta^j \,{\text d}t . \]
Since test functions vanish at the boundaries, the first term in the right-hand side is zero. Then
\[ \delta^2 J = \int_{t_0}^{t_1} \sum_{i,j=1}^n \left[ \frac{\partial^2 L}{\partial \dot{q}^i \,\partial \dot{q}^j}\,\dot{\eta}^i \,\dot{\eta}^j + \left( \frac{\partial^2 L}{\partial \dot{q}^i \,\partial q^j}\,\dot{\eta}^i - \frac{\partial^2 L}{\partial q^i \,\partial \dot{q}^j}\,\dot{\eta}^i + \frac{\partial^2 L}{\partial q^i \,\partial q^j}\,\eta^i - \frac{\text d}{{\text d}t} \left( \frac{\partial^2 L}{\partial q^i \,\partial \dot{q}^j} \right) \eta^i \right) \eta^j \right] {\text d}t . \]
Taking n = 1 substantially simplifies the exposition without losing any key idea; the matrix Riccati equation is a topic of the second course, APMA0340. In the case of a scalar-valued function, the second variation can be written as
\[ \delta^2 J = \left. \frac{{\text d}^2}{{\text d}\varepsilon^2} \right\vert_{\varepsilon = 0} \int_{t_0}^{t_1} L \left( t, x(t) + \varepsilon \eta(t), \dot{x} + \varepsilon \dot{\eta} \right) {\text d}t = \int_{t_0}^{t_1} \left[ p(t)\,\dot{\eta}^2 + q(t)\,\eta^2 \right] {\text d}t , \]
where
\[ p(t) = \frac{\partial^2 L(t, x, \dot{x})}{\partial \dot{x}^2} , \qquad q(t) = \frac{\partial^2 L(t, x, \dot{x})}{\partial x^2} - \frac{\text d}{{\text d}t}\,\frac{\partial^2 L(t, x, \dot{x})}{\partial \dot{x}\, \partial x} , \]
because, writing \( C(t) = \partial^2 L/\partial \dot{x}\,\partial x \) for the mixed derivative and integrating by parts,
\[ \int_{t_0}^{t_1} 2C\,\eta\,\dot{\eta}\,{\text d}t = - \int_{t_0}^{t_1} \dot{C}\,\eta^2 {\text d}t . \]
The German mathematician Carl Jacobi (1804--1851) made the brilliant observation that, for any smooth function w(t) and any test function η vanishing at the endpoints,
\[ \int_{t_0}^{t_1} \frac{\text d}{{\text d}t} \left( w\,\eta^2 \right) {\text d}t = \int_{t_0}^{t_1} \left[ \dot{w} \,\eta^2 + 2w\eta\,\dot{\eta} \right] {\text d}t = 0 , \]
which can be added to the second variation without changing its value. Then
\[ \delta^2 J = \int_{t_0}^{t_1} \left[ p\,\dot{\eta}^2 + \dot{w} \,\eta^2 + 2w\eta\,\dot{\eta} + q(t)\,\eta^2 \right] {\text d}t . \]
Completing the square, we can rewrite the integrand as
\begin{align*} p\,\dot{\eta}^2 + \dot{w} \,\eta^2 + 2w\eta\,\dot{\eta} + q(t)\,\eta^2 &= p \left( \dot{\eta}^2 + 2\,\frac{w}{p}\,\eta\,\dot{\eta} + \frac{w^2}{p^2} \,\eta^2 \right) + \left( \dot{w} + q - \frac{w^2}{p} \right) \eta^2 \\ &= p \left( \dot{\eta} + \frac{w}{p}\,\eta \right)^2 + \left( \dot{w} + q - \frac{w^2}{p} \right) \eta^2 . \end{align*}
If we can find function w = w(t) so that
\begin{equation} \label{EqRiccati.16} \dot{w} + q - \frac{w^2}{p} = 0 \qquad \mbox{for} \quad \forall t \in \left[ t_0 , t_1 \right] , \end{equation}
then
\[ \delta^2 J \left( x, \eta \right) = \int_{t_0}^{t_1} p(t) \left( \dot{\eta} + \frac{w}{p}\,\eta \right)^2 {\text d} t > 0 \qquad\mbox{for} \quad 0 \ne \eta \in ℭ_0^1 \]
because the strong Legendre condition p(t) > 0 is assumed to hold. The second variation δ²J is called positive definite if δ²J(x, η) > 0 for every η ≠ 0 with η ∈ H¹₀.
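The completing-the-square step can be checked numerically for a concrete (hypothetical) choice of coefficients. With p(t) = 1 and q(t) = 1, the Riccati equation ẇ + q − w²/p = 0 becomes ẇ = w² − 1, solved by w(t) = −tanh t, which happens to be continuous on the whole real line; the integrand then collapses to a perfect square, as the sketch below confirms pointwise.

```python
import math

# Illustrative check of the completing-the-square identity with the simple
# (hypothetical) choice p(t) = 1, q(t) = 1.  Then w' + q - w^2/p = 0 reads
# w' = w^2 - 1, solved by w(t) = -tanh t.  The second-variation integrand
#   p*eta'^2 + w'*eta^2 + 2*w*eta*eta' + q*eta^2
# must equal the perfect square (eta' + (w/p)*eta)^2.
gaps = []
for k in range(8):
    t = -2.0 + 0.6 * k
    w = -math.tanh(t)
    wdot = math.tanh(t) ** 2 - 1.0                 # w' = w^2 - 1
    eta, etadot = math.sin(t) + 0.4, math.cos(t)   # arbitrary sample values
    lhs = etadot**2 + wdot * eta**2 + 2 * w * eta * etadot + eta**2
    rhs = (etadot + w * eta) ** 2
    gaps.append(lhs - rhs)
assert max(abs(g) for g in gaps) < 1e-12
```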

Unfortunately, the Riccati equation \eqref{EqRiccati.16} may have no solution that is continuous on the whole interval [t₀, t₁].

Example 13: Let the acting functional be \[ J = \int_0^{\ell} \left[ \dot{y}^2 - y^2 (t) \right] {\text d} t , \qquad y(0) = y(\ell ) = 0 , \] for some fixed positive ℓ. Expanding the scalar function \[ \phi (\varepsilon ) = \int_0^{\ell} \left[ \left( \dot{x}^{\ast} + \varepsilon\,\dot{\eta} \right)^2 - \left( x^{\ast} (t) + \varepsilon\,\eta \right)^2 \right] {\text d} t , \] where x∗ is the minimizer (yet unknown) of the functional J, we obtain
Series[(v + eps*de)^2 - (x + eps*et)^2, {eps, 0, 2}]  (* here x stands for x*, v for dx*/dt, et for eta, de for deta/dt *)
SeriesData[eps, 0, {v^2 - x^2, 2 de v - 2 et x, de^2 - et^2}, 0, 3, 1]
\[ \phi (\varepsilon ) = \phi (0) + \varepsilon\,\phi' (0) + \frac{\varepsilon^2}{2}\, \phi'' (0) , \] where \[ \phi (0) = \int_0^{\ell} \left( \dot{x}^{\ast\, 2} - x^{\ast\, 2} \right) {\text d} t , \quad \phi' (0) = \delta\,J = 2\int_0^{\ell} \left( \dot{x}^{\ast}\,\dot{\eta} - x^{\ast} \eta \right) {\text d} t , \quad \phi'' (0) = \delta^2 J = 2\int_0^{\ell} \left( \dot{\eta}^2 - \eta^2 \right) {\text d} t . \]
phi[eps_] = (v+eps*de)^2 - (x+eps*et)^2 ; phi[0]
v^2 - x^2
D[phi[eps], eps]/.eps->0
2 de v - 2 et x
D[phi[eps], eps, eps]/.eps->0
2 de^2 - 2 et^2
It turns out that, with η(0) = η(ℓ) = 0, one always has the Poincaré inequality \[ \int_0^{\ell} \eta^2 {\text d}t \leqslant \frac{\ell^2}{\pi^2} \int_0^{\ell} \dot{\eta}^2 {\text d}t . \] Hence, the second variation can be estimated from below: \[ \delta^2 J = 2 \int_{0}^{\ell} \left( \dot{\eta}^2 - \eta^2 \right) {\text d} t \ge 2 \left( 1 - \frac{\ell^2}{\pi^2} \right) \int_0^{\ell} \dot{\eta}^2 {\text d} t . \] If ℓ ≤ π, the second variation δ²J ≥ 0 for any test function η. Therefore, the necessary condition for a minimum is met for any extremal x, and J cannot have a local maximum.
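The Poincaré inequality can be checked numerically for a concrete test function. The plain-Python sketch below uses the illustrative choice η(t) = t(ℓ − t), which vanishes at both endpoints; its exact integrals are ∫η² dt = ℓ⁵/30 and ∫η̇² dt = ℓ³/3, so the ratio equals ℓ²/10, just below the sharp constant ℓ²/π² (since π² ≈ 9.87 < 10).

```python
import math

# Numerical illustration of the Poincaré inequality with eta(t) = t (l - t).
# Exact values: ∫ eta^2 dt = l^5/30 and ∫ (eta')^2 dt = l^3/3, so the ratio
# is l^2/10, just below the sharp constant l^2/pi^2.
l = 2.0
n = 100000
h = l / n
int_eta2 = h * sum(((k * h) * (l - k * h)) ** 2 for k in range(n))  # ∫ eta^2
int_deta2 = h * sum((l - 2 * k * h) ** 2 for k in range(n))         # ∫ eta'^2
assert int_eta2 <= (l**2 / math.pi**2) * int_deta2
```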

If ℓ > π, we can choose a sequence of test functions \( \displaystyle \quad \eta_n (t) = \sin\frac{n\pi t}{\ell} , \quad n = 1, 2, \ldots . \quad \) Every member of this sequence satisfies the homogeneous boundary conditions. Using the double-angle formula, we get \[ \delta^2 J \left( \eta_n \right) / 2 = \int_0^{\ell} \left( \frac{n^2 \pi^2}{\ell^2}\,\cos^2 \frac{n\pi t}{\ell} - \sin^2 \frac{n\pi t}{\ell} \right) {\text d} t = \frac{n^2 \pi^2 - \ell^2}{2\,\ell} . \] Thus, for n = 1, the second variation is negative, while for any n with n²π² > ℓ², it is positive. Therefore, no extremal of J can be a local minimum or a local maximum.

For this functional, p = 2 and q = −2, so the corresponding Riccati equation \eqref{EqRiccati.16} reads \( \displaystyle \quad \dot{w} = 2 + \frac{w^2}{2} . \quad \) The substitution w = 2u reduces it to \( \dot{u} = 1 + u^2 , \) and separation of variables yields u = tan(t + C). If the length of the closed interval [t₀, t₁] exceeds π, then w(t) has a point of discontinuity on this interval.    ■
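A quick numerical sanity check (plain Python, illustrative) that u(t) = tan(t + C) indeed satisfies the normalized Riccati equation u̇ = 1 + u², and hence blows up whenever t + C reaches π/2, which is the obstruction discussed above:

```python
import math

# Verify pointwise that u(t) = tan(t + C) solves u' = 1 + u^2, using the
# exact derivative of tan, u' = 1/cos^2(t + C).  Because tan blows up when
# its argument reaches pi/2, no solution stays continuous on an interval of
# length greater than pi.
Cconst = 0.2
errors = []
for k in range(10):
    t = -1.5 + 0.27 * k                      # keep t + Cconst away from pi/2
    u = math.tan(t + Cconst)
    udot = 1.0 / math.cos(t + Cconst) ** 2   # exact derivative of tan
    errors.append(abs(udot - (1.0 + u * u)) / (1.0 + u * u))
assert max(errors) < 1e-12
```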

End of Example 13

Now we turn to another application of Riccati equations, in mathematical finance. In particular, we consider the famous Cox--Ingersoll--Ross (CIR) model, which provides one of the best known models of stochastic interest rates.    

Example 14: We first introduce a particular class of stochastic interest-rate models known as affine term structure models. Affine models have certain attractive properties in terms of their tractability. We need a couple of concepts from finance to proceed. The first concept is that of a zero-coupon bond. A zero-coupon bond is a contract which will pay one unit, with certainty, at a prespecified future date. If the current time is t and the future payment will occur at time T > t, then we denote the current price of the zero-coupon bond by P(t, T). The market prices for these bonds can be obtained from traded market instruments. The short rate of interest can be defined in terms of zero-coupon bond prices. Let us first define the function f(t, T) by \[ f(t, T) = - \frac{\partial \log\,P(t, T)}{\partial T} . \] This function f(t, T) is known as the forward rate. The short rate r(t) is defined as the limiting value of f(t, T) as T tends to t: \[ r(t) = \lim_{T\to t} f(t, T) . \] The short rate can be thought of as the limiting interest rate on a bond where we let the time to maturity tend to zero (for detail, see Duffie). The starting point for several popular stochastic interest-rate models is to assume that the short rate dynamics are governed by a stochastic differential equation involving one or more state variables and a number of structural parameters. However, we assume that there is only one such state variable. Such models are known as one factor models. Under a special class of such models, known as the one factor affine class, the resulting model prices for the zero-coupon bonds are exponential-linear functions of the short rate of interest.

Assume the short-term interest rate is described by the following stochastic differential equation \[ {\text d}r = \mu (t, r)\,{\text d}t + \sigma (t, r)\,{\text d}W_t , \] where Wt is a standard Brownian motion under the risk-neutral equivalent measure (for detail, see Duffie 1996). The definition of an affine term structure model is that P(t, T) has the exponential form \[ P(t, T) = \exp \left\{ A(t, T) - B(t, T)\,r(t) \right\} \] and thus the yield \( \displaystyle \quad y(t, T) = - \frac{\log P(t, T)}{T-t} \quad \) is an affine function of r(t). Duffie–Kan’s result is that P(t, T) is exponential-affine if and only if the drift µ and the volatility σ have the form \[ \mu (t, r) = \alpha (t)\,r + \beta (t) , \qquad \sigma (t, r) = +\sqrt{\gamma (t)\, r + \delta (t)} . \] Moreover, the coefficients A(t, T) and B(t, T) are determined by the following ordinary differential equations \[ \dot{B} (t, T) = \frac{\gamma (t)}{2}\, B(t, T)^2 - \alpha (t)\,B(t, T) - 1, \tag{14.1} \] and \[ \dot{A} (t, T) = \beta (t)\,B(t, T) - \frac{\delta (t)}{2}\, B(t, T)^2 . \tag{14.2} \] Equation (14.1) for B(t, T) is a Riccati equation; once it is solved, the second equation yields A(t, T) by direct integration.
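In practice the Riccati equation (14.1) is integrated backward from the terminal condition B(T, T) = 0. The sketch below does this with a classical fourth-order Runge–Kutta step; all parameter values are hypothetical, and the degenerate case γ = 0 (where the equation becomes linear and admits the closed form B(t, T) = (e^{α(T−t)} − 1)/α) is used to check the integrator.

```python
import math

# Sketch: integrate the Riccati equation (14.1),
#   dB/dt = (gamma/2) B^2 - alpha B - 1,   B(T, T) = 0,
# backward in time with a classical fourth-order Runge--Kutta step.
# Parameter values are hypothetical.  For gamma = 0 the closed form
#   B(t, T) = (exp(alpha (T - t)) - 1)/alpha
# is available and serves as a check.

def integrate_B(alpha, gamma, T, t, n=2000):
    """March B from the terminal condition B(T) = 0 down to time t."""
    f = lambda B: 0.5 * gamma * B * B - alpha * B - 1.0
    h = (t - T) / n            # negative step: backward from T to t
    B = 0.0
    for _ in range(n):
        k1 = f(B)
        k2 = f(B + 0.5 * h * k1)
        k3 = f(B + 0.5 * h * k2)
        k4 = f(B + h * k3)
        B += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return B

alpha, T, t = -0.8, 5.0, 1.0   # hypothetical drift coefficient and dates
B_num = integrate_B(alpha, gamma=0.0, T=T, t=t)
B_exact = (math.exp(alpha * (T - t)) - 1.0) / alpha
assert abs(B_num - B_exact) < 1e-8
```

The same `integrate_B` call with a nonzero `gamma` handles the genuinely nonlinear CIR-type case, for which no elementary closed form is needed.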

  1. Boyle, P.P., Tian, W., Guan, F., The Riccati Equation in Mathematical Finance, Journal of Symbolic Computation, 2002, volume 33, 343–355; doi:10.1006/jsco.2001.0508
  2. Cox, J.C., Ingersoll, J.E., Ross, S.A. (1985). A theory of the term structure of interest rates. Econometrica, 53, 385–408.
  3. Duffie, D., Pan, J., and Singleton, K., Transform Analysis and Asset Pricing for Affine Jump-Diffusions, Econometrica, Vol. 68, No. 6 (Nov., 2000), pp. 1343-1376
  4. Heston, S. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options, The Review of Financial Studies, Vol. 6, 327–343.
  5. Kovacic, J. (1986). An algorithm for finding second order linear homogeneous differential equations, Journal of Symbolic Computation, Volume 2, 3–43.
  6. Rosenlicht, M. (1973). An analogue of l’Hospitals rule. Proc. AMS, 37, 369–373.
   ■
End of Example 14

 

  1. Show that if two solutions y₁(x) and y₂(x) of the Riccati equation (1) are known, the general solution is \[ \frac{y - y_1}{y - y_2} = c\,\exp \left\{ \int g(x) \left( y_1 - y_2 \right) {\text d} x \right\} . \]
  2. Show that if three solutions of the Riccati equation are known, say y₁(x), y₂(x), and y₃(x), then the general solution is \[ \frac{\left( y - y_1 \right) \left( y_2 - y_3 \right)}{\left( y - y_2 \right) \left( y_1 - y_3 \right)} = c . \]
  3. Show that if y₁(x), y₂(x), y₃(x), and y₄(x) are any four different solutions of the Riccati equation (1), then \[ \frac{y_1 - y_3}{y_1 - y_2} \cdot \frac{y_4 - y_2}{y_4 - y_3} = \mbox{constant} . \] Hint: First show that, for any solution y, the function u = (y − y₁)−1 satisfies the linear differential equation u′ - (p − 2gy₁)u + g = 0. In particular, u₂ = (y₂ − y₁)−1, u₃ = (y₃ − y₁)−1, and u₄ = (y₄ − y₁)−1 are solutions of this differential equation.
  4. Solve the Riccati equation y′ + y² = 1 + x².
  5. Find all solutions of the Riccati equation y′ − y = 2 − y².
  6. Solve the Riccati equation y′ + 2y = xy² + 4(1 − x). Hint: Function y = 2 is a particular solution.
  7. Find all solutions of the Riccati equation y′ − 2y + y² + 2 = 0.
  8. Solve the Riccati equation \( \displaystyle \quad \frac{{\text d}y}{{\text d}x} = \frac{y^2}{x-1} - \frac{x\,y}{x-1} +1 \quad \) by noting that y = 1 and y = x are its solutions.
  9. Solve the Riccati equation x²y′ − 2x y + x²y² + 2 = 0.
  10. Solve the Riccati equation x²y′ − 3x y + y² + 2x² = 0.
  11. Solve the Riccati equation y′ + y² + b/x² = 0 with a constant b.
  12. Solve the Riccati equation xy′ − y + xy² + x³ = 0.
  13. Solve the Riccati equation \( \displaystyle \quad \frac{{\text d}y}{{\text d}x} = \frac{1 + x\,y}{x^3} + y^2 . \)
  14. Solve the Riccati equation y′ = sin(x) + y cos(x) + y². Hint: function y₁(x) = −cos(x) is a particular solution.
  15. Solve the Riccati equation \( \displaystyle \quad \frac{{\text d}y}{{\text d}x} + \frac{2}{x^2} = \frac{2y}{x} + y^2 . \quad \) Hint: function y₁(x) = 2/x is a particular solution.
  16. Solve the Riccati equation \( \displaystyle \quad \frac{{\text d}y}{{\text d}x} = x^2 + \frac{y}{x} - y^2 . \quad \) Hint: function y₁(x) = x is a particular solution.
  17. Solve the Riccati equation \( \displaystyle \quad \frac{{\text d}y}{{\text d}x} + \frac{3}{2 x^3} = \frac{1}{2} - x^2 y + \frac{x}{2} \left( x^3 -1 \right) y^2 . \quad \) Hint: function y₁(x) = 1/x² is a particular solution.
  18. Solve the Riccati equation \( \displaystyle \quad \frac{{\text d}y}{{\text d}x} = \frac{1}{x} - 2 y + \left( x -1 \right) y^2 . \quad \) Hint: function y₁(x) = 1/x is a particular solution.
  19. Solve the Riccati equation \( \displaystyle \quad \frac{{\text d}y}{{\text d}x} = x^2 + \frac{y}{x} - y^2 . \quad \) Hint: function y₁(x) = x² is a particular solution.

 

  1. Abbasi, N.M., Differential Equations Algorithms, 2024.
  2. Akinyemi, L., Erebholo, F., Palamara, V., Oluwasegun, K., A Study of Nonlinear Riccati Equation and Its Applications to Multi-dimensional Nonlinear Evolution Equations, Qualitative Theory of Dynamical Systems, 2024, № S1; https://doi.org/10.1007/s12346-024-01137-2
  3. Bolza, O., Vorlesungen über Variationsrechnung, Teubner, Leipzig and Berlin, 1909.
  4. Bougoffa, L., New Conditions for Obtaining the Exact Solutions of the General Riccati Equation, The Scientific World Journal, 2014, Article ID 401741; doi: 10.1155/2014/401741
  5. Haaheim, D.R. and Stein, F.M., Methods of Solution of the Riccati Differential Equation, Mathematics Magazine, 1969, Vol. 42, No. 5, pp. 233--240; https://doi.org/10.1080/0025570X.1969.11975969
  6. Kamke, E., Differentialgleichungen: Lösungsmethoden und Lösungen, I, Gewöhnliche Differentialgleichungen, B. G. Teubner, Leipzig, 1977.
  7. Liouville, J., Remarques nouvelles sur l'équation de Riccati, Journal de Mathématiques Pures et Appliquées, 1841, vol. 6, pp. 1--13.
  8. Murphy, G.M., Ordinary Differential Equations and their Solutions, Van Nostrand, Princeton, 1960.
  9. Polyanin, A. and Zaitsev, V.F., Handbook of exact solutions for ordinary differential equations, Second edition, Chapman and Hall/CRC, 2003.
  10. Ndiaye, M., The Riccati Equation, Differential Transform, Rational Solutions and Applications, Applied Mathematics, Vol. 13, No.9, September 2022; doi: 10.4236/am.2022.139049
  11. Reid, W.T., Riccati Differential Equations, ‎ Academic Press, 2012.
  12. Riccati, Count J. F., Animadversationes in aequationes differentiales secundi gradus, Actorum Eruditorum quae Lipsiue publicantur. Supplementa, 8 (1724), 66-73. Translation of the original Latin into English by Ian Bruce.
  13. Robin, W., A new Riccati equation arising in the theory of classical orthogonal polynomials, International Journal of Mathematical Education in Science and Technology, 2003, Volume 34, 2003 - Issue 1, pp. 31--42. https://doi.org/10.1080/0020739021000018746
  14. Saad, N., Hall, R.L., Ciftci, H., Solutions for certain classes of the Riccati differential equation, Journal of Physics A: Mathematical and Theoretical, Volume 40, Issue 35, pp. 10903-10914 (2007). doi: 10.1088/1751-8113/40/35/012 arXiv: arXiv:0707.2837
  15. Sarafian, H., A Closed Form Solution to a Special Normal Form of Riccati Equation, Advances in Pure Mathematics, 2011, Vol.1 No.5, doi: 10.4236/apm.2011.15053
  16. Sugai, I., Riccati’s nonlinear differential equation, The American Mathematical Monthly, 67 (1960), 134--139.
  17. Watson, G.N., A Treatise on the Theory of Bessel Functions, ‎ Cambridge University Press; 2nd edition, 1995.
  18. Zelikin, M.I., Control Theory and Optimization I: Homogeneous Spaces and the Riccati Equation in the Calculus of Variations (Encyclopaedia of Mathematical Sciences, volume 86), Springer, 2010.
  19. YouTube

 
