This section is devoted to an important class of nonlinear systems of equations---gradient systems. The reason why gradient systems are grouped together is that for such systems there is a natural candidate for a Lyapunov function.
Today, more than three centuries after Isaac Newton wrote down his
treatise on the theory of fluxions, and after the rise of the calculus of variations, it has turned out that a large number of evolution models may be expressed in the mathematical
language of ordinary and partial differential equations.
A system of the form
\begin{equation} \label{EqGrad.1}
\dot{\bf x} = - k\,\nabla G({\bf x}) ,
\end{equation}
where
\( {\bf x} = \left( x_1 (t), x_2 (t) , \ldots , x_n (t) \right)^{\text T} \in U \subset \mathbb{R}^n , \quad G\,:\, \mathbb{R}^n \to \mathbb{R} \) is a continuously differentiable function defined on an open subset U, and k is a constant, is called a gradient system. (The negative sign in this system is traditional; it does not affect the critical points, because \( \nabla G({\bf x}) = 0 \) exactly when \( - \nabla G({\bf x}) = 0 . \) ) Such a system is the prototypical example of a dissipative evolution equation. Recall that an equilibrium point is called stable if nearby solutions stay nearby for all future time. If, in addition, all nearby solutions approach the equilibrium for large time, then the critical point is said to be asymptotically stable.
Suppose that the function G(x) has an isolated local minimum or maximum at the point x*. Then this point is a critical point of the given system of differential equations. The orbits of the system follow the paths of steepest descent or steepest ascent of G, depending on the sign of k.
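For instance, in the planar case n = 2 with k = 1, the gradient system \eqref{EqGrad.1} written out in components reads
\[
\dot{x} = - \frac{\partial G}{\partial x} (x,y) , \qquad \dot{y} = - \frac{\partial G}{\partial y} (x,y) .
\]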
Let \( U \subset \mathbb{R}^n \) be an open set and let \( G\,:\,U \to \mathbb{R}^n \) be a continuous vector field. A function \( E\,: U \to \mathbb{R} \) is called an energy function for the first-order vector differential equation
\[
\dot{\bf x} + G({\bf x}) =0,
\]
if for every solution x(t) of this system of equations the composition \( E \circ {\bf x} \,:\, t \mapsto E({\bf x}(t)) \) is a decreasing function of t.
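As a minimal illustration (this particular example is ours, not from the text above), consider the scalar equation \( \dot{x} + x = 0 , \) that is, \( G(x) = x . \) The function \( E(x) = x^2 \) is an energy function for it, because along every solution
\[
\frac{d}{dt}\, E\left( x(t) \right) = 2\, x(t)\, \dot{x}(t) = - 2\, x(t)^2 \le 0 ,
\]
so \( E(x(t)) \) is decreasing (strictly, unless the solution is the equilibrium x = 0).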
Very often, an energy function is also called a Lyapunov function. In some parts of the literature, an energy function is also called a
cost function, mainly because of applications to optimization problems.
Differential equations admitting an energy function may be called
dissipative systems.
The gradient (denoted by nabla, ∇) is an operator that associates a vector field to a scalar field. Both scalar and vector fields may be naturally represented in Mathematica as pure functions. Mathematica provides a special character for nabla (entered as \[Del]), and the built-in command Grad gives the gradient of the input function.
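For instance, here is a minimal sketch of computing a gradient vector field with Grad; the potential G below is just an assumed sample, not one fixed by the text.
G[x_, y_] := x^2 + y^2;                (* assumed sample scalar potential *)
gradG = Grad[G[x, y], {x, y}]          (* ==> {2 x, 2 y} *)
field = -gradG                         (* right-hand side of the gradient system with k = 1 *)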
R requires a package, such as pracma, to plot a direction field.
library(pracma)   # provides vectorfield() and rk4()
## symbolic partial derivative of x^2 + y^2*x - 3*y with respect to y
f1 <- expression(x^2 + y^2 * x - 3 * y)
z <- D(f1, 'y')
z
## direction field of dy/dx = 2*x + y^2 together with several solution curves
f <- function(x, y) 2 * x + y^2
xx <- c(-2, 2); yy <- c(-2, 2)
vectorfield(f, xx, yy, scale = 0.1)
for (xs in seq(-2, 2, by = 0.25)) {
  sol <- rk4(f, -2, 2, xs, 100)   # Runge-Kutta solution with initial value y(-2) = xs
  lines(sol$x, sol$y, col = "darkgreen")
}
grid()
Then we apply VectorPlot to the potential function defined above.
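A minimal sketch of this step, re-using the assumed sample potential from above (any scalar potential would do):
G[x_, y_] := x^2 + y^2;                (* assumed sample potential *)
VectorPlot[Evaluate[-Grad[G[x, y], {x, y}]], {x, -2, 2}, {y, -2, 2}]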
Since not every vector field is the gradient field of some function, we need a procedure that checks this property and, when possible, recovers the potential; one possible sketch follows.
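In two dimensions, a field {P, Q} is a gradient field on a simply connected domain exactly when \( \partial P/\partial y = \partial Q/\partial x , \) and a potential can then be recovered by integration. The helper names gradientQ and findPotential below are hypothetical, not built-in commands.
(* test whether a planar field {p, q} is a gradient field: cross partials must agree *)
gradientQ[{p_, q_}, {x_, y_}] := Simplify[D[p, y] - D[q, x]] === 0
(* recover a potential g with Grad[g, {x, y}] == {p, q} on a simply connected domain *)
findPotential[{p_, q_}, {x_, y_}] :=
  Module[{gx = Integrate[p, x]}, Simplify[gx + Integrate[q - D[gx, y], y]]]
(* usage with the sample field {2 x, 2 y}, which comes from the potential x^2 + y^2 *)
gradientQ[{2 x, 2 y}, {x, y}]        (* ==> True *)
findPotential[{2 x, 2 y}, {x, y}]    (* ==> x^2 + y^2 *)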
Finally, here is a subroutine to plot a gradient field based on a (scalar) potential function.
Here is an example of how to use the function. The contour lines of the potential are shown in white, and the streamlines of the gradient field are orange.
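The original subroutine is not reproduced here; the following is only a sketch of one way such a function could look (the name gradientFieldPlot is hypothetical), matching the description above: white contour lines of the potential overlaid with orange streamlines of the gradient field.
gradientFieldPlot[g_, {x_, xmin_, xmax_}, {y_, ymin_, ymax_}] :=
  Show[
   ContourPlot[g, {x, xmin, xmax}, {y, ymin, ymax}, ContourStyle -> White],
   StreamPlot[Evaluate[-Grad[g, {x, y}]], {x, xmin, xmax}, {y, ymin, ymax},
    StreamStyle -> Orange]
  ]
(* usage with the assumed sample potential *)
gradientFieldPlot[x^2 + y^2, {x, -2, 2}, {y, -2, 2}]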
Let V be a C¹ function defined on an open neighborhood U₁ ⊂ U ⊂ ℝⁿ of the equilibrium point x* ∈ U₁. Let us define the associated function, the derivative of V along trajectories of the differential equation:
\[
\dot{V}({\bf x}) = \nabla V({\bf x}) \cdot \dot{\bf x} .
\]
Theorem 1:
The function V is a Lyapunov function for the system
\( \dot{\bf x} = - \nabla V({\bf x}) . \) Moreover, \( \dot{V}({\bf x}) = 0 \) if and only if x is an equilibrium point.
The study of gradient systems \eqref{EqGrad.1} is particularly simple due to the formula
\begin{equation} \label{EqGrad.4}
\frac{d}{dt}\, G\left( {\bf x}(t) \right) = \nabla G\left( {\bf x}(t) \right) \cdot \dot{\bf x}(t) = - k \left\| \nabla G\left( {\bf x}(t) \right) \right\|^2 ,
\end{equation}
valid along every solution x(t) of \eqref{EqGrad.1}; in particular, for k > 0 the function G is nonincreasing along trajectories.
Theorem 2:
An equilibrium point x* ∈ U of the system \eqref{EqGrad.1} is asymptotically stable if and only if x* is an isolated critical point of the function G at which G attains a local minimum.
To understand a gradient flow geometrically we look at the level surfaces of the function G : ℝⁿ → ℝ. These are the subsets \( G^{-1} (c) \) with c ∈ ℝ.
Solutions of the gradient vector field cross the level sets of the function G orthogonally, except at critical points. Another consequence of Eq.\eqref{EqGrad.4} is that a gradient system cannot have any periodic solutions: G is strictly monotone along every nonconstant solution, so a trajectory can never return to its starting point.
Theorem 3:
Let \( \displaystyle H({\bf x}) = \left[ \frac{\partial^2 G({\bf x})}{\partial x_i \partial x_j} \right] \) be the Hessian matrix for the gradient system \( \displaystyle \dot{\bf x} = - \nabla G({\bf x}) . \)
If the eigenvalues of the Hessian at the critical point are all strictly positive, then the critical point is asymptotically stable.
If the Hessian at the critical point has a negative eigenvalue, then the equilibrium is unstable.
Example:
Let
\[
G(x,y) = y^2 (y-1)^2 + 3 x^2
\]
be the function G : ℝ² → ℝ. Then the corresponding gradient system (taking k = 1) is given by
\[
\dot{x} = - \frac{\partial G}{\partial x} = - 6\, x , \qquad \dot{y} = - \frac{\partial G}{\partial y} = - 2\, y \left( y - 1 \right) \left( 2 y - 1 \right) .
\]
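A short Mathematica sketch (not the author's original code) that verifies this computation and classifies the equilibria via the Hessian, in line with Theorem 3:
G[x_, y_] := y^2 (y - 1)^2 + 3 x^2;
grad = Grad[G[x, y], {x, y}]                 (* ==> {6 x, 2 (y - 1)^2 y + 2 (y - 1) y^2} *)
crit = Solve[Thread[grad == 0], {x, y}]      (* ==> critical points (0,0), (0,1/2), (0,1) *)
hess = D[G[x, y], {{x, y}, 2}];              (* Hessian matrix of G *)
Map[Eigenvalues[hess /. #] &, crit]
(* at (0,0) and (0,1) both eigenvalues are positive (local minima, asymptotically stable);
   at (0,1/2) one eigenvalue is negative, so that equilibrium is unstable *)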
Calle Ysern, B., Asymptotically stable equilibria of gradient systems, American Mathematical Monthly, 2019, Vol. 126, No. 10, pp. 936--939.
Hirsch, M.W., Smale, S., and Devaney, R.L., Differential Equations, Dynamical Systems, and an Introduction to Chaos, 2003, Second edition, Academic Press.