In what follows, we use Newton's notation for the derivative with respect to the time variable: \( \dot{y} = {\text d}y/{\text d}t . \)
Consider a homogeneous system of linear differential equations with constant coefficients:
\begin{equation} \label{EqConstant.1}
\dot{\bf y} = {\bf A}\,{\bf y} ,
\end{equation}
where A is an n×n matrix with constant (real or complex) entries and y(t) is an n-dimensional column vector of unknown functions.
A solution y(t) to the system of ODEs \eqref{EqConstant.1} is a point in the n-dimensional space ℝn for every t; it is called the state of the system at time t. As t varies, the point y(t) moves along a curve in the space ℝn. This solution curve is called a trajectory, or streamline, or orbit, or phase curve, for the system \eqref{EqConstant.1}.
Since a constant matrix A is obviously continuous on any interval, all solutions to Eq.\eqref{EqConstant.1} are defined on (−∞, ∞). Therefore, when we speak of solutions to the vector differential equation with constant coefficients \eqref{EqConstant.1}, we consider solutions on the whole real axis. Correspondingly, when we discuss the behavior of solutions as t → ∞, we mean either of the limits t → −∞ or t → +∞. Usually, we will specify which limit we consider because the variable t is associated with time.
We refer to a constant solution \( {\bf y}(t) = {\bf y}^{\ast} \) of a linear system of equations as an equilibrium because \( {\text d}{\bf y}/{\text d}t = 0. \) Such a constant solution is also called a critical point or stationary point of the system. An equilibrium solution is isolated if there is a neighborhood of the critical point that does not contain any other critical point.
Recall that a neighborhood is any set of points containing the point or subset of interest inside some open set.
For example, a neighborhood containing the origin in one dimension could be [-0.1,1], as it contains the point 0 inside the open symmetric interval (-0.1, 0.1). But [0, 1] is not a neighborhood of the origin as it does not contain any open interval centered at zero.
In a two-dimensional space, a neighborhood of the origin could be any set containing an open disk of radius ε (x² + y² < ε²) centered at the origin.
If matrix A is invertible (det A ≠ 0), then 0 is the only stationary point of the system \eqref{EqConstant.1}. Otherwise, the system has a whole subspace of equilibrium solutions: the null space of matrix A.
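For instance, the set of equilibria can be computed with the standard Mathematica command NullSpace; the singular matrix below (which reappears in Example 7A) serves as a quick illustration:
A = {{1, -3}, {2, -6}}; (* a singular matrix: Det[A] == 0 *)
NullSpace[A] (* {{3, 1}}: every multiple of {3, 1} is an equilibrium *)
A . {3, 1} (* {0, 0} *)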
Along with the initial value problem, it is convenient to consider the matrix differential equation for the n×n matrix Φ(t):
\[
\dot{\bf \Phi}(t) = {\bf A}\,{\bf \Phi}(t) ,
\]
because each of its columns is a solution of the vector equation \eqref{EqConstant.1}.
Any nonsingular solution of the matrix differential equation \( \dot{\bf \Phi} = {\bf A}\,{\bf \Phi}, \quad \det{\bf \Phi}(t) \ne 0, \) is called a fundamental matrix for the vector differential equation \( \dot{\bf y} = {\bf A}\,{\bf y} . \)
If an n×n matrix A is diagonalizable, then it has n linearly independent eigenvectors x1, x2, …, xn, corresponding to eigenvalues λ1, λ2, …, λn, which are not necessarily distinct. In this case a fundamental matrix can be written explicitly:
\[
{\bf \Phi}(t) = \left[ {\bf x}_1 e^{\lambda_1 t} , \ {\bf x}_2 e^{\lambda_2 t} , \ \ldots , \ {\bf x}_n e^{\lambda_n t} \right] ,
\]
placing side by side every eigenvector multiplied by the corresponding exponential. Of course, a fundamental matrix is not unique, and its determinant, called the Wronskian, is never zero. A distinguished fundamental matrix is singled out by the matrix initial value problem
\begin{equation} \label{EqConstant.3}
\dot{\bf \Phi}(t) = {\bf A}\,{\bf \Phi}(t) , \qquad {\bf \Phi}(0) = {\bf I} ,
\end{equation}
where I is the identity n-by-n matrix.
We showed previously using Picard's iteration that the matrix initial value problem \eqref{EqConstant.3} has a unique solution that is represented by a convergent power series for any square matrix A:
\begin{equation} \label{EqConstant.4}
e^{{\bf A}\,t} = \sum_{k\ge 0} \frac{t^k}{k!}\,{\bf A}^k = {\bf I} + {\bf A}\,t + \frac{t^2}{2!}\,{\bf A}^2 + \frac{t^3}{3!}\,{\bf A}^3 + \cdots .
\end{equation}
Theorem 1:
Let A be an n×n matrix with constant entries (real or complex). Then the exponential matrix \eqref{EqConstant.4} is the unique solution of the matrix initial value problem \eqref{EqConstant.3}.
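As a sanity check of the series representation \eqref{EqConstant.4}, we can compare MatrixExp with a truncated partial sum of the series; the sample matrix here is an arbitrary choice:
A = {{0, 1}, {-2, -3}}; (* arbitrary sample matrix *)
truncated = Sum[MatrixPower[A, k] t^k/k!, {k, 0, 3}];
Simplify[Normal[Series[MatrixExp[A t], {t, 0, 3}]] == truncated]
True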
Corollary 1:
For a constant square matrix A, the column vectors of the exponential matrix \( \displaystyle e^{{\bf A}\,t} \) are linearly independent solutions of the vector differential equation
\( \displaystyle \dot{\bf y} = {\bf A}\, {\bf y}(t) . \)
The main interest in this specific fundamental matrix is that it gives the solution of the initial value problem
\begin{equation} \label{EqConstant.6}
\dot{\bf y}(t) = {\bf A}\,{\bf y}(t) , \qquad {\bf y}(0) = {\bf y}_0 ,
\end{equation}
as \( {\bf y}(t) = e^{{\bf A}\,t}\,{\bf y}_0 , \) while the general solution of Eq.\eqref{EqConstant.1} is \( {\bf y}(t) = e^{{\bf A}\,t}\,{\bf c} , \) where c is an n-column vector of arbitrary constants.
Theorem 2:
Let A be an n×n matrix with constant real entries. The propagator \( {\bf \Phi}(t) = e^{{\bf A}t} \) is a fundamental matrix for the system of linear differential equations \eqref{EqConstant.1}. In other words, the column vectors of the exponential matrix \( e^{{\bf A}t} \) are linearly independent solutions of the vector equation \( \dot{\bf y}(t) = {\bf A}\,{\bf y}(t) . \)
Theorem 3:
Let Y(t) = [y1(t), y2(t), … , yn(t)] be a fundamental matrix for the vector differential equation \( \dot{\bf y} = {\bf A}\,{\bf y}(t) \) with constant square matrix A. Then
\[
{\bf y}(t) = {\bf Y}(t)\,{\bf c} = c_1 {\bf y}_1 (t) + c_2 {\bf y}_2 (t) + \cdots + c_n {\bf y}_n (t)
\]
is the general solution (depending on n arbitrary constants c = [c1, … , cn]) of the linear constant coefficient vector differential equation \( \dot{\bf y} = {\bf A}\,{\bf y}(t) . \) Here yk(t), k = 1, 2, … , n, is the k-th column vector of the fundamental matrix (in particular, of the exponential matrix \( e^{{\bf A}t} \)). Moreover, the column vector
\[
{\bf y}(t) = {\bf Y}(t)\,{\bf Y}^{-1}(0)\,{\bf y}_0
\]
is a solution of the linear vector equation \( \dot{\bf y} = {\bf A}\,{\bf y}(t) \) satisfying the initial condition y(0) = y0. Any solution of this equation is a linear combination of the column vectors of the fundamental matrix.
If y(t) is a solution of the constant coefficient system \( \dot{\bf y} = {\bf A}\,{\bf y}(t) \) and t0 is a fixed value of t, then y(t ± t0) is also a solution of Eq.\eqref{EqConstant.1}. However, these solutions determine the same trajectory because the corresponding initial value problem \eqref{EqConstant.6} has a unique solution expressed explicitly through the propagator matrix: \( {\bf y}(t) = {\bf \Phi}(t)\,{\bf y}_0 = e^{{\bf A}t}\,{\bf y}_0 . \) So an integral curve of the linear differential equation \( \dot{\bf y} = {\bf A}\,{\bf y}(t) \) is a trajectory of infinitely many solutions. Therefore, distinct integral curves of Eq.\eqref{EqConstant.1} do not touch each other, which means that the corresponding IVP \eqref{EqConstant.6} has no singular solution.
If we seek a solution of Eq.\eqref{EqConstant.1} in the form \( {\bf y} (t) = {\bf v}\,e^{\lambda t} , \) then upon its substitution into the homogeneous linear differential equation we obtain
\[
\lambda\,{\bf v}\,e^{\lambda t} = {\bf A}\,{\bf v}\,e^{\lambda t} \qquad \Longleftrightarrow \qquad {\bf A}\,{\bf v} = \lambda\,{\bf v} .
\]
Therefore, λ is an eigenvalue of A and v must be a corresponding eigenvector. So if an n×n matrix A has m (m ≤ n) distinct eigenvalues λk, k = 1, 2, …, m, then the vector differential equation \eqref{EqConstant.1} has at least m linearly independent exponential solutions \( {\bf y}_k (t) = {\bf v}_k e^{\lambda_k t} \) because eigenvectors corresponding to distinct eigenvalues are linearly independent.
If matrix A is diagonalizable, then we have exactly n linearly independent solutions of this exponential form, \( {\bf y}_k (t) = {\bf v}_k e^{\lambda_k t} , \quad k = 1, 2, \ldots , n . \)
Theorem 4:
Suppose that an n×n matrix A has m (m ≤ n) distinct eigenvalues λ1, λ2, … , λm, with corresponding m eigenvectors v1, v2, … , vm. Then the column functions
\[
{\bf y}_k (t) = {\bf v}_k e^{\lambda_k t} , \qquad k = 1, 2, \ldots , m ,
\]
are linearly independent solutions of the homogeneous vector equation \( \dot{\bf y} (t) = {\bf A}\,{\bf y} (t) . \)
Theorem 5:
Suppose that an n×n matrix A is diagonalizable, so it has n linearly independent eigenvectors v1, v2, … , vn that correspond to n eigenvalues (not necessarily distinct) λ1, λ2, … , λn. Then the general solution of the homogeneous vector equation \( \dot{\bf y} (t) = {\bf A}\,{\bf y} (t) \) is a linear combination of exponential functions:
\begin{equation} \label{EqConstant.7}
{\bf y}(t) = c_1 {\bf v}_1 e^{\lambda_1 t} + c_2 {\bf v}_2 e^{\lambda_2 t} + \cdots + c_n {\bf v}_n e^{\lambda_n t} ,
\end{equation}
with arbitrary constants c1, c2, … , cn. In order to make the solution from Eq.\eqref{EqConstant.7} a real-valued function, the constants corresponding to complex conjugate eigenvalues \( \overline{\lambda}_k \) must also be complex conjugates \( \overline{c}_k . \) This keeps the number of real arbitrary constants equal to n.
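Theorem 5 translates directly into Mathematica; a minimal sketch using the diagonalizable matrix that reappears in Example 4:
A = {{1, -1}, {3, 5}};
{vals, vecs} = Eigensystem[A];
gen = Sum[c[k] Exp[vals[[k]] t] vecs[[k]], {k, 2}]; (* c[1], c[2] are arbitrary constants *)
Simplify[D[gen, t] == A . gen]
True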
Example 2:
Suppose we need to solve an initial value problem for a constant coefficient system.
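Such a problem is solved immediately with MatrixExp; in the sketch below the matrix and the initial vector are assumed purely for illustration:
A = {{1, 2}, {4, 3}}; (* assumed sample matrix *)
y0 = {1, 0}; (* assumed initial condition y(0) *)
Simplify[MatrixExp[A t] . y0] (* the solution of y' = A y, y(0) = y0 *)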
So far we considered multidimensional systems of ordinary differential equations whose solutions are hard or almost impossible to visualize. Therefore, we restrict ourselves to examples of planar
(two-dimensional) systems of equations in hopes that they are easier for
us to understand. Additional material about the planar case
can be found in the special section "Planar Phase Portrait."
Consider a system of linear differential equations \( \dot{\bf y} = {\bf A}\,{\bf y}. \) Its phase portrait is a geometric representation of the trajectories of the dynamical system in the phase plane. A sketch of a particular solution in the phase plane is called the trajectory (or orbit, or streamline) of the solution. Solutions are plotted as parametric curves (with t as the parameter) on the Cartesian plane, each tracing the path of a particular solution \( {\bf y} = ( y_1 (t) , y_2 (t) ), \ -\infty < t < \infty . \)
Similar to a direction field for a single differential equation, a phase portrait is a graphical tool for visualizing how the solutions of a given system of differential equations behave in the long run. Each set of initial conditions is represented by a different curve or point, and the portrait consists of a plot of typical trajectories in the state space. It reveals, for instance, whether an attractor, a repeller, a saddle point, or a limit cycle is present for the chosen parameter values.
Example 3:
Consider the 3 × 3 matrix \( {\bf A} = \begin{bmatrix} 3&2&4 \\ 2&0&2 \\ 4&2&3 \end{bmatrix} \) that has real eigenvalues.
The Wronskian is a solution of the first order linear differential equation \( {\text d}W/{\text d}t = \mbox{tr}({\bf A})\,W = 6\,W \) because the trace of matrix A is 6.
Since the Wronskian of the exponential matrix function is \( e^{6t} \ne 0 , \) this exponential matrix function is never singular:
A = {{3, 2, 4}, {2, 0, 2}, {4, 2, 3}};
Simplify[Det[MatrixExp[A t]]]
E^(6 t)
We check that the exponential matrix function \( e^{{\bf A}\,t} \) is a solution of the matrix
differential equation: \( \frac{\text d}{{\text
d}t} \,{\bf \Phi} (t) = {\bf A}\, {\bf \Phi} (t) . \)
Indeed,
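with A the matrix defined above, a direct symbolic check:
Simplify[D[MatrixExp[A t], t] == A . MatrixExp[A t]]
True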
Next, we show that the above matrix function also satisfies the initial
conditions \( {\bf \Phi} (0) = {\bf I} , \)
where I is the identity matrix.
MatrixExp[A 0]
Out[8]= {{1,0,0},{0,1,0},{0,0,1}}
Note that instead of Dt, we can use the partial
derivative operator: D[function,t]
The general solution of the vector equation \(
\dot{\bf y} (t) = {\bf A}\, {\bf y} \) is
\( {\bf y}(t) = e^{{\bf A}\,t} \,{\bf c} ,
\) where c is a vector of arbitrary constants. We
find its solution with Mathematica:
CC = {c1, c2, c3} (* vector of arbitrary constants *)
(* note: the symbol C cannot be used because Mathematica reserves it for constants of integration *)
MatrixExp[A t].CC
Example 4:
Here is a system of two ODEs whose coefficient matrix has real and
distinct positive eigenvalues. In this case, the origin, which is the only
critical point, is called a node. We write this system in vector form:
\[
\dot{\bf v} = {\bf A}\,{\bf v} ,
\]
where \( {\bf v} = ( x, y )^{\mathrm T} \) is a 2-column vector, and A is the 2 × 2 matrix:
\( {\bf A} = \begin{bmatrix} 1&-1 \\ 3&\phantom{-}5
\end{bmatrix} . \)
A = {{1, -1}, {3, 5}}
Out[1]= {{1, -1}, {3, 5}}
Eigenvalues[A]
Out[2]= {4, 2}
The matrix A has two real positive distinct
eigenvalues (λ = 4 and λ = 2). Therefore, the critical
point is an unstable nodal source.
Our objective is to visualize some of the particular solutions in
order to see how the family of general solutions behaves. To do this, we
must first put the equation into Mathematica using the
command
MapThread.
Here we use the command MapThread, which applies an operator or function to corresponding components of lists. In our case, MapThread builds a symbolic equation whose left-hand side is the derivative of a vector and whose right-hand side is a matrix times a vector. Then we find the general solution using the standard Mathematica command DSolve:
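A sketch of this approach (the variable names are our choice):
A = {{1, -1}, {3, 5}};
vars = {x[t], y[t]};
odes = MapThread[Equal, {D[vars, t], A . vars}]; (* {x'[t] == x[t] - y[t], y'[t] == 3 x[t] + 5 y[t]} *)
sol = DSolve[odes, {x, y}, t]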
As we see, the general solution depends on two arbitrary constants C[1] and C[2]. We assign numerical values to these arbitrary constants to obtain a family of particular solutions, and then plot them.
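For instance (the sampled values of the constants and the plotting window are our choices; the name curves is reused below when the family is combined with the direction field):
particular = Flatten[Table[{x[t], y[t]} /. First[sol] /. {C[1] -> c1, C[2] -> c2},
   {c1, -2, 2, 2}, {c2, -2, 2, 2}], 1];
curves = ParametricPlot[Evaluate[particular], {t, -6, 0.8},
   PlotRange -> {{-5, 5}, {-5, 5}}]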
There is another way to visualize the general solution using initial conditions. To find the solution depending on the initial conditions x(0) = a and y(0) = b, we type in Mathematica
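The input consistent with the output shown next is
DSolve[{x'[t] == x[t] - y[t], y'[t] == 3 x[t] + 5 y[t], x[0] == a, y[0] == b}, {x, y}, t]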
Out[1]= {{ x-> Function[{t}, - (1/2) Exp[2 t] (-3 a - b + a E^(2 t) + b E^(2 t)) ],
y-> Function[{t}, (1/2) E^(2 t) (-3 a - b + 3 a E^(2 t) + 3 b E^(2 t))]}}
We can plot the solution along with the direction field:
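Here sp denotes the direction (stream) field and curves the family of particular solutions plotted above; a minimal sketch of the field:
sp = StreamPlot[{x - y, 3 x + 5 y}, {x, -5, 5}, {y, -5, 5}];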
Show[sp, curves]
The direction field shows that the origin is unstable.
This suggests that the solutions all run off toward infinity, no matter where they start, although some solutions turn
around before taking their final run to infinity.
We want to show this instability by constructing a finite number of solutions, which we can obtain from the general solution by specifying initial conditions. We choose, for instance, 8 points as initial conditions through which the solution curves will pass:
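A possible choice of points (assumed here for illustration; any points surrounding the origin would do), stored under the name initlist2 that is used below:
initlist2 = {{5, 1}, {-5, -1}, {1, 5}, {-1, -5}, {4, -4}, {-4, 4}, {0, 3}, {0, -3}};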
For constant coefficient differential equations, solutions exist for
all t from minus infinity to infinity. However, when we use initial
conditions at our specified 8 points, we encounter the problem that
our solutions diverge to infinity. Obviously, our plotting window
cannot be expanded infinitely to show all solutions. As t goes to
negative infinity, our solution approaches the origin, which means we
need to incorporate negative time in our plotting. We use t = -6 for simplicity. We take the coordinates in the above list to be initial positions at t = 0, and then we integrate
over a time interval {-6, 0}. NDSolve will begin the integration at t = 0, and then go back in time to t = -6, thus tracing the
solution back in time. Let's try this for one solution before generating all 8. First we define the equation as a function of the initial values.
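A sketch of such a definition (the name soln is our choice):
soln[x0_, y0_] := NDSolve[{x'[t] == x[t] - y[t], y'[t] == 3 x[t] + 5 y[t],
   x[0] == x0, y[0] == y0}, {x, y}, {t, -6, 0}]
soln[5, 1] (* integrates backward in time from {5, 1} at t = 0 down to t = -6 *)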
We see that this solution starts very near the origin at t = -6, and has progressed to the point {5,1} when t has increased to 0.
Now we construct graphs of all of the other solutions with the given
initial conditions in initlist2 -- 8 solutions in all based on our
8 initial conditions.
We first define a routine that calculates and then graphs a solution for a given set of initial conditions.
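A sketch, building on the soln function defined above:
graph2[x0_, y0_] := ParametricPlot[Evaluate[{x[t], y[t]} /. First[soln[x0, y0]]],
   {t, -6, 0}, PlotRange -> {{-6, 6}, {-6, 6}}]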
Upon typing graph2[5, 1], we get exactly the same
figure using different code. Our routine works, but when we apply it to get our 8
solutions, we are going to get individual graphs of each. Instead, we
want to create a composite graph of all 8 solutions. We can do this by
telling our basic graph routine NOT to display the graph it
creates. This is accomplished using the option DisplayFunction
-> Identity.
Our modification has been successful, so we can now use
the Show command
with the option DisplayFunction->$DisplayFunction, which tells Mathematica to show the graph.
Now we are going to use a Do loop to create all 8 graphs, without showing them individually. At the end we will
show them all together, and after that, we will combine them with the direction field. We collect the graphs in a list named
graphlist2.
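A sketch of the loop (in Mathematica 6 and later, plots assigned inside a loop are not displayed, so the DisplayFunction option may be omitted):
graphlist2 = {};
Do[AppendTo[graphlist2, graph2[p[[1]], p[[2]]]], {p, initlist2}];
Show[graphlist2, PlotRange -> {{-6, 6}, {-6, 6}}]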
We now consider the system \( \dot{\bf y} (t) = {\bf A}\, {\bf y} , \) where the square matrix A has a complex eigenvalue λ = α + jβ with β ≠ 0. As usual, we assume that A has real entries, so the characteristic polynomial det(λI −A) of matrix A has real coefficients. This implies that the complex conjugate α − jβ is also an eigenvalue of A.
An eigenvector x of A associated with λ = α + jβ has complex entries,
\[
{\bf x} = {\bf u} + {\bf j}\,{\bf v} ,
\]
where u and v have real entries; so u and v are the real and imaginary parts of x. Since A x = λx and A is real, taking complex conjugates yields
\[
{\bf A} \left( {\bf u} - {\bf j}\,{\bf v} \right) = \left( \alpha - {\bf j}\,\beta \right) \left( {\bf u} - {\bf j}\,{\bf v} \right) ,
\]
which shows that u − jv is an eigenvector associated with \( \overline{\lambda} = \alpha - {\bf j}\beta . \)
Theorem 6:
Let A be an n × n matrix with real entries. Let λ = α + jβ (β ≠ 0) be a complex eigenvalue of A and let x = u + j v be an associated eigenvector, where u and v have real components. Then u and v are both nonzero, and the functions
\[
{\bf y}_1 = e^{\alpha\,t} \left( {\bf u} \,\cos \beta t - {\bf v}\,\sin \beta t \right) \qquad \mbox{and} \qquad {\bf y}_2 = e^{\alpha\,t} \left( {\bf u}\,\sin \beta t + {\bf v}\,\cos \beta t \right) ,
\]
which are the real and imaginary parts of
\[
e^{\alpha\,t} \left( \cos \beta t + {\bf j}\,\sin \beta t \right) \left( {\bf u} + {\bf j}\,{\bf v}\right)
\]
are linearly independent solutions of \(
\dot{\bf y} (t) = {\bf A}\, {\bf y} . \)
The spiraling behavior is typical for systems with complex eigenvalues.
Example 6:
Consider a linear system of ordinary differential equations
\[
\dot{x} = x-y , \qquad \dot{y} = 5\,x -y.
\]
The corresponding matrix \( {\bf A} = \begin{bmatrix} 1 & -1 \\ 5 & -1 \end{bmatrix} \) has two purely imaginary eigenvalues \( \lambda = \pm 2{\bf j} , \) where j is the unit vector in the positive vertical direction on the complex plane.
Using the standard Mathematica command StreamPlot, we visualize a phase portrait for the given system:
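A minimal version (the plotting range is our choice):
StreamPlot[{x - y, 5 x - y}, {x, -3, 3}, {y, -3, 3}]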
As we see, trajectories circle around the origin in a counterclockwise direction. If we use the negative of matrix A, trajectories will circle the origin in a clockwise direction.
If matrix A is invertible, then the only critical point of the homogeneous linear differential equation dy/dt = Ay is the origin. However, when matrix A is singular, Eq.\eqref{EqConstant.1} has infinitely many non-isolated critical points because the algebraic equation Ay = 0 defines the non-trivial null space of matrix A. We illustrate this situation with the following examples, which also demonstrate the Mathematica commands that are used to solve the system of differential equations \eqref{EqConstant.1}.
Example 7A:
The following matrix A has two eigenvalues, λ = 0 and λ = −5.
A = {{1, -3}, {2, -6}}
Eigenvalues[A]
{-5, 0}
Next, we determine its eigenvectors with Mathematica:
A = {{1, -3}, {2, -6}}
Eigenvectors[A]
{{1, 2}, {3, 1}}
So we know that the eigenspace corresponding to the eigenvalue λ = 0 is spanned by the vector [3, 1]T. Every vector from this eigenspace is a critical point for the system of equations \( \dot{\bf y} = {\bf A}\,{\bf y} . \) The general solution is
\[
{\bf y}(t) = c_1 \begin{bmatrix} 1 \\ 2 \end{bmatrix} e^{-5t} + c_2 \begin{bmatrix} 3 \\ 1 \end{bmatrix} ,
\]
where c = [c1, c2] is a vector of two arbitrary real constants. First, we plot some solutions, using a module for plotting solutions of a two-dimensional system of equations \eqref{EqConstant.1}.
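A sketch of such a module (its name plotSystem and the sample initial points are our choices):
plotSystem[mat_, pts_, {t0_, t1_}] := Module[{traj},
  traj = Table[NDSolveValue[{u'[t] == mat[[1, 1]] u[t] + mat[[1, 2]] v[t],
      v'[t] == mat[[2, 1]] u[t] + mat[[2, 2]] v[t],
      u[0] == p[[1]], v[0] == p[[2]]}, {u[t], v[t]}, {t, t0, t1}], {p, pts}];
  ParametricPlot[Evaluate[traj], {t, t0, t1}, PlotRange -> All]]
plotSystem[{{1, -3}, {2, -6}}, {{2, 2}, {-2, 1}, {4, -1}}, {-0.5, 2}]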
Example 8:
We consider three nilpotent matrices (a square matrix is called nilpotent if some power of it is the zero matrix; in the two-dimensional case, this means that both the trace and the determinant of the matrix must be zero).
We know that matrix T is a defective matrix with only one linearly independent eigenvector [2, 1]T corresponding to the double eigenvalue λ = 0.
Next, we plot the direction field for system (7.1) along with the separatrix (in red), which is the null space of matrix T.
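The matrices themselves are not reproduced here; as an illustration, the assumed matrix below has zero trace and zero determinant and the stated eigenvector [2, 1]T for the double eigenvalue 0:
T = {{2, -4}, {1, -2}}; (* assumed nilpotent matrix *)
T . T (* the zero matrix *)
StreamPlot[{2 x - 4 y, x - 2 y}, {x, -4, 4}, {y, -4, 4},
  Epilog -> {Red, Thick, Line[{{-4, -2}, {4, 2}}]}] (* separatrix: the null space spanned by {2, 1} *)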
We plot the direction field corresponding to matrix C.