Here the dot stands for the derivative with respect to the time
variable t. In other words, a fundamental matrix has n linearly
independent columns, each of which is a solution of the homogeneous
vector equation \( \dot{\bf x} (t) = {\bf
P}(t)\,{\bf x}(t) . \) Once a fundamental matrix is
determined, every solution to the system can be written as
\( {\bf x} (t) = {\bf
\Psi}(t)\,{\bf c} , \) for some constant vector c
(written as a column vector of height n). A product of a
fundamental matrix and a nonsingular constant matrix is again a
fundamental matrix. Therefore, a fundamental matrix is not unique.
Theorem 1:
If X(t) is a solution of the n × n
matrix differential equation \( \dot{\bf X} (t) = {\bf
P}(t)\,{\bf X}(t) , \) then for any constant
column-vector c, the n-vector u
= X(t) c is a solution of the vector equation
\( \dot{\bf x} (t) = {\bf
P}(t)\,{\bf x}(t) . \)
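For instance, Theorem 1 and the defining properties of a fundamental matrix can be checked in Mathematica on a small example; the matrix P and the candidate fundamental matrix X below are assumptions chosen only for illustration.
(* Hypothetical 2×2 example: P is constant here only for simplicity *)
P = {{0, 1}, {0, 0}};
X[t_] = {{1, t}, {0, 1}};                       (* candidate fundamental matrix *)
Simplify[D[X[t], t] == P.X[t]]                  (* True: X solves the matrix equation *)
Det[X[t]]                                       (* 1, so X(t) is nonsingular *)
c = {c1, c2};
Simplify[D[X[t].c, t] == P.(X[t].c)]            (* True: X(t) c solves the vector equation *)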
Theorem 2:
If an n × n matrix P(t) has
continuous entries on an open interval, then the vector differential
equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf
x}(t) \) has an n × n fundamental
matrix \( {\bf X} (t) = \left[ {\bf x}_1 (t) ,
{\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right] \) on the
same interval. Every solution x(t) to this system can
be written as a linear combination of the column vectors of the fundamental
matrix in a unique way:
\[
{\bf x} (t) = c_1 {\bf x}_1 (t) + c_2 {\bf x}_2 (t) + \cdots + c_n {\bf x}_n (t) = {\bf X} (t)\, {\bf c} ,
\]
for appropriate constants c1, c2,
... , cn, where \( {\bf c} =
\left\langle c_1 , c_2 , \ldots , c_n \right\rangle^{\mathrm T}
\) is a column vector of these constants.
The above representation of a solution as a linear combination of
linearly independent vector functions is referred to as the general
solution to the homogeneous vector differential
equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf
x}(t) . \)
Theorem 3:
The general solution of a nonhomogeneous linear vector equation
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf
f} (t) \) is the sum of the general solution of the complementary
homogeneous equation
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t)
\) and a particular solution of the inhomogeneous
equation. That is, every solution to
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf
f} (t) \) is of the form
\[
{\bf x} (t) = {\bf x}_h (t) + {\bf x}_p (t) ,
\]
where xh (t)
is the general solution of the homogeneous linear equation
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t)
\) and xp (t) is a particular
solution of the nonhomogeneous equation
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf
f} (t) . \)
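To see this decomposition concretely, the following computation solves an assumed 2 × 2 constant-coefficient system (chosen for simplicity) with DSolveValue and exhibits its homogeneous and particular parts.
(* Hypothetical system: x1' = x2, x2' = 1, i.e. P = {{0,1},{0,0}}, f = {0,1} *)
sol = DSolveValue[{x1'[t] == x2[t], x2'[t] == 1, x1[0] == c1, x2[0] == c2},
   {x1[t], x2[t]}, t]
(* {c1 + c2 t + t^2/2, c2 + t}: the homogeneous part {c1 + c2 t, c2}
   plus the particular solution {t^2/2, t} *)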
Theorem 4:
Superposition Principle for inhomogeneous equations
Let P(t) be an n × n matrix function
that is continuous on an interval [a,b], and
let x1(t) and x2(t)
be two vector solutions of the nonhomogeneous equations
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f}_1 (t) \) and
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f}_2 (t) , \)
respectively. Then their sum \( {\bf x} (t) = {\bf
x}_1 (t) + {\bf x}_2 (t) \) is a solution of the nonhomogeneous equation
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf f}_1 (t) + {\bf f}_2 (t) . \)
Corollary 1:
The difference between any two solutions of the nonhomogeneous vector
equation \( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) + {\bf
f} (t) \) is a solution of the complementary homogeneous
equation
\( \dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) . \)
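A quick symbolic check of the superposition principle on an assumed 2 × 2 example (the matrix, forcing terms, and particular solutions below are hypothetical, chosen so they can be verified by hand):
(* Assumed data: P = {{0,1},{0,0}}, f1 = {1,0}, f2 = {0,1} *)
P = {{0, 1}, {0, 0}};  f1 = {1, 0};  f2 = {0, 1};
s1[t_] = {t, 0};         (* solves x' = P.x + f1 *)
s2[t_] = {t^2/2, t};     (* solves x' = P.x + f2 *)
Simplify[D[s1[t], t] == P.s1[t] + f1]                          (* True *)
Simplify[D[s2[t], t] == P.s2[t] + f2]                          (* True *)
Simplify[D[s1[t] + s2[t], t] == P.(s1[t] + s2[t]) + f1 + f2]   (* True *)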
Example 1:
It is not hard to verify that the vector functions
\( {\bf x}_1 (t) = \left[ 1 , t \right]^{\mathrm T} \) and
\( {\bf x}_2 (t) = \left[ t^2 , t \right]^{\mathrm T} \)
are linearly independent.
Therefore, the corresponding fundamental matrix is
\[
{\bf X} (t) = \begin{bmatrix} 1 & t^2 \\ t & t \end{bmatrix}, \qquad
\det {\bf X} (t) = t - t^3 = t \left( 1 - t^2 \right) .
\]
■
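This determinant can be reproduced with a short Mathematica computation:
X[t_] = {{1, t^2}, {t, t}};
Det[X[t]]            (* t - t^3 *)
Factor[Det[X[t]]]    (* equivalent to t (1 - t^2), up to the ordering of factors *)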
The determinant \( W(t) = \det\,{\bf X}(t)
\) of a square matrix \( {\bf X}(t) =
\left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right]
\) formed from the set of n vector
functions x1, x2,
... , xn is called the Wronskian of these
column vectors.
Theorem 5:
[N. Abel]
Let P(t) be an n × n matrix function
with entries pij(t) (i,j = 1,2,
... ,n) that are continuous functions on some
interval. Let xk(t), k = 1,2,
... , n, be n solutions to the homogeneous vector
differential equation \( \dot{\bf x} (t) = {\bf
P}(t)\,{\bf x}(t) . \) Then the Wronskian of the set of
vector solutions is
\[
W(t) = \det \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right] = W(t_0 )\, \exp \left\{ \int_{t_0}^t \mbox{tr}\, {\bf P}(s)\, {\rm d} s \right\} ,
\]
with t0 being a point within an interval where the
trace tr P(t) = p11
+ p22 + ... + pnn is continuous.
■
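Abel's formula is easy to verify in Mathematica; the constant diagonal matrix below is an assumed example, chosen so that the trace integral is elementary.
(* Assumed constant matrix with trace 3 *)
A = {{1, 0}, {0, 2}};
X[t_] = MatrixExp[A t];                          (* a fundamental matrix *)
w[t_] = Simplify[Det[X[t]]]                      (* E^(3 t) *)
Simplify[w[t] == w[0] Exp[Integrate[Tr[A], {s, 0, t}]]]   (* True *)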
Corollary 2:
Let x1(t), x2(t),
... , xn(t) be column solutions of the
homogeneous vector equation \( \dot{\bf x} (t) =
{\bf P}(t)\,{\bf x}(t) \) on some interval |a,b|, where
n × n matrix function P(t) is
continuous. Then the corresponding matrix \( {\bf
X}(t) = \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n
(t) \right] \) of these column vectors is either singular
for all t ∈ |a,b| or else nonsingular for all t ∈ |a,b|. In other
words, det X(t) is either identically zero or it never
vanishes on the interval |a,b|.
Corollary 3:
Let P(t) be an n × n matrix function
that is continuous on an interval |a,b|. If
\( \left\{ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots ,
{\bf x}_n (t) \right\} \) is a linearly independent set of
solutions to the homogeneous differential equation \(
\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) \) on |a,b|, then
the Wronskian \( W(t) = \det \left[ {\bf x}_1 (t) , {\bf x}_2 (t) , \ldots , {\bf x}_n (t) \right] \) does not vanish at any point of |a,b|.
The general solution of the homogeneous equation is
\[
{\bf x} (t) = {\bf X} (t)\, {\bf c} ,
\]
where \( {\bf c} = \left\langle c_1 , c_2 , \ldots , c_n
\right\rangle^{\mathrm T} \) is the column vector of arbitrary
constants. To satisfy the initial condition \( {\bf x} (t_0 ) = {\bf x}_0 , \) we set
\( {\bf c} = {\bf X}^{-1} (t_0 )\, {\bf x}_0 , \) so that
\[
{\bf x} (t) = {\bf X} (t)\, {\bf X}^{-1} (t_0 )\, {\bf x}_0 .
\]
The square matrix \( {\bf \Phi} (t, s) = {\bf X} (t)\,
{\bf X}^{-1} (s) \) is usually referred to as a
propagator matrix.
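For instance, with the fundamental matrix from Example 1 (valid on any interval where det X(s) ≠ 0), the propagator matrix can be formed symbolically:
X[t_] = {{1, t^2}, {t, t}};
Phi[t_, s_] = Simplify[X[t].Inverse[X[s]]];
Phi[t, s] // MatrixForm
Simplify[Phi[t, t]]          (* the 2×2 identity matrix *)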
Theorem 6:
Let X(t) be a fundamental matrix for the homogeneous linear
system \( \dot{\bf x} = {\bf P}(t)\,{\bf x} (t) , \)
meaning that X(t) is a solution of the matrix equation
\( \dot{\bf X} = {\bf P}(t)\,{\bf X} (t) \) and
det X(t) ≠ 0. Then the unique solution of the initial value
problem
\[
\dot{\bf x} (t) = {\bf P}(t)\,{\bf x}(t) , \qquad {\bf x} (t_0 ) = {\bf x}_0 ,
\]
is
\[
{\bf x} (t) = {\bf \Phi} (t, t_0 )\, {\bf x}_0 = {\bf X} (t)\, {\bf X}^{-1} (t_0 )\, {\bf x}_0 , \qquad {\bf \Phi} (t_0 , t_0 ) = {\bf I} ,
\]
where I is the identity matrix. Hence,
Φ(t, t0) is a fundamental matrix of the
homogeneous vector differential equation
\( \dot{\bf x} = {\bf P}(t)\,{\bf x} (t) . \)
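A quick symbolic confirmation of Theorem 6, using an assumed constant matrix A = {{0, 1}, {-1, 0}} so that a fundamental matrix is available in closed form:
A = {{0, 1}, {-1, 0}};                      (* assumed constant coefficient matrix *)
X[t_] = MatrixExp[A t];
Phi[t_, s_] = Simplify[X[t].Inverse[X[s]]]; (* propagator matrix *)
x0 = {a, b};                                (* symbolic initial vector *)
sol[t_] = Phi[t, t0].x0;                    (* candidate solution of the IVP *)
Simplify[D[sol[t], t] == A.sol[t]]          (* True: the differential equation holds *)
Simplify[sol[t0] == x0]                     (* True: the initial condition holds *)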
Corollary 5:
Let X(t) and Y(t) be two fundamental matrices of
the homogeneous vector equation
\( \dot{\bf x} = {\bf P}(t)\,{\bf x} (t) . \)
Then there exists a nonsingular constant square matrix C such that
\( {\bf X} (t) = {\bf Y} (t)\, {\bf C} , \ \det{\bf C}
\ne 0 . \) This means that a fundamental matrix of the matrix equation
\( \dot{\bf X} = {\bf P}(t)\,{\bf X} (t) \) is unique up to multiplication on the right by a nonsingular constant matrix.
■
Consider an autonomous linear vector differential equation of the form
\[
\dot{\bf y} (t) = {\bf A}\, {\bf y} (t) ,
\]
where A is a square n × n matrix and
y(t) is an (n × 1)-column vector of n
unknown functions. Here we use a dot to represent the derivative with respect
to t. A solution of the above equation is a curve in n-dimensional space; it is called an integral curve, a trajectory, a streamline, or an orbit. When the independent variable t is associated with time (which is usually the case), we can call a solution
y(t) the state of the system at time t. Since a
constant matrix A is continuous on any interval, all solutions of the
system \( \dot{\bf y} (t) = {\bf A} \, {\bf y} (t)
\) are determined on ( -∞ , ∞ ). Therefore, when
we speak of solutions to the vector equation
\( \dot{\bf y} (t) = {\bf A} \, {\bf y} (t) , \)
we consider solutions on the real axis.
Any fundamental matrix is a constant (nonsingular) matrix multiple of the exponential matrix:
\[
{\bf X} (t) = e^{{\bf A}\,t} \, {\bf C} , \qquad \det {\bf C} \ne 0 .
\]
Mathematica has a couple of options to determine a fundamental matrix.
It has a built-in command MatrixExp[A t] that determines a fundamental matrix for any square matrix A. For a diagonalizable matrix A, another way to find a fundamental matrix
is to use a two-line approach:
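A minimal sketch of such a construction, using an assumed sample matrix {{4, 1}, {2, 3}} chosen for illustration; the two-line approach is the Eigensystem call and the line that builds X(t), while the last three lines are optional checks.
A = {{4, 1}, {2, 3}};                                  (* assumed sample matrix *)
{vals, vecs} = Eigensystem[A];                         (* eigenvalues {5, 2} and eigenvectors *)
X[t_] = Transpose[vecs].DiagonalMatrix[Exp[vals t]];   (* columns are eigenvectors times Exp[eigenvalue*t] *)
Simplify[D[X[t], t] == A.X[t]]                         (* True: X is a solution of the matrix equation *)
Simplify[Det[X[t]]]                                    (* nonzero for all t *)
Simplify[Inverse[MatrixExp[A t]].X[t]]                 (* a nonsingular constant matrix C, so X(t) = MatrixExp[A t].C *)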
Now we check that the exponential matrix is a solution of the matrix differential equation \eqref{EqExp.1}. We check it in two ways. First, we consider Sylvester's formula, which for a matrix A with distinct eigenvalues λ1, λ2, ... , λn leads to
\[
e^{{\bf A}\,t} = \sum_{k=1}^n e^{\lambda_k t} \prod_{j \ne k} \frac{{\bf A} - \lambda_j {\bf I}}{\lambda_k - \lambda_j} .
\]
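This identity can be confirmed in Mathematica for the sample matrix used above (again an assumption made only for illustration), together with a direct check of the matrix differential equation:
A = {{4, 1}, {2, 3}};                       (* assumed sample matrix with distinct eigenvalues *)
{l1, l2} = Eigenvalues[A];                  (* 5 and 2 *)
id = IdentityMatrix[2];
expA[t_] = Exp[l1 t] (A - l2 id)/(l1 - l2) + Exp[l2 t] (A - l1 id)/(l2 - l1);
Simplify[expA[t] == MatrixExp[A t]]         (* True: Sylvester's formula reproduces the exponential matrix *)
Simplify[D[expA[t], t] == A.expA[t]]        (* True: it satisfies the matrix differential equation *)
Simplify[expA[0] == id]                     (* True: it equals the identity matrix at t = 0 *)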