This section plays a preparatory role for the next section, which is devoted to (mostly square) roots of matrices. Guided by our four definitions of a matrix function (diagonalization, Sylvester's formula, the resolvent method, and polynomial interpolation), all of which rely on eigenvalues, we introduce a wide class of positive matrices that includes the positive definite matrices standard in mathematics.
Many applications of matrix theory require functions involving (square) roots; therefore, we need to extract from the wide range of arbitrary square matrices a subclass for which such functions can be defined. Since we deal with matrix functions built from eigenvalues, it is natural to work with matrices whose eigenvalues are positive. So we introduce the following definition, which differs from similar definitions used in other areas of application.
A square matrix A is
called operator positive/negative if all its eigenvalues are positive/negative. A square matrix A is
called operator nonnegative if all its eigenvalues are nonnegative.
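This property is easy to test in Mathematica. Here is a minimal sketch (the helper name operatorPositiveQ is our own, not a built-in command) that requires every eigenvalue to be real and positive:
operatorPositiveQ[m_?SquareMatrixQ] := AllTrue[Eigenvalues[m], Element[#, Reals] && # > 0 &]
operatorPositiveQ[{{2, 1}, {0, 3}}]   (* eigenvalues are 3 and 2 *)
True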
Our definition of operator positivity parallels the convention for differential operators, which are called positive when all their eigenvalues are positive. It would be natural to call operator positive matrices simply positive, but that name is already taken by matrices with positive entries.
If an operator positive matrix A is symmetric or self-adjoint, then it is referred to as a positive definite matrix because the inner product \( \langle {\bf u} , {\bf A}\,{\bf u} \rangle > 0 \) for any nonzero vector \( {\bf u} \in \mathbb{R}^n \). This inner product can be used to define a norm associated with the matrix A:
\[
\| {\bf x} \|^2_A = \left\langle {\bf x}, {\bf A\,x} \right\rangle = \sum_{i=1}^n x_i \sum_{j=1}^n a_{i,j} x_j , \qquad {\bf x} \in \mathbb{R}^n .
\]
A standard definition of a positive definite matrix requires that the Hermitian part of the matrix A,
\[
{\bf A}_H = \frac{1}{2} \left( {\bf A} + {\bf A}^{\ast} \right) ,
\qquad {\bf A}^{\ast} = \overline{\bf A}^{\mathrm T} ,
\]
have positive eigenvalues. Equivalently, it is necessary and sufficient that the real part of the quadratic form be positive:
\[
\Re \left[ {\bf x}^{\ast} {\bf A}\,{\bf x} \right] >0 \qquad \mbox{for
all nonzero complex vectors } {\bf x} \in \mathbb{C}^n .
\]
In the case of a real matrix A, this is equivalent to
\[
{\bf x}^{\mathrm T} {\bf A}\,{\bf x} >0 \qquad \mbox{for
all nonzero real vectors } {\bf x} \in \mathbb{R}^n
\]
because its symmetric part
\( {\bf A}_S =
\frac{1}{2} \left( {\bf A} + {\bf A}^{\mathrm T} \right) \)
is a
positive
definite matrix.
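As a quick illustration (with an example matrix of our own choosing), the following non-symmetric matrix is positive definite in this standard sense because its symmetric part has positive eigenvalues:
A = {{3, 0}, {1, 2}};
As = (A + Transpose[A])/2;   (* symmetric part {{3, 1/2}, {1/2, 2}} *)
N[Eigenvalues[As]]
{3.20711, 1.79289}
PositiveDefiniteQ[A]
True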
A square matrix A is called positive/negative if all its entries are positive/negative numbers. Positive matrices are used in probability theory, in particular, in the study of Markov chains. If A is a positive matrix, then -A is a negative matrix.
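A test for this entrywise notion of positivity is a one-liner (the helper name positiveEntriesQ is our own, not a built-in command):
positiveEntriesQ[m_] := AllTrue[Flatten[m], Positive]
positiveEntriesQ[{{1, 2}, {3, 4}}]
True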
Mathematica has a dedicated command to check whether a given matrix is positive definite (in the traditional sense) or not:
PositiveDefiniteQ[a = {{1, -3/2}, {0, 1}}]
True
Self-adjoint matrices can be checked for positive definiteness with the following script, which tests several necessary conditions (for 2×2 self-adjoint matrices they are also sufficient):
PositiveDefiniteHermitianQ[m_] :=
 Module[{f = Flatten[m], n = Length[m]},
  And @@ {
    (* every diagonal entry must be positive *)
    And @@ Positive /@ Table[m[[i, i]], {i, n}],
    (* every off-diagonal entry must satisfy |m[[i,j]]|^2 < m[[i,i]] m[[j,j]] *)
    And @@ Flatten@Table[
      i == j || m[[i, i]] m[[j, j]] > Abs[m[[i, j]]]^2, {i, n}, {j, n}],
    (* the largest absolute value must be attained at a diagonal entry *)
    MemberQ[Length /@ Union /@ Position[m, Max[Abs[f]]], 1],
    (* the determinant must be positive *)
    Det[m] > 0
    }]
Then, applying it to a list of self-adjoint matrices (HermitianMatrixQ confirms that each is Hermitian):
HermitianMatrixQ /@ (l = { {{2,-I},{I,1}}, {{0,1}, {1,2}}, {{1,0},{0,-2}} })
{True, True, True}
MatrixForm /@ Select[l, PositiveDefiniteHermitianQ]
{{2,-I},{I,1}}
Example: Operator positive 2×2 matrix
Let us start with the following 2×2 matrix:
\[
{\bf A} = \begin{bmatrix} 13&-6 \\ -102&72
\end{bmatrix}.
\]
First, we check its eigenvalues with Mathematica:
A = {{13, -6}, {-102, 72}};
Eigenvalues[A]
{81, 4}
Since its eigenvalues, 81 and 4, are positive, the given matrix is operator positive. It is not, however, positive definite in the traditional sense: its symmetric part
\[
{\bf A}_S = \frac{1}{2} \left( {\bf A} + {\bf A}^{\mathrm T} \right) =
\begin{bmatrix} 13&-54 \\ -54&72
\end{bmatrix}
\]
has eigenvalues
\[
\lambda_1 = \frac{1}{2} \left( 85 + \sqrt{15145} \right) \approx
104.033 \qquad \mbox{and} \qquad \lambda_2 = \frac{1}{2} \left( 85 -
\sqrt{15145} \right) \approx -19.0325 .
\]
Therefore, the condition
\[
{\bf x}^{\mathrm T} {\bf A}\,{\bf x} >0
\]
is violated; for instance, for the vector \( {\bf x} = [1, 1]^{\mathrm T} \) we get
\( {\bf x}^{\mathrm T} {\bf A}\,{\bf x} = -23 . \)
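This is easy to confirm directly in Mathematica:
{1, 1}.A.{1, 1}
-23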
Next, we build some functions of the given matrix, starting with square roots. Since matrix A has two distinct (real) eigenvalues, it is diagonalizable, and Sylvester's method is appropriate in this case. So we construct the resolvent
\( {\bf R}_{\lambda} ({\bf A}) = \left( \lambda
{\bf I} - {\bf A} \right)^{-1} \)
A = {{13, -6}, {-102, 72}};
Resolvent = Factor[Inverse[\[Lambda]*IdentityMatrix[2] - A]];
\[
{\bf R}_{\lambda} ({\bf A}) = \left( \lambda
{\bf I} - {\bf A} \right)^{-1} = \frac{1}{(\lambda -81)(\lambda -4)}
\begin{bmatrix} \lambda -72&-6 \\ -102&\lambda -13
\end{bmatrix}.
\]
and Sylvester's auxiliary matrices:
z4=Factor[(\[Lambda] - 4)*Resolvent] /. \[Lambda] -> 4;
z81=Factor[(\[Lambda] - 81)*Resolvent] /. \[Lambda] -> 81;
\[
{\bf Z}_4 = \frac{{\bf A} - 81\,{\bf I}}{4 - 81} = \frac{1}{77}
\begin{bmatrix} 68&6 \\ 102&9 \end{bmatrix} , \qquad
{\bf Z}_{81} = \frac{{\bf A} - 4\,{\bf I}}{81-4} = \frac{1}{77}
\begin{bmatrix} 9&-6 \\ -102& 68 \end{bmatrix} .
\]
These two auxiliary matrices are projectors:
z4.z4 - z4
{{0, 0}, {0, 0}}
z81.z81 - z81
{{0, 0}, {0, 0}}
z4.z81
{{0, 0}, {0, 0}}
z4+z81
{{1, 0}, {0, 1}}
Then we find the four square roots \( \pm 2\,{\bf Z}_4 \pm 9\,{\bf Z}_{81} \) (the signs are chosen independently):
r1= 2*z4+9*z81
Out[6]= {{31/11, -(6/11)}, {-(102/11), 90/11}}
r2= 2*z4-9*z81
Out[7]= {{5/7, 6/7}, {102/7, -(54/7)}}
r3= -2*z4+9*z81
Out[8]= {{-(5/7), -(6/7)}, {-(102/7), 54/7}}
r4= -2*z4-9*z81
Out[9]= {{-(31/11), 6/11}, {102/11, -(90/11)}}
We check the answers with the standard Mathematica command:
MatrixPower[A, 1/2]
Out[10]= {{31/11, -(6/11)}, {-(102/11), 90/11}}
which is just root r1. So Mathematica does not provide the other square roots, only the principal one.
We construct two functions of the matrix A:
\[
{\bf \Phi}(t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf
A}} , \qquad\mbox{and}\qquad {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf
A} \right) .
\]
phi[t_]= (Sin[2*t]/2)*z4 + (Sin[9*t]/9)*z81
Out[11]=
{{34/77 Sin[2 t] + 1/77 Sin[9 t],
3/77 Sin[2 t] - 2/231 Sin[9 t]}, {51/77 Sin[2 t] - 34/231 Sin[9 t],
9/154 Sin[2 t] + 68/693 Sin[9 t]}}
psi[t_]= Cos[2*t]*z4 + Cos[9*t]*z81
Out[12]=
{{68/77 Cos[2 t] + 9/77 Cos[9 t],
6/77 Cos[2 t] - 6/77 Cos[9 t]}, {102/77 Cos[2 t] - 102/77 Cos[9 t],
9/77 Cos[2 t] + 68/77 Cos[9 t]}}
Finally, we show that these two matrix functions, Φ(t) and Ψ(t), are solutions of the following initial value problems for the second-order matrix differential equation
\[
\ddot{\bf \Phi}(t) + {\bf A} \,{\bf \Phi}(t) = {\bf 0} , \quad {\bf
\Phi}(0) = {\bf 0} , \ \dot{\bf \Phi}(0) = {\bf I} ; \qquad
\ddot{\bf \Psi}(t) + {\bf A} \,{\bf \Psi}(t) = {\bf 0} , \quad {\bf
\Psi}(0) = {\bf I} , \ \dot{\bf \Psi}(0) = {\bf 0} .
\]
D[phi[t], t] /. t -> 0
{{1, 0}, {0, 1}}
phi[0]
{{0, 0}, {0, 0}}
Simplify[D[phi[t], {t, 2}] + A.phi[t]]
{{0, 0}, {0, 0}}
Similar computations verify the initial value problem for the other matrix function.
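Here is the analogous check for Ψ(t), with the expected outputs shown after each command:
psi[0]
{{1, 0}, {0, 1}}
D[psi[t], t] /. t -> 0
{{0, 0}, {0, 0}}
Simplify[D[psi[t], {t, 2}] + A.psi[t]]
{{0, 0}, {0, 0}}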
■
Example: Positive definite 3×3 matrix
For a positive definite matrix
\( {\bf A} = \begin{bmatrix}
7&0&-4 \\ -2&4&5 \\ 1&0&2 \end{bmatrix}, \)
show that the inner product
\( \left( {\bf A}\,{\bf x} , {\bf x} \right) \) is positive for any
nonzero vector x.
First, we check that all eigenvalues of the given matrix are positive:
A = {{7, 0, -4}, {-2, 4, 5}, {1, 0, 2}};
Eigenvalues[A]
Out[2]= {6, 4, 3}
Its symmetric part
\[
{\bf A}_S = \frac{1}{2} \left( {\bf A} + {\bf A}^{\mathrm T} \right) =
\begin{bmatrix} 7&-1&-3/2 \\ -1&4&5/2 \\
-3/2&5/2& 2
\end{bmatrix}
\]
has three positive eigenvalues, which we check with Mathematica:
N[Eigenvalues[(A + Transpose[A])/2]]
{8.17922, 4.58759, 0.233191}
Then we form the inner product of A x with an arbitrary vector x:
x = {x1, x2, x3};
Ax = A.x
Out[4]= {7 x1 - 4 x3, -2 x1 + 4 x2 + 5 x3, x1 + 2 x3}
z = Simplify[Ax.x]
Out[5]= 7 x1^2 - 2 x1 x2 + 4 x2^2 - 3 x1 x3 + 5 x2 x3 + 2 x3^2
We want to show that this expression is a sum of squares with positive coefficients. We first try to absorb the cross terms into
\( \left( a\,x_1 + d\,x_2 \right)^2 + \left( e\,x_1
+ f\,x_2 - g\, x_3 \right)^2 , \) for some scalars
a, d, e, f, and g. To find the numerical values of these scalars, we compare the two sides of the tentative equation:
\[
{\bf A}\,{\bf x}.{\bf x} = \left( a\,x_1 + d\,x_2 \right)^2 + \left( e\,x_1
+ f\,x_2 - g\, x_3 \right)^2 .
\]
zz = Factor[(a*x1 + d*x2)^2 + (e*x1 + f*x2 - g*x3)^2]
Out[6]=
a^2 x1^2 + e^2 x1^2 + 2 a d x1 x2 + 2 e f x1 x2 + d^2 x2^2 +
f^2 x2^2 - 2 e g x1 x3 - 2 f g
x2 x3 + g^2 x3^2
eqn = Simplify[{z == zz}]
Out[7]=
{(-7 + a^2 + e^2) x1^2 + (-4 + d^2 + f^2) x2^2 + (-2 + g^2) x3^2 +
x1 (2 (1 + a d + e f) x2 + (3 - 2 e g) x3) == (5 + 2 f g) x2 x3}
To eliminate all terms containing the products
x1 x2, x1 x3, and x2 x3, we set
a = (-1 - e*f)/d
d = Sqrt[4 - f^2]
e = 3/(2*g)
f = -5/(2*g)
g = Sqrt[2]
With these values, \( a^2 + e^2 = 2 \), so a term \( 5\,x_1^2 \) is left over, and we arrive at the following identity for the matrix A:
\[
\left( {\bf A}\,{\bf x} , {\bf x} \right) = 5\,x_1^2 + \frac{7}{8}
\left( x_1 + x_2 \right)^2 + \frac{1}{8} \left( 3\,x_1
- 5\,x_2 - 4\, x_3 \right)^2 ,
\]
which we verify with Mathematica:
right = 5*x1^2 + (7/8)*(x1 + x2)^2 + (3*x1 - 5*x2 - 4*x3)^2/8;
Simplify[z - right]
0
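The same kind of decomposition can also be produced mechanically. Since the antisymmetric part of A contributes nothing to the quadratic form, a Cholesky factorization of the symmetric part writes \( \left( {\bf A}\,{\bf x} , {\bf x} \right) \) as a sum of squares automatically; a minimal sketch (CholeskyDecomposition returns an upper triangular factor u with Transpose[u].u equal to the symmetric part):
As = (A + Transpose[A])/2;
u = CholeskyDecomposition[As];
Simplify[Ax.x - (u.x).(u.x)]
0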
■
Example: Operator positive 3×3 matrix
Consider the operator positive matrix
\[
{\bf A} = \begin{bmatrix} 1&4&16 \\ 18& 20& 4 \\ -12& -14& -7 \end{bmatrix}
\]
which has three positive eigenvalues
\( \lambda_1 =1, \
\lambda_2 =4, \quad\mbox{and}\quad \lambda_3 = 9. \) Indeed,
A={{1, 4, 16}, {18, 20, 4}, {-12, -14, -7}};
Eigenvalues[A]
Out[2]= {9, 4, 1}
S = Eigenvectors[A]
Out[3]= {{1, -2, 1}, {4, -5, 2}, {4, -4, 1}}
S = Transpose[S]
Out[4]= {{1, 4, 4}, {-2, -5, -4}, {1, 2, 1}}
S // MatrixForm
\[ \begin{pmatrix} 1&4&4 \\ -2&-5&-4 \\ 1&2&1 \end{pmatrix} \]
Det[A]
Out[6]= 36
Recall that if you need to repeat a particular command, you don't need to retype it; just refer to its output number. Say you want to repeat the third command:
%3
Out[7]= {{1, -2, 1}, {4, -5, 2}, {4, -4, 1}}
We are going to find square roots of this matrix using three
different techniques: diagonalization, Sylvester's method (which
coincides with the resolvent method in this case), and the
polynomial interpolation method.
We start with the diagonalization procedure. To begin, we need to define diagonal matrices: the identity matrix and a matrix with the constant parameter λ on its diagonal (the latter can be entered in several equivalent ways). Therefore, we type in
ii = DiagonalMatrix[{1, 1, 1}]
L= DiagonalMatrix[{\[Lambda], \[Lambda], \[Lambda]}]
L = {{\[Lambda], 0, 0}, {0, \[Lambda], 0}, {0, 0, \[Lambda]}}
L = \[Lambda] * IdentityMatrix[3]
Out[2]= {{\[Lambda], 0, 0}, {0, \[Lambda], 0}, {0, 0, \[Lambda]}}
MatrixForm[L]
\[ \begin{pmatrix} \lambda&0&0 \\ 0&\lambda&0 \\ 0&0&\lambda \end{pmatrix} \]
Eigenvectors[A]
Out= {{1, -2, 1}, {4, -5, 2}, {4, -4, 1}}
Dd= {{9, 0, 0}, {0, 4, 0}, {0, 0, 1}}
Out= {{9, 0, 0}, {0, 4, 0}, {0, 0, 1}}
S = Transpose[Eigenvectors[A]]
S // MatrixForm
\[ \begin{pmatrix} 1&4&4 \\ -2&-5&-4 \\ 1&2&1 \end{pmatrix}
\]
Square roots can be obtained from the diagonalization formula; since the plus and minus signs are chosen independently for each eigenvalue, there are 2³ = 8 roots in total. (The PlusMinus symbols below are formal and are not evaluated.)
roots = S.DiagonalMatrix[{PlusMinus[Sqrt[Eigenvalues[A][[1]]]], PlusMinus[Sqrt[Eigenvalues[A][[2]]]], PlusMinus[Sqrt[Eigenvalues[A][[3]]]]}].Inverse[S]
Out[20]= {{-4 (\[PlusMinus]1) + 8 (\[PlusMinus]2) - 3 (\[PlusMinus]3), -8 (\[PlusMinus]1) + 12 (\[PlusMinus]2) - 4 (\[PlusMinus]3), -12 (\[PlusMinus]1) + 16 (\[PlusMinus]2) - 4 (\[PlusMinus]3)}, {4 (\[PlusMinus]1) - 10 (\[PlusMinus]2) + 6 (\[PlusMinus]3), 8 (\[PlusMinus]1) - 15 (\[PlusMinus]2) + 8 (\[PlusMinus]3), 12 (\[PlusMinus]1) - 20 (\[PlusMinus]2) + 8 (\[PlusMinus]3)}, {-\[PlusMinus]1 + 4 (\[PlusMinus]2) - 3 (\[PlusMinus]3), -2 (\[PlusMinus]1) + 6 (\[PlusMinus]2) - 4 (\[PlusMinus]3), -3 (\[PlusMinus]1) + 8 (\[PlusMinus]2) - 4 (\[PlusMinus]3)}}
Now we can find all roots, one at a time:
root1 = S.DiagonalMatrix[{Sqrt[Eigenvalues[A][[1]]], Sqrt[Eigenvalues[A][[2]]], Sqrt[Eigenvalues[A][[3]]]}].Inverse[S]
Out[21]= {{3, 4, 8}, {2, 2, -4}, {-2, -2, 1}}
root2 = S.DiagonalMatrix[{-Sqrt[Eigenvalues[A][[1]]], Sqrt[Eigenvalues[A][[2]]], Sqrt[Eigenvalues[A][[3]]]}].Inverse[S]
Out[22]= {{21, 28, 32}, {-34, -46, -52}, {16, 22, 25}}
root3 = S.DiagonalMatrix[{-Sqrt[Eigenvalues[A][[1]]], -Sqrt[ Eigenvalues[A][[2]]], Sqrt[Eigenvalues[A][[3]]]}].Inverse[S]
Out[23]= {{-11, -20, -32}, {6, 14, 28}, {0, -2, -7}}
root4 = S.DiagonalMatrix[{-Sqrt[Eigenvalues[A][[1]]], Sqrt[Eigenvalues[A][[2]]], -Sqrt[Eigenvalues[A][[3]]]}].Inverse[S]
Out[24]= {{29, 44, 56}, {-42, -62, -76}, {18, 26, 31}}
To check the answer, we calculate the product
root3.root3
Out[25]= {{1, 4, 16}, {18, 20, 4}, {-12, -14, -7}}
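As in the 2×2 example, the built-in command MatrixPower should reproduce only the principal root (root1, whose eigenvalues 3, 2, 1 are all positive):
MatrixPower[A, 1/2]
{{3, 4, 8}, {2, 2, -4}, {-2, -2, 1}}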
Now we calculate the exponential matrix \( {\bf U} (t) = e^{{\bf A}\,t} , \) which we denote by U[t] in the Mathematica notebook.
expA = {{Exp[9*t], 0, 0}, {0, Exp[4*t], 0}, {0, 0, Exp[t]}}
U[t_] = S.expA.Inverse[S]
Out= {{-4 E^t + 8 E^(4 t) - 3 E^(9 t), -8 E^t + 12 E^(4 t) - 4 E^(9 t), -12 E^t + 16 E^(4 t) - 4 E^(9 t)}, {4 E^t - 10 E^(4 t) + 6 E^(9 t), 8 E^t - 15 E^(4 t) + 8 E^(9 t), 12 E^t - 20 E^(4 t) + 8 E^(9 t)}, {-E^t + 4 E^(4 t) - 3 E^(9 t), -2 E^t + 6 E^(4 t) - 4 E^(9 t), -3 E^t + 8 E^(4 t) - 4 E^(9 t)}}
Next, we show that this exponential matrix satisfies the first-order matrix differential equation
\( \dot{\bf U} (t) =
{\bf A}\,{\bf U} (t) . \)
D[U[t], t]
Out= {{-4 E^t + 32 E^(4 t) - 27 E^(9 t), -8 E^t + 48 E^(4 t) - 36 E^(9 t), -12 E^t + 64 E^(4 t) - 36 E^(9 t)}, {4 E^t - 40 E^(4 t) + 54 E^(9 t), 8 E^t - 60 E^(4 t) + 72 E^(9 t), 12 E^t - 80 E^(4 t) + 72 E^(9 t)}, {-E^t + 16 E^(4 t) - 27 E^(9 t), -2 E^t + 24 E^(4 t) - 36 E^(9 t), -3 E^t + 32 E^(4 t) - 36 E^(9 t)}}
Simplify[D[U[t], t] - A.U[t]]
Out= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Next, we use the resolvent method, which coincides with Sylvester's method when the eigenvalues are distinct.
First, we calculate the resolvent:
R1[\[Lambda]_] = Simplify[Inverse[L - A]]
Out= {{(-84 - 13 \[Lambda] + \[Lambda]^2)/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3), ( 4 (-49 + \[Lambda]))/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3), ( 16 (-19 + \[Lambda]))/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3)}, {( 6 (13 + 3 \[Lambda]))/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3), ( 185 + 6 \[Lambda] + \[Lambda]^2)/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3), ( 4 (71 + \[Lambda]))/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3)}, {-(( 12 (1 + \[Lambda]))/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3)), -(( 2 (17 + 7 \[Lambda]))/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3)), (-52 - 21 \[Lambda] + \[Lambda]^2)/(-36 + 49 \[Lambda] - 14 \[Lambda]^2 + \[Lambda]^3)}}
To extract the polynomial matrix in its numerator, we type:
P[lambda_] = -Simplify[R1[lambda]*CharacteristicPolynomial[A, lambda]]
Out[10]= {{-84 - 13 lambda + lambda^2, 4 (-49 + lambda), 16 (-19 + lambda)}, {6 (13 + 3 lambda), 185 + 6 lambda + lambda^2, 4 (71 + lambda)}, {-12 (1 + lambda), -34 - 14 lambda, -52 - 21 lambda + lambda^2}}
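Sylvester's auxiliary matrices are the residues of the resolvent at the eigenvalues, so they can be read off from this numerator. The following sketch completes the square-root computation along these lines and recovers root1:
Z1 = P[1]/((1 - 4)*(1 - 9));
Z4 = P[4]/((4 - 1)*(4 - 9));
Z9 = P[9]/((9 - 1)*(9 - 4));
root = 1*Z1 + 2*Z4 + 3*Z9
{{3, 4, 8}, {2, 2, -4}, {-2, -2, 1}}
Simplify[root.root - A]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}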
■
Example 1.6.2: Consider the operator positive matrix with distinct eigenvalues
\[ {\bf B} = \begin{bmatrix} -75& -45& 107 \\ 252& 154& -351\\ 48& 30& -65 \end{bmatrix} \]
First, we find its eigenvalues
B = {{-75, -45, 107}, {252, 154, -351}, {48, 30, -65}}
Eigenvalues[B]
Out[2]= {9, 4, 1}
We use the diagonalization procedure to determine its square roots.
ei=Eigenvectors[B]
Out[3]= {{-1, 9, 3}, {1, 3, 2}, {2, -1, 1}}
a1 = ei[[1]]
a2 = ei[[2]]
a3 = ei[[3]]
S = {{a1[[1]], a2[[1]], a3[[1]]}, {a1[[2]], a2[[2]], a3[[2]]}, {a1[[3]], a2[[3]], a3[[3]]}}
SI = Inverse[S]
d = {{3, 0, 0}, {0, 2, 0}, {0, 0, 1}}
One square root is
r1 = S.d.SI
Out[25]= {{-21, -13, 31}, {54, 34, -75}, {6, 4, -7}}
We check:
r1.r1
Out[27]= {{-75, -45, 107}, {252, 154, -351}, {48, 30, -65}}
Other square roots are obtained by choosing other diagonal matrices:
d = {{-3, 0, 0}, {0, 2, 0}, {0, 0, 1}}
r2 = S.d.SI
Out[27]= {{9, 5, -11}, {-216, -128, 303}, {-84, -50, 119}}
r2.r2
Out[28]= {{-75, -45, 107}, {252, 154, -351}, {48, 30, -65}}
d = {{-3, 0, 0}, {0, -2, 0}, {0, 0, 1}}
r3 = S.d.SI
Out[31]= {{57, 33, -79}, {-72, -44, 99}, {12, 6, -17}}
d = {{-3, 0, 0}, {0, 2, 0}, {0, 0, -1}}
r4 = S.d.SI
Out[33]= {{-27, -15, 37}, {-198, -118, 279}, {-102, -60, 143}}
Eigenvalues[r4]
Out[34]= {-3, 2, -1}
Alternatively, we can use Sylvester's auxiliary matrices:
Z1 = (B - 4*IdentityMatrix[3]).(B - 9*IdentityMatrix[3])/(1 - 4)/(1 - 9)
Out[3]= {{18, 10, -24}, {-9, -5, 12}, {9, 5, -12}}
Z4 = (B - 1*IdentityMatrix[3]).(B - 9*IdentityMatrix[3])/(4 - 1)/(4 - 9)
Out[4]= {{-12, -7, 17}, {-36, -21, 51}, {-24, -14, 34}}
Z9 = (B - 1*IdentityMatrix[3]).(B - 4*IdentityMatrix[3])/(9 - 1)/(9 - 4)
Out[5]= {{-5, -3, 7}, {45, 27, -63}, {15, 9, -21}}
Then one root can be calculated as
Z1 + 2*Z4 + 3*Z9
Out[6]= {{-21, -13, 31}, {54, 34, -75}, {6, 4, -7}}
We can build the following functions of the matrix:
Phi[t_]= Sin[t]*Z1 + Sin[2*t]/2*Z4 + Sin[3*t]/3*Z9
Psi[t_]= Cos[t]*Z1 +Cos[2*t]*Z4 +Cos[3*t]*Z9
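As in the 2×2 example, these matrix functions satisfy \( \ddot{\bf \Phi}(t) + {\bf B}\,{\bf \Phi}(t) = {\bf 0} \) and \( \ddot{\bf \Psi}(t) + {\bf B}\,{\bf \Psi}(t) = {\bf 0} , \) which is easy to check:
Simplify[D[Phi[t], {t, 2}] + B.Phi[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Simplify[D[Psi[t], {t, 2}] + B.Psi[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
■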
Example 1.6.3: Consider an operator positive, diagonalizable matrix with a double eigenvalue
\[ {\bf A} = \begin{bmatrix} -20& -42& -21 \\ 6& 13&6 \\ 12& 24& 13 \end{bmatrix} \]
Indeed,
A={{-20, -42, -21}, {6, 13, 6}, {12, 24, 13}}
Eigenvalues[A]
{4, 1, 1}
Eigenvectors[A]
{{-7, 2, 4}, {-1, 0, 1}, {-2, 1, 0}}
Its resolvent is
R2 = Simplify[Inverse[L - A]]
Out= {{(-25 + \[Lambda])/((-4 + \[Lambda]) (-1 + \[Lambda])), -(42/( 4 - 5 \[Lambda] + \[Lambda]^2)), -(21/( 4 - 5 \[Lambda] + \[Lambda]^2))}, {6/( 4 - 5 \[Lambda] + \[Lambda]^2), (8 + \[Lambda])/( 4 - 5 \[Lambda] + \[Lambda]^2), 6/( 4 - 5 \[Lambda] + \[Lambda]^2)}, {12/( 4 - 5 \[Lambda] + \[Lambda]^2), 24/( 4 - 5 \[Lambda] + \[Lambda]^2), (8 + \[Lambda])/( 4 - 5 \[Lambda] + \[Lambda]^2)}}
We construct the exponential matrix and show that it satisfies the matrix differential equation \( \dot{\bf \Phi} (t) = {\bf A}\,{\bf \Phi} (t) : \)
Db = {{4, 0, 0}, {0, 1, 0}, {0, 0, 1}}
Out= {{4, 0, 0}, {0, 1, 0}, {0, 0, 1}}
S2 = Transpose[Eigenvectors[A]]
Out= {{-7, -1, -2}, {2, 0, 1}, {4, 1, 0}}
expA = {{Exp[4*t], 0, 0}, {0, Exp[t], 0}, {0, 0, Exp[t]}}
Phi[t_] = S2.expA.Inverse[S2]
Simplify[D[Phi[t], t] - A.Phi[t]]
Out= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
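Although this example is set up for the exponential, a square root of A is obtained in the same way. Because the matrix is diagonalizable with eigenvalues 4 and 1, its minimal polynomial is \( (\lambda -1)(\lambda -4) , \) so only two of Sylvester's auxiliary matrices are needed; a short sketch:
Z1 = (A - 4*IdentityMatrix[3])/(1 - 4);
Z4 = (A - 1*IdentityMatrix[3])/(4 - 1);
root = 1*Z1 + 2*Z4
{{-6, -14, -7}, {2, 5, 2}, {4, 8, 5}}
root.root - A
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
■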