Polynomial interpolation is based on either the minimal polynomial or the
characteristic polynomial. It allows us to define a matrix function as a
matrix polynomial whose coefficients are evaluated according to the required function.
This method does not depend on whether the matrix is diagonalizable or defective
(similar to the resolvent method).
The idea of the polynomial interpolation approach is based on the Cayley--Hamilton theorem: any square matrix is annihilated by its characteristic polynomial. If a minimal polynomial is known, there is an advantage to using it instead of the characteristic polynomial.
Originally, polynomial interpolation and the more general spectral decomposition method were developed for symmetric or
self-adjoint matrices.
However, we extend our exposition to arbitrary square
matrices by considering their polynomial interpolation.
If the minimal polynomial or the characteristic polynomial has degree m, then any power of the matrix A with exponent m or higher can be expressed as a linear combination of the lower powers \( {\bf A}^0 = {\bf I}, \ {\bf A}^1 = {\bf A}, \ {\bf A}^2 , \ \ldots , \ {\bf A}^{m-1} . \) This allows us to express an entire function of a square matrix A as a polynomial containing only the first m powers of A.
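As a small illustration (our own sketch, with a hypothetical 2 × 2 test matrix), Mathematica can confirm that A² is already a linear combination of I and A, as guaranteed by the Cayley--Hamilton theorem:
A = {{0, 1}, {-2, 3}};
CharacteristicPolynomial[A, s]
(* 2 - 3 s + s^2 *)
A . A - (3 A - 2 IdentityMatrix[2])
(* {{0, 0}, {0, 0}}, so A^2 = 3 A - 2 I *)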
Our objective is to
define a function f(A) of a square matrix
A for a certain class of functions.
A function f(λ) of a single variable
λ is said to be admissible
for a square matrix A if the values
\[
f \left( \lambda_i \right) , \quad f' \left( \lambda_i \right) , \quad \ldots , \quad f^{(m_i -1)} \left( \lambda_i \right) , \qquad i = 1, 2, \ldots , s ,
\]
exist. Here, mi is the multiplicity of the eigenvalue
λi, and there are s distinct eigenvalues. The above
values of the function f and possibly its derivatives at the eigenvalues
are called the values of
f on the spectrum of A.
One of the main uses of matrix functions in computational mathematics
is for solving nonlinear matrix equations, such as
\( g ({\bf X} ) = {\bf A} . \) Two particular cases are especially important: finding a
square root and a logarithm by solving the equations \( {\bf X}^2 = {\bf A} \) and
\( e^{\bf X} = {\bf A} , \) respectively. It may happen that a matrix equation has a solution
that is beyond the set of admissible functions. For instance, the identity matrix has infinitely many square roots,
of which only a finite number can be constructed using admissible functions.
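As a small side illustration (our own, not part of the original exposition), every 2 × 2 reflection matrix squares to the identity, so the identity matrix already has a continuum of square roots:
R = {{Cos[theta], Sin[theta]}, {Sin[theta], -Cos[theta]}};  (* reflection through the line at angle theta/2 *)
Simplify[R . R]
(* {{1, 0}, {0, 1}} for every value of theta *)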
In most cases of practical interest, f is given by a formula, such as
\( f (\lambda ) = e^{\lambda\, t} \)
or
\( f (\lambda ) = \cos \left( t\,\sqrt{\lambda} \right) . \) However, the following definition of \( f ({\bf A} ) \) requires only the values of f on the
spectrum of A; it does not require any other information
about f. For every n × n matrix A,
there exists a set of admissible functions, for each of which we can define a
matrix \( f ({\bf A} ) . \) Such a definition of the
n-by-n matrix \( f ({\bf A} ) \) is
not unique. Previously, we defined a function of a square matrix using the
resolvent method or the
Sylvester method. Other
equivalent definitions are known and can be found in the references [1--3].
In applications to systems of ordinary differential equations, we usually need to construct
\( f ({\bf A} ) \) for analytic functions such as \( f (\lambda ) = e^{\lambda\, t} \)
or \( \displaystyle f (\lambda ) = \frac{\sin \left( \sqrt{\lambda}\, t \right)}{\sqrt{\lambda}} . \)
In our exposition, we will assume that functions possess as many
derivatives as needed; for admissible functions, the
number of derivatives required at each eigenvalue is its multiplicity minus one.
Let A be a square \( n \times n \)
matrix and let f be an analytic function in a neighborhood of each of
its eigenvalues. Upon changing the independent variable (if necessary), such an analytic function can be expanded into a convergent Maclaurin series
\begin{equation} \label{EqSpectral.1}
f(\lambda ) = \sum_{k\ge 0} c_k \lambda^k , \qquad c_k = \frac{f^{(k)} (0)}{k!} ,
\end{equation}
which formally leads to the corresponding series of matrices
\begin{equation} \label{EqSpectral.2}
f \left( {\bf A} \right) = \sum_{k\ge 0} c_k {\bf A}^k .
\end{equation}
We do not discuss the convergence issues of series \eqref{EqSpectral.2} because we will
define a function of a square matrix as a matrix polynomial; this series
serves for illustration only. Let ψ(λ) be the
minimal polynomial of degree m for the matrix
A. Then every power \( {\bf A}^p \) (p ≥ m) of the matrix
A can be expressed as a polynomial of degree not higher than
m - 1. Therefore,
\begin{equation} \label{EqSpectral.3}
f \left( {\bf A} \right) = \sum_{j=0}^{m-1} b_j {\bf A}^j = b_0 {\bf I} + b_1 {\bf A} + \cdots + b_{m-1} {\bf A}^{m-1} ,
\end{equation}
where the coefficients \( b_j , \quad j=0,1,\ldots , m-1, \)
should satisfy the following equations
\begin{equation} \label{EqSpectral.4}
f(\lambda_k ) = \sum_{j= 0}^{m-1} b_j \,\lambda_k^j , \qquad k =1,2,\ldots , s ,
\end{equation}
for each eigenvalue λk, k = 1, 2, … , s, where s is the number of
distinct eigenvalues. If the eigenvalue λk has multiplicity mk in the minimal polynomial
ψ(λ) (so that \( \psi (\lambda )/(\lambda - \lambda_k )^{m_k} \) is a polynomial), then we need to add mk - 1 auxiliary
algebraic equations
\[
f^{(i)} \left( \lambda_k \right) = \left. \frac{{\rm d}^i}{{\rm d}\lambda^i} \left( \sum_{j=0}^{m-1} b_j \lambda^j \right) \right\vert_{\lambda = \lambda_k} , \qquad i = 1, 2, \ldots , m_k -1 .
\]
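Here is a minimal sketch (ours, using a hypothetical 2 × 2 test matrix with the double eigenvalue 3, so that exactly one auxiliary derivative condition is needed) of how equations \eqref{EqSpectral.4} and the auxiliary condition can be solved in Mathematica:
A = {{3, 1}, {0, 3}};          (* minimal polynomial (λ - 3)^2, so m = 2 *)
f[s_] = Exp[s*t];
sol = Solve[{f[3] == b0 + 3 b1, f'[3] == b1}, {b0, b1}];   (* value and first-derivative conditions *)
fA = Simplify[(b0 IdentityMatrix[2] + b1 A) /. First[sol]]
Simplify[fA - MatrixExp[A t]]  (* the zero matrix: the construction agrees with MatrixExp *)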
Note:
Our exposition and formula \eqref{EqSpectral.3} are based on the Maclaurin power series expansion \eqref{EqSpectral.1} of an arbitrary admissible function. However, we used the Maclaurin series only for simplicity; generally speaking, one has to utilize a Taylor series instead. Nevertheless, formula \eqref{EqSpectral.3} is valid regardless of whether a Maclaurin or a Taylor series is used. What matters is that an admissible function be analytic at all eigenvalues of the corresponding matrix.
For example, the square root function \( r(\lambda ) = \sqrt{\lambda} \) is not holomorphic at λ = 0 because that point is its branch point. However, the square root function is holomorphic at every other point of the complex plane ℂ. Therefore, formula \eqref{EqSpectral.3} can be used to define a square root of a matrix as long as the matrix is not singular (has no zero eigenvalue).
■
It is instructive to consider the case
where an n-by-n matrix A has
rank 1, so A is the product of two
n-vectors: \( {\bf A} = {\bf u}\,{\bf v}^{\ast} . \) Both vectors, u and v, have
dimension n × 1, so v* has dimension 1
× n, and their product is an n
× n matrix. In this case of the matrix \( {\bf A} = {\bf u}\,{\bf v}^{\ast} , \)
a function of the rank-1 matrix is the sum of two terms:
\[
f \left( {\bf A} \right) = f(0)\,{\bf I} + \frac{f \left( {\bf v}^{\ast} {\bf u} \right) - f(0)}{{\bf v}^{\ast} {\bf u}} \, {\bf A} , \qquad {\bf v}^{\ast} {\bf u} \ne 0 .
\]
Since u and v are assumed to be n × 1 column
vectors, their product \( {\bf v}^{\ast} {\bf u} \)
is a number. When an admissible function f and its derivative are
defined at the origin and \( {\bf v}^{\ast} {\bf u} =0, \)
the function of the matrix can be written as
\[
f \left( {\bf A} \right) = f(0)\,{\bf I} + f' (0)\,{\bf A} ,
\]
where, by definition, \( {\bf I} = {\bf A}^0 \) is the identity
matrix.
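A quick check of the latter case (our own test vectors, chosen so that \( {\bf v}^{\ast} {\bf u} = 0 \)) with the exponential function, for which f(0) = f'(0) = 1:
u = {{1}, {0}}; v = {{0}, {1}};         (* column vectors with v*.u = 0 *)
A = u . ConjugateTranspose[v]           (* {{0, 1}, {0, 0}} *)
MatrixExp[A] == IdentityMatrix[2] + A   (* True: e^A = f(0) I + f'(0) A *)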
Example 1:
Consider two vectors: \( {\bf u} = \langle 1,1,1 \rangle \)
and \( {\bf v} = \langle 1,0,3 \rangle . \) Then
their products \( {\bf u} {\bf v}^{\ast} \)
and \( {\bf v}^{\ast} {\bf u} \)
define the matrix and the number
\[
{\bf A} = {\bf u}\,{\bf v}^{\ast} = \begin{bmatrix} 1&0&3 \\ 1&0&3 \\ 1&0&3 \end{bmatrix} , \qquad {\bf v}^{\ast} {\bf u} = 4 ,
\]
respectively. This matrix A has one simple eigenvalue
λ = 4 and one double eigenvalue λ = 0.
Mathematica confirms:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}}
ss = Eigenvalues[A]
{4, 0, 0}
Eigenvectors[A]
{{1, 1, 1}, {-3, 0, 1}, {0, 1, 0}}
Mathematica has a dedicated command to determine a characteristic
polynomial for a square matrix:
CharacteristicPolynomial[A, x]
4 x^2 - x^3
Since matrix A is diagonalizable but has a repeated eigenvalue, it is a derogatory matrix
and its minimal polynomial is of second degree:
\( \psi (\lambda ) = \lambda \left( \lambda -4 \right) . \)
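Indeed, a direct check (ours) shows that ψ annihilates A:
A . (A - 4 IdentityMatrix[3])
(* the zero matrix, confirming ψ(λ) = λ(λ - 4) *)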
We build two matrix functions
that correspond to single-valued functions \( \Phi (\lambda ) =
\frac{\sin \left( t \, \sqrt{\lambda} \right)}{\sqrt{\lambda}} \quad\mbox{and} \quad
\Psi (\lambda ) = \cos \left( t \, \sqrt{\lambda} \right) , \) respectively. Using the formula for
rank-1 matrices, we get
\[
{\bf \Phi} (t) = t\,{\bf I} + \frac{1}{4} \left( \frac{\sin 2t}{2} - t \right) {\bf A} , \qquad
{\bf \Psi} (t) = {\bf I} + \frac{\cos 2t - 1}{4}\, {\bf A} .
\]
To verify that we specified the matrix functions correctly, we show that they are solutions of the following initial value problems
(because these matrix problems have unique solutions):
\[
\ddot{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0} , \quad {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} ; \qquad
\ddot{\bf \Psi} (t) + {\bf A}\,{\bf \Psi} (t) = {\bf 0} , \quad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} .
\]
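Below is a minimal Mathematica sketch (ours) of this verification; the names Phi and Psi are our own, and the explicit formulas are taken from the rank-1 representation above:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}};
Phi[t_] = t IdentityMatrix[3] + (Sin[2 t]/2 - t)/4 A;
Psi[t_] = IdentityMatrix[3] + (Cos[2 t] - 1)/4 A;
Simplify[{D[Phi[t], {t, 2}] + A . Phi[t], Phi[0], D[Phi[t], t] /. t -> 0}]
(* {zero matrix, zero matrix, identity matrix} *)
Simplify[{D[Psi[t], {t, 2}] + A . Psi[t], Psi[0], D[Psi[t], t] /. t -> 0}]
(* {zero matrix, identity matrix, zero matrix} *)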
This verification confirms that the matrix functions constructed by polynomial interpolation are correct.
■
Example 2:
Consider the \( 3 \times 3 \) matrix
\( {\bf A} = \begin{bmatrix} \phantom{-2}1 & \phantom{-1}4&16 \\ \phantom{-}18 & \phantom{-}20 & \phantom{-}4 \\ -12&-14&-7 \end{bmatrix} \)
that has three distinct eigenvalues; Mathematica confirms
A = {{1,4,16},{18,20,4},{-12,-14,-7}}
Eigenvalues[A]
Out[2]= {9, 4, 1}
Then we try to solve what seems to be an impossible problem: finding square roots of the given matrix. Upon introducing the root function
\( r(\lambda ) = \sqrt{\lambda} = \lambda^{1/2} , \)
we see that this function has no Maclaurin expansion. Nevertheless, we try to apply the polynomial interpolation approach. So we represent this function of the matrix as
\[
\sqrt{\bf A} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 , \qquad
b_0 + 9\, b_1 + 81\, b_2 = \pm 3 , \quad b_0 + 4\, b_1 + 16\, b_2 = \pm 2 , \quad b_0 + b_1 + b_2 = \pm 1 .
\]
By choosing one of the two possible values of the square root at each eigenvalue (the square root assigns two outputs to every nonzero input), we can construct eight possible matrix square roots. We do not know whether other matrix square roots exist, but at least we can find these eight. We delegate the problem of solving the system of algebraic equations to Mathematica in the hope that it can assist us:
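Here is a minimal sketch (ours) for the branch choice \( \sqrt{9} = 3, \ \sqrt{4} = 2, \ \sqrt{1} = 1 ; \) the remaining seven roots follow from the other sign patterns:
A = {{1, 4, 16}, {18, 20, 4}, {-12, -14, -7}};
sol = Solve[{b0 + 9 b1 + 81 b2 == 3, b0 + 4 b1 + 16 b2 == 2, b0 + b1 + b2 == 1}, {b0, b1, b2}];
R = (b0 IdentityMatrix[3] + b1 A + b2 A . A) /. First[sol];
Simplify[R . R - A]   (* the zero matrix, so R is indeed a square root of A *)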
We verify that the minimal polynomial is a product of simple terms.
(A - IdentityMatrix[3]).(A - 4*IdentityMatrix[3])
Out[2]=
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
To build matrix-functions using polynomial interpolation, we have two options: either to use the characteristic polynomial χ(λ) = det(λI - A) = (λ-1)²(λ-4) or to use the minimal polynomial ψ(λ) = (λ-1)(λ-4).
Of course, the latter is the easier way, but we show both options.
Let us start with the characteristic polynomial and define the exponential function accordingly:
Note that the terms containing the factor \( t\,e^{t} \) cancel out. Now we use the minimal polynomial and represent the same exponential function as the sum of two terms:
We show all details for the former function and only state the outputs for the latter. First, we use the characteristic polynomial and represent the corresponding matrix function as
Although matrix A has several square roots, we outline the application of polynomial interpolation to determine one of them. Using the minimal polynomial, we seek square roots of matrix A in the form
\[
\sqrt{\bf A} = d_0 {\bf I} + d_1 {\bf A} ,
\]
where the coefficients are determined from the system of equations:
Therefore, A is a derogatory diagonalizable matrix because
it has 4 linearly independent eigenvectors. In this case, its minimal
polynomial is a product of simple terms. For the exponential function
\( f (\lambda ) = e^{\lambda\, t} \)
we construct the corresponding matrix function \( f ({\bf A} ) = e^{{\bf A}\, t} \) as a
polynomial of degree 2 (because its minimal polynomial ψ(λ) =
λ(λ - 2)(λ + 1) is of degree 3):
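The coefficient system involves only the roots 0, 2, and −1 of the minimal polynomial; here is a sketch (ours; the 4 × 4 matrix itself is not repeated in this fragment) of solving it with Mathematica:
f[s_] = Exp[s*t];
sol = Solve[{f[0] == b0, f[2] == b0 + 2 b1 + 4 b2, f[-1] == b0 - b1 + b2}, {b0, b1, b2}];
Simplify[{b0, b1, b2} /. First[sol]]
(* then e^{A t} = b0 I + b1 A + b2 A.A, which can be compared with MatrixExp[A t] *)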
However, this is not an actual verification because we compare our constructed
exponential matrix with another matrix provided by Mathematica. Of
course, we trust Mathematica, but we need a real verification. It is
known that the matrix exponential \( {\bf U}(t) = e^{{\bf A}\, t} \) is the unique solution
of the following initial value problem for a matrix differential equation:
\[
\frac{{\rm d}}{{\rm d}t}\, {\bf U} (t) = {\bf A}\,{\bf U} (t) , \qquad {\bf U} (0) = {\bf I} .
\]
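Such a verification is easy to carry out in Mathematica; a sketch with a hypothetical 2 × 2 test matrix of our own:
A = {{0, 1}, {-2, -3}};
U[t_] = MatrixExp[A t];
Simplify[{D[U[t], t] - A . U[t], U[0]}]
(* {the zero matrix, the identity matrix} *)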
We do not need to define the second complex eigenvalue because it is the complex
conjugate of the first one. Since the minimal polynomial for matrix A
has degree 3, we seek the exponential matrix function in the form
\[
e^{{\bf A}\,t} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 .
\]
Since the characteristic polynomial coincides with the minimal polynomial, the given matrix is nonderogatory.
We are going to determine two matrix functions \( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \)
and \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \) corresponding to
functions \( \displaystyle \Phi \left( \lambda \right) = \frac{\sin \left( t\,\sqrt{\lambda} \right)}{\sqrt{\lambda}} \)
and \( \displaystyle \Psi \left( \lambda \right) = \cos \left( t\,\sqrt{\lambda} \right) , \)
respectively.
Let us start with \( \displaystyle {\bf \Phi} \left( {\bf A} \right) , \) which does not depend
on which branch of the square root is chosen in \( \displaystyle \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} . \)
We seek this matrix function in the form
\[
{\bf \Phi} (t) = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 ,
\]
with coefficients depending on t.
Note that matrices A and \( \displaystyle {\bf \Phi} (t) \) commute.
To verify that we have obtained the correct matrix function, we have to show that
\( \displaystyle {\bf \Phi} (t) \)
is a solution to the initial value problem
\[
\ddot{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0} , \qquad {\bf \Phi} (0) = {\bf 0} , \qquad \dot{\bf \Phi} (0) = {\bf I} .
\]
Now we build another matrix function \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \)
using exactly the same steps. So we seek it in the form
\[
{\bf \Psi} (t) = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 , \qquad
\Psi (4) = b_0 + 4\, b_1 + 16\, b_2 , \quad \Psi' (4) = b_1 + 8\, b_2 , \quad \Psi'' (4) = 2\, b_2 .
\]
Here \( \displaystyle \Psi \left( \lambda \right) = \cos \left( t\,\sqrt{\lambda} \right) . \)
We solve the system of algebraic equations using Mathematica:
psi[s_] = Cos[t*Sqrt[s]]
D[psi[s], s] /. s -> 4
-(1/4) t Sin[2 t]
D[psi[s], s, s] /. s -> 4
-(1/16) t^2 Cos[2 t] + 1/32 t Sin[2 t]
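Continuing this computation (our sketch), the three conditions at the triple eigenvalue λ = 4 can be handed to Solve:
psi[s_] = Cos[t*Sqrt[s]];
sol = Solve[{psi[4] == b0 + 4 b1 + 16 b2, psi'[4] == b1 + 8 b2, psi''[4] == 2 b2}, {b0, b1, b2}];
Simplify[{b0, b1, b2} /. First[sol]]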
This allows us to determine the values of the coefficients b0,
b1, and b2 in the formula above for \( {\bf \Psi} (t) . \)
Each of the constructed matrix functions, \(
\displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \)
and \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) , \)
is unique, independently of the method used, because each is the
solution of the corresponding initial value problem.
On the other hand, we cannot guarantee that the square roots of A that we are going to find are the only ones: there could be other roots.
To determine a square root of the given matrix, we consider the corresponding
function
\( r(\lambda ) = \lambda^{1/2} \equiv \sqrt{\lambda} , \) where we have to choose a
particular branch because the square root is analytic away from the origin but not
single-valued: it assigns two outputs to every nonzero input.
For example,
\( 4^{1/2} = \pm 2 . \) Since the
characteristic polynomial of matrix A is of degree 3, we assume that
\( r({\bf A} ) \) is represented as
\[
r \left( {\bf A} \right) = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 .
\]
The matrix has two eigenvalues \( \lambda_1 =1 \)
and \( \lambda_2 =4 . \)
The former has algebraic multiplicity 2, and its geometric multiplicity is also 2 because
there are two linearly independent eigenvectors:
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}}
Eigenvalues[A]
{4, 1, 1}
Eigenvectors[A]
{{1, 1, 1}, {-1, 0, 1}, {-1, 1, 0}}
So the minimal polynomial is of second degree: \( \psi
\left( \lambda \right) = \left( \lambda -1 \right) \left( \lambda -4 \right) ,
\) and the given matrix is derogatory. Therefore, we can seek any function
of matrix A as a polynomial of first degree:
\( f({\bf A}) = b_0 {\bf I} + b_1 {\bf A} . \) In
particular, if we consider the square root
\( f(\lambda ) = \sqrt{\lambda} , \) we get the
linear system of two algebraic equations
\[
b_0 + b_1 = \pm 1 , \qquad b_0 + 4\, b_1 = \pm 2 .
\]
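Here is a sketch (ours) of two of these roots, taking the sign patterns (+1, +2) and (−1, +2); the labels R1 and R2 are our own choice and may differ from the original assignment:
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}};
sol1 = Solve[{b0 + b1 == 1, b0 + 4 b1 == 2}, {b0, b1}];
R1 = (b0 IdentityMatrix[3] + b1 A) /. First[sol1]
sol2 = Solve[{b0 + b1 == -1, b0 + 4 b1 == 2}, {b0, b1}];
R2 = (b0 IdentityMatrix[3] + b1 A) /. First[sol2]
Simplify[{R1 . R1 - A, R2 . R2 - A}]   (* both differences are the zero matrix *)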
Two other roots are just negative of these two: \( {\bf R}_3 = - {\bf R}_1 , \quad {\bf R}_4 = - {\bf R}_2 . \)
To verify this statement, we ask Mathematica
R2.R2
{{2, 1, 1}, {1, 2, 1}, {1, 1, 2}}
We obtain a similar answer for each matrix root. Note that matrix A has other square roots that cannot
be obtained with our method.
1. Higham, Nicholas J., Functions of Matrices: Theory and Computation, Cambridge University Press, Cambridge, 2008.
2. Leonard, I.E., The Matrix Exponential, SIAM Review, Vol. 38, No. 3, 507--512, 1996.
3. Moler, Cleve, and Van Loan, Charles, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review, Vol. 45, No. 1, 3--49, 2003.