This section presents a remarkable formula, discovered by James Sylvester in 1883, for defining a function of a square matrix. The beauty of the formula lies in its generalization of the Lagrange interpolation polynomials to matrices, now called Sylvester's auxiliary matrices. The Sylvester formula provides an efficient way to represent a function of a square diagonalizable matrix as a linear combination of auxiliary matrices, and these auxiliary matrices turn out to be projection matrices onto the eigenspaces of the given matrix. Although the Sylvester formula applies only to diagonalizable matrices, an extension due to Arthur Buchheim (1886) covers the general case, though at the cost of the formula's beauty and simplicity.
To understand the Sylvester formula, we need to introduce some important definitions
and statements, including the famous Cayley--Hamilton theorem.
Since our objective is to define a function of a square matrix, it makes sense to start with the simplest class of functions---a set of polynomials.
We single out polynomials because to each such function \( q \left( \lambda \right) = q_0 + q_1 \lambda + q_2 \lambda^2 + \cdots + q_n \lambda^n \) we can naturally assign the matrix \( q \left( {\bf A} \right) = q_0 {\bf I} + q_1 {\bf A} + q_2 {\bf A}^2 + \cdots + q_n {\bf A}^n , \) since multiplication and addition/subtraction of square matrices are well defined.
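For instance, the following sketch evaluates the polynomial \( q (\lambda ) = 2 - 3\lambda + \lambda^2 \) at a hypothetical 2×2 test matrix (any square matrix would do):
A = {{1, 2}, {2, 1}};  (* hypothetical test matrix *)
q[M_?MatrixQ] := 2 IdentityMatrix[Length[M]] - 3 M + MatrixPower[M, 2]
q[A]
{{4, -2}, {-2, 4}}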
A scalar polynomial \( q \left( \lambda \right) \) is called an annulled polynomial
(or annihilating polynomial) of the square matrix A, if \( q \left( {\bf A} \right) = {\bf 0} , \)
with the understanding that \( {\bf A}^0 = {\bf I} , \) the identity matrix, replaces
\( \lambda^0 =1 \) when substituting
A for λ.
The minimal polynomial of a square matrix A is the unique monic polynomial ψ of lowest degree such that
\( \psi \left( {\bf A} \right) = {\bf 0} . \)
Every square matrix has a minimal polynomial.
A square matrix A for which the characteristic polynomial \( \chi (\lambda ) = \det \left( \lambda {\bf I} - {\bf A} \right) \) and the minimal polynomial coincide is called a nonderogatory matrix (here I is the identity matrix). A derogatory matrix is one whose characteristic and minimal polynomials differ. An n×n matrix with n distinct eigenvalues is nonderogatory. However, if a diagonalizable matrix A has repeated eigenvalues (multiple roots of the algebraic equation \( \det \left( \lambda {\bf I} - {\bf A} \right) = 0 \) in λ), it is derogatory.
Theorem: (Cayley--Hamilton)
Every square matrix A is annulled by its
characteristic polynomial, that is, \( \chi ({\bf A}) = {\bf 0} , \) where
\( \chi (\lambda ) = \det \left( \lambda {\bf I} - {\bf A} \right) . \) ▣
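The theorem is easy to check numerically; here is a minimal sketch for a hypothetical 2×2 matrix. Note that Mathematica's CharacteristicPolynomial[A, x] returns \( \det \left( {\bf A} - x\, {\bf I} \right) , \) which differs from χ only by the factor \( (-1)^n , \) so annihilation is unaffected.
A = {{1, 2}, {2, 1}};  (* hypothetical test matrix *)
c = CoefficientList[CharacteristicPolynomial[A, x], x];
Sum[c[[k + 1]] MatrixPower[A, k], {k, 0, Length[c] - 1}]
{{0, 0}, {0, 0}}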
Arthur Cayley
(1821--1895) was a British mathematician, and a founder
of the modern British school of pure mathematics. As a child, Cayley enjoyed
solving complex math problems for amusement. He entered Trinity College,
Cambridge, where he excelled in Greek, French, German, and Italian, as well as
mathematics. He worked as a lawyer for 14 years. During this period of his
life, Cayley produced between two and three hundred mathematical papers. Many of his
publications were results of collaboration with his friend J. J. Sylvester.
Sir William Rowan Hamilton (1805--1865) was an Irish physicist, astronomer,
and mathematician, who made important contributions to classical mechanics,
optics, and algebra. He was the first foreign member of the American National
Academy of Sciences. Hamilton had a remarkable ability to learn languages,
including modern European languages, Hebrew, Persian, Arabic, Hindustani,
Sanskrit, and even Marathi and Malay. Hamilton was part of a small but
well-regarded school of mathematicians associated with Trinity College in
Dublin, which he entered at age 18. Strangely, the credit for discovering the quantity now called the Lagrangian, as well as Lagrange's equations, belongs to Hamilton. In 1835, while serving as secretary to the meeting of the British Science Association held that year in Dublin, he was knighted by the lord-lieutenant.
James Joseph Sylvester (1814--1897) was an English mathematician. He made fundamental
contributions to matrix theory, invariant theory, number theory, partition
theory, and combinatorics. Sylvester was born James Joseph in London, England; he adopted the surname "Sylvester" after his older brother took that name upon emigrating to the United States.
At the age of 14, Sylvester was a student of
Augustus De Morgan at the University of London. His family withdrew him from
the University after he was accused of stabbing a fellow student with a knife.
Subsequently, he attended the Liverpool Royal Institution.
However, Sylvester was not issued a degree, because graduates at that time
were required to state their acceptance of the Thirty-Nine Articles of the
Church of England, and Sylvester could not do so because he was Jewish. The
same reason was given in 1843 for his being denied appointment as Professor of
Mathematics at Columbia College (now University) in New York City. For the same reason, he was also unable to compete for a Fellowship or obtain a Smith's prize. In 1838
Sylvester became professor of natural philosophy at University College London and in 1839 a Fellow of the Royal
Society of London. In 1841, he was awarded a BA and an MA by Trinity College, Dublin. In the same year he moved to
the United States to become a professor of mathematics at the University of Virginia, but left after less than four
months following a violent encounter with two students he had disciplined. He moved to New York City and began
friendships with the Harvard mathematician Benjamin Peirce and the Princeton physicist Joseph Henry, but in
November 1843, after his rejection by Columbia, he returned to England.
For a long period, 1843--1855, James Sylvester earned his living by developing actuarial models, practicing law, and working as a music tutor.
One of Sylvester's lifelong passions was for poetry; he read and translated works from the original French, German,
Italian, Latin and Greek, and many of his mathematical papers contain illustrative quotes from classical poetry. In
1876, Sylvester again crossed the Atlantic Ocean to become the inaugural professor of mathematics at the new
Johns Hopkins University in Baltimore, Maryland. His salary was $5,000 (quite generous for the time), which he
demanded be paid in gold. After negotiation, an agreement was reached on a salary that was not paid in gold. In
1878, he founded the American Journal of Mathematics. In 1883, he returned to England to take up the Savilian Professorship of Geometry at Oxford University, a chair he held until his death. Sylvester invented a great number of mathematical terms, such as "matrix" (in 1850), "graph" (in combinatorics), and "discriminant." He coined the term
"totient" for Euler's totient function.
Privately, Sylvester suffered from depression for almost three years, during which he completed no mathematical research. Then Pafnuty Chebyshev visited London, and the two discussed mechanical linkages that can be used to draw straight lines. After working on this topic, Sylvester gave an evening lecture titled "On recent discoveries in mechanical conversion of motion" at the Royal Institution. One mathematician in the audience, Alfred Kempe, became absorbed by the topic, and Kempe and Sylvester subsequently worked jointly on linkages and made important discoveries.
✶
Theorem 2:
For polynomials p and q and a square matrix A, \( p \left( {\bf A} \right) = q \left( {\bf A} \right) \) if and only if p and q take the same values on the spectrum (the set of all eigenvalues) of A. Here, for an eigenvalue whose multiplicity in the minimal polynomial is m, the values on the spectrum include the derivatives up to order m − 1 at that eigenvalue.
Theorem 3:
The minimal polynomial ψ of a square matrix A divides every annihilating polynomial p of A, that is, every polynomial with \( p \left( {\bf A} \right) = {\bf 0} . \)
In particular, the minimal polynomial divides the characteristic polynomial
of A. ▣
From Theorem 3, it follows that the minimal polynomial is a product of monomials \( \lambda - \lambda_i \) because the characteristic polynomial contains all such factors. Moreover, every eigenvalue must appear in this product: if \( {\bf A}\,{\bf v} = \lambda_i {\bf v} \) for some \( {\bf v} \ne {\bf 0} , \) then \( {\bf 0} = \psi \left( {\bf A} \right) {\bf v} = \psi (\lambda_i )\,{\bf v} , \) so \( \psi (\lambda_i ) = 0 . \)
Theorem 4:
The minimal polynomial for a
linear operator on a finite-dimensional vector space is unique.
Theorem 5:
A square n×n matrix A is diagonalizable if and only if its
minimal polynomial is a product of simple terms:
\( \psi (\lambda ) = \left( \lambda - \lambda_1 \right)
\left( \lambda - \lambda_2 \right) \cdots \left( \lambda - \lambda_r
\right) , \) where λ1, … ,
λr are the distinct eigenvalues of matrix A (r≤n).
Therefore, a square matrix A fails to be diagonalizable if and only if its minimal polynomial contains at least one repeated factor \( \left( \lambda - \lambda_i \right)^m \) with m > 1.
Theorem 6:
Every matrix that commutes with a
square matrix A is a polynomial in A if and
only if A is nonderogatory.
Theorem 7:
If A is upper
triangular and nonderogatory, then any solution X of the
equation \( f \left( {\bf X} \right) = {\bf A} \)
is upper triangular. ▣
In Mathematica, one can define the minimal polynomial using the following code (a sketch along the lines of the standard implementation from the Wolfram Language documentation):
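MatrixMinimalPolynomial[a_List?MatrixQ, x_] := Module[
  {i, n = 1, qu = {}, mnm = {Flatten[IdentityMatrix[Length[a]]]}},
  (* stack flattened powers of the matrix until they become linearly dependent *)
  While[Length[qu] == 0,
    AppendTo[mnm, Flatten[MatrixPower[a, n]]];
    qu = NullSpace[Transpose[mnm]];
    n++];
  (* the null-space vector holds the coefficients of the minimal polynomial *)
  First[qu] . Table[x^i, {i, 0, n - 1}]]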
Then, to find the minimal polynomial of a previously defined square matrix A, just enter
MatrixMinimalPolynomial[A, x]
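For example, with a hypothetical symmetric matrix and a hypothetical Jordan block:
A = {{1, 2}, {2, 1}};  (* hypothetical test matrix *)
MatrixMinimalPolynomial[A, x]  (* x^2 - 2 x - 3, up to normalization *)
MatrixMinimalPolynomial[{{2, 1}, {0, 2}}, x]  (* 4 - 4 x + x^2 = (x - 2)^2, so this matrix is not diagonalizable by Theorem 5 *)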
Suppose that A is a diagonalizable n×n matrix; this means that none of its eigenvalues is defective and there exists a basis of n linearly independent eigenvectors. Then its minimal polynomial (the monic polynomial of least degree that annihilates the matrix) is a product of simple terms:
\[ \psi (\lambda ) = \left( \lambda - \lambda_1 \right) \left( \lambda - \lambda_2 \right) \cdots \left( \lambda - \lambda_k \right) , \]
where \( \lambda_1 , \lambda_2 , \ldots , \lambda_k \) are the distinct eigenvalues of matrix A ( \( k \le n \) ).
Let \( f(\lambda ) \) be a function defined on the spectrum of the matrix A. We assume that every eigenvalue \( \lambda_i \) is in the domain of f, and that every eigenvalue \( \lambda_i \) of multiplicity \( m_i > 1 \) lies in the interior of the domain, with f being \( m_i - 1 \) times differentiable at \( \lambda_i . \) We build a function \( f\left( {\bf A} \right) \) of the diagonalizable square matrix A according to James Sylvester, who was an English lawyer and music tutor before his appointment as a professor of mathematics in 1855. To define a function of a square matrix, we need to construct k Sylvester auxiliary matrices, one for each distinct eigenvalue \( \lambda_i , \quad i= 1,2,\ldots , k: \)
\begin{equation} \label{EqSylvester.2}
{\bf Z}_i = \prod_{j \ne i} \frac{{\bf A} - \lambda_j {\bf I}}{\lambda_i - \lambda_j} , \qquad i = 1,2,\ldots , k.
\end{equation}
Sylvester's auxiliary matrix \( {\bf Z}_i \) is just the corresponding Lagrange interpolating polynomial evaluated at A. The denominator in Eq.\eqref{EqSylvester.2} is the derivative of the minimal polynomial evaluated at \( \lambda_i : \)
\[ \psi' (\lambda_i ) = \prod_{j \ne i} \left( \lambda_i - \lambda_j \right) . \]
Sylvester's formula then represents an admissible function of the diagonalizable matrix A as
\begin{equation} \label{EqSylvester.3}
f \left( {\bf A} \right) = \sum_{i=1}^k f \left( \lambda_i \right) {\bf Z}_i .
\end{equation}
Theorem 8:
Each Sylvester's matrix \( {\bf Z}_i \) is a projection matrix onto the eigenspace of the corresponding eigenvalue, so \( {\bf Z}_i^2 = {\bf Z}_i , \ i = 1,2,\ldots , k. \) Moreover, these auxiliary matrices are mutually orthogonal, so \( {\bf Z}_i {\bf Z}_j = {\bf 0} \) for i ≠ j. Each Sylvester's matrix \( {\bf Z}_i \) is an eigenmatrix corresponding to the eigenvalue \( \lambda_i : \)
\[ {\bf A}\,{\bf Z}_i = \lambda_i {\bf Z}_i . \] ▣
Example: Consider a 2×2 matrix A with the distinct eigenvalues \( \lambda_1 = 3 \) and \( \lambda_2 = -1 \) (we write eigenvectors as columns, so the superscript "T" denotes transposition). Since the minimal polynomial \( \psi (\lambda ) = (\lambda -3)(\lambda +1) \) is a product of simple factors, we build Sylvester's auxiliary matrices:
Zm1 = (A - 3*IdentityMatrix[2])/(-4)
Z3 = (A + IdentityMatrix[2])/4
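For instance, taking the hypothetical symmetric matrix A = {{1, 2}, {2, 1}}, whose eigenvalues are 3 and −1, we can verify that the auxiliary matrices sum to the identity and reproduce A:
A = {{1, 2}, {2, 1}};  (* hypothetical matrix with eigenvalues 3 and -1 *)
Zm1 = (A - 3*IdentityMatrix[2])/(-4);
Z3 = (A + IdentityMatrix[2])/4;
Zm1 + Z3 == IdentityMatrix[2]
True
3*Z3 - Zm1 == A
True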
Illustrating Eq.\eqref{EqSylvester.3} with functions f(λ) defined on the spectrum of matrix A, we consider \( U(\lambda ) = e^{\lambda \,t} \) and \( V (\lambda ) = \cos \left( \lambda \,t\right) , \) which are functions of the variable λ depending on a real parameter t that is usually associated with time.
Indeed, Sylvester's formula yields
\[ U \left( {\bf A} \right) = e^{3t}\, {\bf Z}_3 + e^{-t}\, {\bf Z}_{-1} , \qquad V \left( {\bf A} \right) = \cos (3t)\, {\bf Z}_3 + \cos (t)\, {\bf Z}_{-1} . \]
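The exponential can be checked against the built-in MatrixExp (continuing with the hypothetical matrix above):
expAt = Exp[3 t] Z3 + Exp[-t] Zm1;
Simplify[expAt - MatrixExp[A t]]
{{0, 0}, {0, 0}}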
Now we demonstrate that auxiliary matrices are mutually orthogonal by multiplying them pairwise and getting zero products. The checks below use the auxiliary matrices Z1, Z4, and Z9 of a 3×3 matrix A with distinct eigenvalues 1, 4, and 9; one such matrix, together with its auxiliary matrices, is built in the following sketch.
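(A minimal sketch assuming a hypothetical eigenvector matrix P; this redefines A as a 3×3 diagonalizable matrix with eigenvalues 1, 4, and 9.)
P = {{1, 1, 0}, {0, 1, 1}, {1, 0, 1}};  (* hypothetical invertible eigenvector matrix *)
A = P . DiagonalMatrix[{1, 4, 9}] . Inverse[P];  (* diagonalizable, eigenvalues 1, 4, 9 *)
Z1 = (A - 4 IdentityMatrix[3]) . (A - 9 IdentityMatrix[3])/((1 - 4) (1 - 9));
Z4 = (A - 1 IdentityMatrix[3]) . (A - 9 IdentityMatrix[3])/((4 - 1) (4 - 9));
Z9 = (A - 1 IdentityMatrix[3]) . (A - 4 IdentityMatrix[3])/((9 - 1) (9 - 4));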
Z1.Z4
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Z1.Z1 - Z1
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Z1.Z9
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Z4.Z4 - Z4
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Z9.Z4
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Z9.Z9 - Z9
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
With Sylvester's matrices, we are able to construct arbitrary admissible functions of the given matrix. We start with the square root function, which actually comprises two branches, \( r(\lambda ) = \sqrt{\lambda} \) and \( r(\lambda ) = -\sqrt{\lambda} . \) Choosing a particular branch for each of the three eigenvalues gives \( 2^3 = 8 \) distinct square roots, which come in four ± pairs: each choice of branches yields a root, and the remaining four roots are the negatives of these matrices.
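For instance, taking the positive branch at every eigenvalue ( \( \sqrt{1} = 1 , \ \sqrt{4} = 2 , \ \sqrt{9} = 3 \) ), Sylvester's formula gives one square root of the hypothetical 3×3 matrix built above:
R = 1 Z1 + 2 Z4 + 3 Z9;  (* f(λ) = +√λ at each eigenvalue *)
Simplify[R . R - A]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}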
We consider two other functions, \( \displaystyle \Phi (\lambda ) = \frac{\sin \left( t\,\sqrt{\lambda} \right)}{\sqrt{\lambda}} \) and \( \displaystyle \Psi (\lambda ) = \cos \left( t\,\sqrt{\lambda} \right) ; \) both are entire functions of λ, independent of the choice of branch of the square root. It is not a problem to determine the corresponding matrix functions:
\[ {\bf \Phi} (t) = \sum_{i=1}^k \frac{\sin \left( t\,\sqrt{\lambda_i} \right)}{\sqrt{\lambda_i}}\, {\bf Z}_i , \qquad {\bf \Psi} (t) = \sum_{i=1}^k \cos \left( t\,\sqrt{\lambda_i} \right) {\bf Z}_i . \]
These two matrix functions are solutions of the matrix differential equation \( \displaystyle \ddot{\bf P} (t) + {\bf A}\,{\bf P} (t) = {\bf 0} \) subject to the initial conditions
\[ {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} \qquad\mbox{and}\qquad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} , \]
respectively. It is also worth verifying that the minimal polynomial is a product of simple terms; recall that the denominator of the resolvent \( \left( \lambda {\bf I} - {\bf A} \right)^{-1} \) (upon reducing all common multiples) is always the minimal polynomial.
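A quick check of the differential equation and the initial conditions, continuing the hypothetical 3×3 example with eigenvalues 1, 4, and 9:
Phi = Sin[t] Z1 + (Sin[2 t]/2) Z4 + (Sin[3 t]/3) Z9;
Psi = Cos[t] Z1 + Cos[2 t] Z4 + Cos[3 t] Z9;
Simplify[D[Phi, {t, 2}] + A . Phi]  (* zero matrix *)
Simplify[D[Psi, {t, 2}] + A . Psi]  (* zero matrix *)
{Phi /. t -> 0, D[Phi, t] /. t -> 0}  (* zero matrix and identity matrix *)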
When a matrix has a pair of complex conjugate eigenvalues, say \( 1 \pm 2{\bf j} , \) the two complex auxiliary matrices are also projections: \( {\bf Z}_{1+2{\bf j}}^2 = {\bf Z}_{1+2{\bf j}}^3 = {\bf Z}_{1+2{\bf j}} , \) and similar relations hold for its complex conjugate \( {\bf Z}_{1-2{\bf j}} . \)
Now suppose you want to build a matrix function corresponding to the exponential function \( U(\lambda ) = e^{\lambda \,t} \) of variable λ depending on a real parameter t. For a 2×2 matrix A with the eigenvalues \( 1 \pm 2{\bf j} , \) Sylvester's auxiliary matrices give
\[ e^{{\bf A}\,t} = e^{(1 + 2{\bf j})\,t}\, {\bf Z}_{1+2{\bf j}} + e^{(1 - 2{\bf j})\,t}\, {\bf Z}_{1-2{\bf j}} . \]
Using Euler's formula, \( e^{{\bf j}\theta} = \cos \theta + {\bf j}\,\sin \theta , \) we expand each complex exponential; since the two terms are complex conjugates of each other, their sum is the real matrix
\[ e^{{\bf A}\,t} = 2\, e^{t}\, \Re \left[ \left( \cos 2t + {\bf j}\,\sin 2t \right) {\bf Z}_{1+2{\bf j}} \right] . \]
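A minimal sketch with a hypothetical matrix having these eigenvalues:
A = {{1, 2}, {-2, 1}};  (* hypothetical matrix with eigenvalues 1 + 2 I and 1 - 2 I *)
Zp = (A - (1 - 2 I) IdentityMatrix[2])/((1 + 2 I) - (1 - 2 I));
Zm = (A - (1 + 2 I) IdentityMatrix[2])/((1 - 2 I) - (1 + 2 I));
{Simplify[Zp . Zp - Zp], Simplify[Zp . Zm]}  (* projection and orthogonality checks *)
expAt = Exp[(1 + 2 I) t] Zp + Exp[(1 - 2 I) t] Zm;
Simplify[ExpToTrig[expAt] - MatrixExp[A t]]
{{0, 0}, {0, 0}}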