Originally, spectral decomposition was developed for symmetric or self-adjoint matrices. Following tradition, we present this method for symmetric/self-adjoint matrices and later extend it to arbitrary matrices.
A square matrix A is said to be unitarily diagonalizable if there is a unitary matrix U such that
\( {\bf U}^{\ast} {\bf A}\,{\bf U} = {\bf \Lambda} , \) where Λ is a diagonal matrix
and \( {\bf U}^{\ast} = {\bf U}^{-1} . \)
A matrix A is said to be orthogonally diagonalizable if there is an orthogonal matrix P such that
\( {\bf P}^{\mathrm T} {\bf A}\,{\bf P} = {\bf \Lambda} , \) where Λ is a diagonal matrix and
\( {\bf P}^{\mathrm T} = {\bf P}^{-1} . \)
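As a minimal numerical sketch of orthogonal diagonalization, the following uses NumPy's `eigh` routine, which returns an orthogonal matrix of eigenvectors for a symmetric input; the particular matrix A is a hypothetical example chosen for illustration:

```python
import numpy as np

# Hypothetical 3x3 symmetric matrix chosen for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# For a symmetric matrix, eigh returns the eigenvalues and an
# orthogonal matrix P whose columns are orthonormal eigenvectors.
eigenvalues, P = np.linalg.eigh(A)
Lambda = np.diag(eigenvalues)

# P^T A P = Lambda (diagonal) and P^T = P^(-1):
assert np.allclose(P.T @ A @ P, Lambda)
assert np.allclose(P.T @ P, np.eye(3))
```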
Theorem: Let A be a square n × n matrix.
Matrix A is orthogonally diagonalizable if and only if
A is symmetric
(\( {\bf A} = {\bf A}^{\mathrm T} \) ).
The matrix A is unitarily diagonalizable if and only if A is normal
(\( {\bf A}\, {\bf A}^{\ast} = {\bf A}^{\ast} {\bf A} \) ). ■
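The normal case can be illustrated with a matrix that is not symmetric, so a real orthogonal diagonalization is impossible but a unitary one exists. The 90° rotation matrix below is a hypothetical example; note that `eig` does not guarantee orthonormal eigenvectors in general, but here the eigenvalues are distinct, so the eigenvectors of this normal matrix are automatically orthogonal:

```python
import numpy as np

# A normal but non-symmetric matrix (A A* = A* A): a 90-degree rotation.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(A @ A.T, A.T @ A)   # normality check

# Its eigenvalues are +-i, so diagonalization requires a unitary U.
eigenvalues, U = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

# U* A U = Lambda and U* = U^(-1):
assert np.allclose(U.conj().T @ A @ U, Lambda)
assert np.allclose(U.conj().T @ U, np.eye(2))
```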
Example: Consider a symmetric matrix A that has the characteristic polynomial \( \chi_{A} (\lambda ) = \det \left( \lambda {\bf I} - {\bf A} \right) = \left( \lambda -1 \right)^2 \left( \lambda -4 \right) .\)
Thus, the distinct eigenvalues of A are \( \lambda_1 =1, \) which has geometric multiplicity 2, and \( \lambda_3 =4. \)
The corresponding eigenvectors are
The vectors \( {\bf u}_1 , \ {\bf u}_2 \) form a basis for the two-dimensional eigenspace corresponding to
\( \lambda_1 =1 , \) while \( {\bf u}_3 \) is the eigenvector corresponding to
\( \lambda_3 =4 . \) Applying the Gram--Schmidt process to \( {\bf u}_1 , \ {\bf u}_2 \) yields
the following orthogonal basis:
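The Gram--Schmidt step used above can be sketched as follows; the vectors u1, u2 are hypothetical stand-ins for a pair of linearly independent eigenvectors spanning a two-dimensional eigenspace:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w = w - (q @ w) * q        # subtract the projection onto each earlier vector
        basis.append(w / np.linalg.norm(w))
    return basis

# Hypothetical eigenvectors spanning a two-dimensional eigenspace.
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, 0.0, 1.0])
q1, q2 = gram_schmidt([u1, u2])
assert abs(q1 @ q2) < 1e-12            # the result is orthogonal
```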
Theorem: Let A be a symmetric or normal square n × n matrix,
with eigenvalues \( \lambda_1 , \ \lambda_2 , \ \ldots , \ \lambda_n \) and
corresponding orthonormal eigenvectors \( {\bf u}_1 , \ {\bf u}_2 , \ \ldots , \ {\bf u}_n . \) Then
which is called the spectral decomposition of a symmetric/normal matrix A. ■
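A quick numerical check of the theorem: the matrix is rebuilt from its eigenvalues and orthonormal eigenvectors as \( {\bf A} = \sum_i \lambda_i {\bf u}_i {\bf u}_i^{\ast} . \) The symmetric matrix below is a hypothetical example:

```python
import numpy as np

# Hypothetical symmetric matrix; eigh gives orthonormal eigenvectors u_i.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, U = np.linalg.eigh(A)

# Spectral decomposition: A = sum_i lambda_i * u_i u_i^T
A_rebuilt = sum(lam * np.outer(u, u) for lam, u in zip(eigenvalues, U.T))
assert np.allclose(A, A_rebuilt)
```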
The term was coined around 1905
by the German mathematician David Hilbert (1862--1943).
A substantial part of Hilbert’s fame rests on a list of 23 research problems he enunciated in 1900 at the
International Mathematical Congress in Paris. In his address, “The Problems of Mathematics,” he surveyed nearly all
the mathematics of his day and endeavoured to set forth the problems he thought would be significant for
mathematicians in the 20th century. Many of the problems have since been solved, and each solution was a noted event.
He reduced geometry to a series of axioms and contributed substantially to the establishment of the formalistic
foundations of mathematics. His work in 1909 on integral equations led to 20th-century research in functional analysis.
His work also established the basis for his work on infinite-dimensional space, later called Hilbert space, a
concept that is useful in mathematical analysis and quantum mechanics.
In 1895, Hilbert became Professor of Mathematics at the University of Göttingen, which in the early twentieth century
was a global hub for renowned mathematicians, and it was here that he enjoyed the company of many notable
colleagues.
Under Hilbert, Göttingen reached its peak as one of the great mathematical centres of the world. No one in recent
years has surpassed his dual capacity for seeing and overcoming the central difficulty of some major topic and for
propounding new problems of vital importance.
On January 23, 1930, David Hilbert reached the mandatory retirement age of 68. Among the many honours bestowed upon
him, he was made an "honorary citizen" of his native town of Königsberg (now Kaliningrad, Russia). He continued
working as co-editor of Mathematische Annalen until 1939. The last years of Hilbert’s life, and those of many of his
colleagues and students, were overshadowed by Nazi rule. Hilbert courageously spoke out against the repression of Jewish
mathematicians in Germany and Austria in the mid-1930s. However, after mass evictions, several suicides, and
assassinations, he eventually fell silent.
Denoting the rank-one projection matrix \( {\bf u}_i {\bf u}_i^{\ast} \)
by \( {\bf E}_i , \) we obtain the spectral decomposition of A:
and each of them has eigenvalues \( \lambda = 1, 0, 0 . \)
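The stated properties of the projection matrices \( {\bf E}_i \) can be verified numerically. The orthonormal basis below is a hypothetical choice, not the one from the example; the checks (idempotency, mutual orthogonality, eigenvalues 1, 0, 0, and resolution of the identity) hold for any orthonormal basis:

```python
import numpy as np

# A hypothetical orthonormal basis of R^3 and its rank-one projections E_i.
u1 = np.array([1.0,  1.0,  1.0]) / np.sqrt(3)
u2 = np.array([1.0, -1.0,  0.0]) / np.sqrt(2)
u3 = np.array([1.0,  1.0, -2.0]) / np.sqrt(6)
E = [np.outer(u, u) for u in (u1, u2, u3)]

for i, Ei in enumerate(E):
    assert np.allclose(Ei @ Ei, Ei)                       # idempotent: E_i^2 = E_i
    assert np.allclose(np.linalg.eigvalsh(Ei), [0, 0, 1]) # eigenvalues 0, 0, 1
    for j, Ej in enumerate(E):
        if i != j:
            assert np.allclose(Ei @ Ej, 0)                # mutually orthogonal
assert np.allclose(sum(E), np.eye(3))                     # resolution of the identity
```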
Using this spectral decomposition, we define two matrix functions corresponding to \( {\Phi}(\lambda ) = \cos \left( \sqrt{\lambda} \,t \right) \) and
\( {\Psi}(\lambda ) = \frac{1}{\sqrt{\lambda}} \,\sin \left( \sqrt{\lambda} \,t \right) \) that do not depend on the choice of branch of the square root:
and the other four are just the negatives of these; so the total number of square roots is 8. Note that we cannot obtain
\( {\bf R}_3 \) and \( {\bf R}_4 \) using either Sylvester's method or the resolvent method, because both are based on the minimal polynomial
\( \psi (\lambda ) = (\lambda -1)(\lambda -4) . \) ■
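The eight square roots can be generated directly from the spectral decomposition: each sign choice \( \pm\sqrt{\lambda_i} \) on each projection \( {\bf E}_i \) yields a square root \( {\bf R} = \pm {\bf E}_1 \pm {\bf E}_2 \pm 2\,{\bf E}_3 . \) The sketch below builds a hypothetical symmetric matrix with eigenvalues 1, 1, 4 (matching the example's characteristic polynomial, though not necessarily the example's matrix) and enumerates all eight:

```python
import numpy as np
from itertools import product

# Hypothetical orthonormal eigenvectors; A = 1*E1 + 1*E2 + 4*E3.
u1 = np.array([1.0,  1.0,  1.0]) / np.sqrt(3)
u2 = np.array([1.0, -1.0,  0.0]) / np.sqrt(2)
u3 = np.array([1.0,  1.0, -2.0]) / np.sqrt(6)
E = [np.outer(u, u) for u in (u1, u2, u3)]
A = 1 * E[0] + 1 * E[1] + 4 * E[2]

# Each branch choice +-sqrt(lambda_i) on each projection gives a square root.
roots = []
for s1, s2, s3 in product([1, -1], repeat=3):
    R = s1 * 1 * E[0] + s2 * 1 * E[1] + s3 * 2 * E[2]
    assert np.allclose(R @ R, A)       # every sign pattern squares back to A
    roots.append(R)
assert len(roots) == 8                 # eight square roots in total
```

Since the \( {\bf E}_i \) are mutually orthogonal idempotents summing to the identity, squaring any such R collapses the cross terms, which is why every sign pattern works.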