There are two reasons to consider sums of vector spaces. One is to build new vector spaces from old
ones. The other is to decompose a known vector space into a sum of two (smaller) subspaces. Since we study linear
transformations between vector spaces, such sums lead to representations of these linear maps and of the
corresponding matrices in forms that reflect the decomposition. In many important situations, we start with a vector
space V and identify subspaces “internally” from which the whole space V can be built up using the
sum construction. The most fruitful results, however, are obtained for a special kind of sum, called the direct sum.
Let A and B be nonempty subsets of a vector space V. The sum of A and B,
denoted A + B, is the set of all possible sums of elements from both subsets:
\( A+B = \left\{ a+b \, : \, a\in A, \ b\in B \right\} . \)
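For finite subsets, this definition can be checked by brute force. Here is a minimal Python sketch; the particular sets and the tuple representation of vectors are our own illustrative choices:

```python
from itertools import product

# Two finite subsets of R^2, with vectors represented as tuples.
A = {(1, 0), (2, 0)}
B = {(0, 1), (0, 2)}

# A + B collects every possible sum a + b with a in A and b in B.
sum_AB = {(a[0] + b[0], a[1] + b[1]) for a, b in product(A, B)}
print(sum_AB)   # {(1, 1), (1, 2), (2, 1), (2, 2)}
```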
Direct Sums
A vector space V is called the direct sum of V1 and V2
if V1 and V2 are subspaces of V such that
\( V_1 \cap V_2 = \{ 0 \} \) and \( V_1 + V_2 = V. \) This
means that every vector v in V is uniquely represented as a sum of two vectors
\( {\bf v} = {\bf v}_1 + {\bf v}_2 , \quad {\bf v}_1 \in V_1 , \ {\bf v}_2 \in V_2 . \)
We denote that V is the direct sum of V1 and V2 by writing
\( V = V_1 \oplus V_2 . \)
A complement of a subspace U in a vector space V is another subspace W such that V = U ⊕ W; two such subspaces are called mutually complementary. Equivalently, every element of V can be expressed uniquely as the sum of an element of U and an element of W.
Suppose that an n-dimensional vector space V is the direct sum of two subspaces \( V = U\oplus W . \)
Let \( {\bf e}_1 , {\bf e}_2 , \ldots , {\bf e}_k \) be a basis of the linear subspace
U and let \( {\bf e}_{k+1} , {\bf e}_{k+2} , \ldots , {\bf e}_n \) be a basis of the linear subspace
W. Then \( {\bf e}_1 , {\bf e}_2 , \ldots , {\bf e}_n \) form a basis of the whole
linear space V. If, in addition, both U and W are invariant under a linear transformation T : V ⇾ V, then the matrix of T written in this basis is block diagonal:
\( {\bf A} = \begin{bmatrix} {\bf A}_1 & {\bf 0} \\ {\bf 0} & {\bf A}_2 \end{bmatrix} , \)
where \( {\bf A}_1 \) is the \( k \times k \) matrix of the restriction of T to U and \( {\bf A}_2 \) is the \( (n-k) \times (n-k) \) matrix of the restriction of T to W. Therefore, the block diagonal matrix A is the direct sum \( {\bf A} = {\bf A}_1 \oplus {\bf A}_2 \) of two matrices of smaller size.
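The following sketch illustrates this block structure numerically; it assumes numpy and scipy are available, and the blocks A₁ and A₂ are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import block_diag

A1 = np.array([[2.0, 1.0],
               [0.0, 2.0]])   # matrix of T restricted to U (k = 2)
A2 = np.array([[5.0]])        # matrix of T restricted to W (n - k = 1)

# In the combined basis e_1, ..., e_n the matrix of T is block diagonal,
# i.e., the direct sum A = A1 (+) A2.
A = block_diag(A1, A2)
print(A)
# [[2. 1. 0.]
#  [0. 2. 0.]
#  [0. 0. 5.]]
```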
Example 1: Consider the Cartesian plane \( \mathbb{R}^2 , \)
where every element is represented by an ordered pair v = (x,y). This vector has a unique decomposition
\( {\bf v} = (x,y) = {\bf v}_1 + {\bf v}_2 = (x,0) + (0,y) , \) where the sets of vectors of the form (x,0) and
(0,y) can each be identified with the one-dimensional space \( \mathbb{R}^1 = \mathbb{R} . \)
If we choose two arbitrary non-parallel vectors u and v in the plane, then the spans of these vectors generate two
vector spaces that we denote by U and V, respectively. Thus, U and V are two lines
containing the vectors u and v, respectively. Their sum, \( U + V = \left\{ {\bf u} +{\bf v} \,: \
{\bf u} \in U, \ {\bf v} \in V \right\} , \) is the whole plane \( \mathbb{R}^2 . \) In fact, since \( U \cap V = \{ 0 \} , \) this sum is direct: \( \mathbb{R}^2 = U \oplus V . \)
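Finding the two components of a given vector amounts to solving a 2×2 linear system. A short numpy sketch, with u, v, and w chosen arbitrarily:

```python
import numpy as np

u = np.array([1.0, 2.0])    # spans the line U
v = np.array([3.0, 1.0])    # spans the line V (not parallel to u)
w = np.array([5.0, 5.0])    # an arbitrary vector in R^2

# Solve [u | v] [alpha, beta]^T = w; the solution is unique because
# u and v are linearly independent, i.e., R^2 = U (+) V.
alpha, beta = np.linalg.solve(np.column_stack([u, v]), w)
print(alpha * u, beta * v)  # [2. 4.] [3. 1.] -- the components in U and V
```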
Example 2: Let E denote the set of all polynomials containing only even powers:
\( E = \left\{ a_n t^{2n} + a_{n-1} t^{2n-2} + \cdots + a_0 \right\} , \) and let O
be the set of all polynomials containing only odd powers: \( O = \left\{ a_n t^{2n+1} + a_{n-1} t^{2n-1} + \cdots + a_0 t \right\} . \)
Then the set P of all polynomials is the direct sum of these sets: \( P = O\oplus E . \)
It is easy to see that any polynomial (or function) f can be uniquely decomposed into the sum of its even and odd counterparts:
\( f(t) = \frac{1}{2} \left[ f(t) + f(-t) \right] + \frac{1}{2} \left[ f(t) - f(-t) \right] . \)
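This decomposition is easy to verify symbolically; a small sympy sketch with an arbitrary sample polynomial:

```python
from sympy import symbols, expand, Rational

t = symbols('t')
p = 3*t**4 + 5*t**3 - t**2 + 7*t + 2   # an arbitrary polynomial

# Even and odd parts, following the formula above.
p_even = expand(Rational(1, 2) * (p + p.subs(t, -t)))   # 3*t**4 - t**2 + 2
p_odd  = expand(Rational(1, 2) * (p - p.subs(t, -t)))   # 5*t**3 + 7*t

assert expand(p_even + p_odd - p) == 0   # the decomposition recovers p
```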
Example 3: Let us consider the set M of all real (or complex) \( m \times n \)
matrices, and let \( U = \left\{ {\bf A} = \left[ a_{ij} \right] :\, a_{ij} =0 \ \mbox{ for } i > j\right\} \)
be the set of upper triangular matrices, and let
\( W = \left\{ {\bf A} = \left[ a_{ij} \right] :\, a_{ij} =0 \ \mbox{ for } i \le j\right\} \)
be the set of strictly lower triangular matrices. Then \( M = U \oplus W . \) ■
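For square matrices, the decomposition of Example 3 can be checked with numpy's triu and tril helpers:

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # an arbitrary 3x3 matrix

U_part = np.triu(A)        # upper triangular part:  a_ij = 0 for i > j
W_part = np.tril(A, -1)    # strictly lower part:    a_ij = 0 for i <= j

# The two parts share no nonzero positions and add back to A,
# reflecting the direct sum M = U (+) W.
assert np.array_equal(U_part + W_part, A)
```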
Before formulating the Primary Decomposition Theorem, we need to recall some definitions and facts that were explained in other sections.
Recall that the minimal polynomial of a square matrix A (or of the corresponding linear transformation)
is the unique monic polynomial ψ(λ) of least degree that annihilates the matrix A, that is,
ψ(A) = 0. The minimal polynomial \( \psi_u (\lambda ) \) of a
vector \( {\bf u} \in V \ \mbox{ or } \ {\bf u} \in \mathbb{R}^n \) relative to A is the
monic polynomial of least degree such that
\( \psi_u ({\bf A}) {\bf u} = {\bf 0} . \) It follows that \( \psi_u (\lambda ) \)
divides the minimal polynomial ψ(λ) of the matrix A. Moreover, there always exists a vector
\( {\bf u} \in V \ (\mbox{or } \mathbb{R}^n ) \) such that
\( \psi_u (\lambda ) = \psi (\lambda ) . \) This result can be proved by representing
the minimal polynomial as a product of simple factors, to each of which corresponds a subspace; the original vector
space (or \( \mathbb{R}^n \)) is then the direct sum of these subspaces.
A subspace U of a vector space V is said to be T-cyclic with respect to a linear transformation T : V ⇾ V
if there exists a vector \( {\bf u} \in U \)
and a nonnegative integer r such that \( {\bf u}, T\,{\bf u} , \ldots , T^r {\bf u} \)
form a basis for U. Thus, if the degree of the minimal polynomial
\( \psi_u (\lambda ) \) of a vector u is k, then
\( {\bf u}, T\,{\bf u} , \ldots , T^{k-1} {\bf u} \) are linearly independent, and the space
U spanned by these k vectors is T-cyclic.
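The minimal polynomial of a vector relative to a matrix A can be computed by growing the Krylov sequence u, Au, A²u, … until it first becomes linearly dependent. Below is a minimal numpy sketch; the function name and the tolerance are our own choices:

```python
import numpy as np

def minimal_polynomial_of_vector(A, u, tol=1e-10):
    """Ascending coefficients [c_0, ..., c_{k-1}, 1] of the monic
    polynomial psi_u of least degree with psi_u(A) u = 0."""
    krylov = [np.asarray(u, dtype=float)]
    for _ in range(len(krylov[0])):
        new = A @ krylov[-1]
        K = np.column_stack(krylov)
        # Least squares detects whether the new Krylov vector already
        # lies in the span of the previous ones.
        c, *_ = np.linalg.lstsq(K, new, rcond=None)
        if np.linalg.norm(K @ c - new) < tol:
            return np.append(-c, 1.0)
        krylov.append(new)
    raise RuntimeError("degree of psi_u cannot exceed dim V")

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                      # psi(lambda) = (lambda-1)^2
print(minimal_polynomial_of_vector(A, [1, 0]))  # [-1.  1.]      -> lambda - 1
print(minimal_polynomial_of_vector(A, [0, 1]))  # [ 1. -2.  1.]  -> (lambda-1)^2
```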
Theorem (Primary Decomposition Theorem):
Let V be an
n-dimensional vector space (n is finite) and let T be a linear transformation on V.
Then V is the direct sum of T-cyclic subspaces. ■
Let k be the degree of the minimal polynomial ψ(λ) of the transformation T (or of the corresponding matrix
written in a specified basis), and let u be a vector in V with
\( \psi_u (\lambda ) = \psi (\lambda ) . \) Then the space U spanned by
\( {\bf u}, T{\bf u} , \ldots , T^{k-1} {\bf u} \) is T-cyclic. We shall prove that if
\( U \ne V \quad (k \ne n), \) then there exists a T-invariant subspace W such
that \( V = U\oplus W . \) Clearly, by induction on the dimension, W will then be the
direct sum of T-cyclic subspaces and the proof is complete.
To show the existence of W, enlarge the basis
\( {\bf e}_1 = {\bf u}, {\bf e}_2 = T{\bf u} , \ldots , {\bf e}_k = T^{k-1} {\bf u} \) of
U to a basis \( {\bf e}_1 , {\bf e}_2 , \ldots , {\bf e}_k , \ldots , {\bf e}_n \)
of V and let
\( {\bf e}_1^{\ast} , {\bf e}_2^{\ast} , \ldots , {\bf e}_k^{\ast} , \ldots , {\bf e}_n^{\ast} \)
be the dual basis in the dual space. Recall that the dual space consists of all linear forms on V or, equivalently,
of all functionals on V. To simplify notation, let z = ek*, so that \( {\bf z} \left( {\bf e}_j \right) = \delta_{jk} . \)
Consider the subspace U* of the dual space spanned by
\( {\bf z}, T^{\ast} {\bf z} , \ldots , T^{\ast\, k-1} {\bf z} . \)
Since ψ(λ) is also the minimal polynomial of T*, the space U* is
T*-invariant. Now observe that if
\( U^{\ast} \cap U^{\perp} = \{ 0 \} \) and dim U* = k, then
\( V^{\ast} = U^{\ast} \oplus U^{\perp} \) (since dim \( U^{\perp} \) = n − k, the dimensions add up to n), where both U* and
\( U^{\perp} \) are T*-invariant (\( U^{\perp} \) is T*-invariant because U is T-invariant).
This in turn implies the desired decomposition \( V = U^{\perp\perp} \oplus U^{\ast\perp} = U \oplus W , \)
where \( U^{\perp\perp} = U \) and \( U^{\ast\perp} = W \)
are T-invariant.
Finally, we shall prove that \( U^{\ast} \cap U^{\perp} = \{ 0 \} \) and
dim U* = k simultaneously as follows. Suppose that
\( a_0 {\bf z} + a_1 T^{\ast} {\bf z} + \cdots + a_s T^{\ast s} {\bf z} \in U^{\perp} , \)
where \( a_s \ne 0 \) and \( 0 \le s \le k-1 . \) Applying this functional to the vector \( T^{k-1-s} {\bf u} \in U \) gives
\( \left( a_0 {\bf z} + a_1 T^{\ast} {\bf z} + \cdots + a_s T^{\ast\, s} {\bf z} \right) \left( T^{k-1-s} {\bf u} \right) = \sum_{i=0}^s a_i \, {\bf z} \left( T^{k-1-s+i} {\bf u} \right) = a_s \ne 0 \)
because \( {\bf z} \left( T^{j} {\bf u} \right) = {\bf z} \left( {\bf e}_{j+1} \right) = \delta_{j,\, k-1} \) for \( 0 \le j \le k-1 . \) This contradicts membership in \( U^{\perp} . \) Hence \( U^{\ast} \cap U^{\perp} = \{ 0 \} ; \) moreover, a nontrivial vanishing linear combination of \( {\bf z}, T^{\ast} {\bf z} , \ldots , T^{\ast\, k-1} {\bf z} \) would in particular lie in \( U^{\perp} , \) so these vectors are linearly independent and dim U* = k. This completes the proof.
Example 4: Consider a \( 3 \times 3 \) matrix A whose characteristic polynomial is \( \chi (\lambda ) = \left( \lambda -1 \right)^3 \)
while its minimal polynomial is \( \psi (\lambda ) = \left( \lambda -1 \right)^2 . \)
Such a matrix A has two linearly independent eigenvectors, which we denote by u and v.
Let U and V be the one-dimensional subspaces spanned by the vectors u and v, respectively.
The minimal polynomials of these vectors coincide:
\( \psi_u (\lambda ) = \psi_v (\lambda ) = \lambda -1 \) because \( {\bf A}\,{\bf u} = {\bf u} \) and \( {\bf A}\,{\bf v} = {\bf v} . \)
Each of the one-dimensional subspaces U and V is A-cyclic, but the two of them cannot form a direct sum decomposition of
\( \mathbb{R}^3 \) because their dimensions add up to only 2. We choose a vector \( {\bf z} = \left[ 7, -3, 1 \right]^{\mathrm T} , \)
which is perpendicular to both u and v. The matrix A maps z to the vector
\( {\bf A}\,{\bf z} = \left[ 125, 192, 60 \right]^{\mathrm T} , \) which is perpendicular to
neither u nor v. Applying A once more produces \( {\bf A}^2 {\bf z} , \) and direct computation shows that
the vectors z, Az, and A²z are linearly independent. Hence
\( \mathbb{R}^3 \) is the direct sum of A-cyclic subspaces; in fact, \( \mathbb{R}^3 \) itself is A-cyclic, generated by z.
■
Example 5: The infinite set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n , \ldots \right\} \)
forms a basis of the vector space of all polynomials.
■
Annihilators and Direct Sums
Consider a direct sum decomposition of a vector space over a field 𝔽:
\begin{equation} \label{EqDirect.1}
V = S \oplus T .
\end{equation}
Then any linear functional φT ∈ T* can be extended to a linear functional φ on V by setting φ(S) = 0; we call φ the extension of φT by 0. Clearly, φ ∈ S⁰, the annihilator of S. Therefore, the mapping φT ⇾ φ is an isomorphism from T* to S⁰, whose inverse is the restriction to T.
Theorem 2:
Let V = S ⊕ T be a direct sum decomposition of a vector space V. The extension-by-0 map is an isomorphism from T* to S⁰, and so
\[
T^{\ast} \cong S^0 .
\]
If V is finite-dimensional, then
\[
\dim\left( S^0 \right) = \mbox{codim}\left( S \right) = \dim \left( V/S \right) = \dim V - \dim S .
\]
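If V = 𝔽ⁿ and functionals are written in coordinates, the annihilator S⁰ is simply a null space, so the dimension formula can be verified directly. A small sympy sketch with an arbitrary choice of S ⊂ ℝ⁴:

```python
from sympy import Matrix

S_basis = Matrix([[1, 0, 2, 0],
                  [0, 1, 1, 0]])   # rows span S, so dim S = 2, n = 4

# A functional phi (coordinate row vector) annihilates S exactly when
# S_basis * phi^T = 0, so S^0 corresponds to the null space of S_basis.
S0_basis = S_basis.nullspace()
print(len(S0_basis))               # 2 = dim V - dim S = codim S
```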
Example 6:
Let V be the vector
space over ℤ₂ with a countably infinite ordered basis ε = (e₁, e₂, e₃, …). Consider two of its subspaces spanned by disjoint sets of basis vectors: S = span{e₁} and T = span{e₂, e₃, …}. By Theorem 2, the annihilator of S satisfies S⁰ ≅ T* ≅ V*, where the second isomorphism holds because T, like V, has a countably infinite basis.
The annihilator provides a way to describe the dual space of a direct sum.
Theorem 3:
A linear functional on the direct sum V = S ⊕ T can be written
as a sum of a linear functional that annihilates S and a linear functional that annihilates T, that is,
\[
\left( S \oplus T \right)^{\ast} = S^0 \oplus T^0 .
\]
Clearly S⁰ ∩ T⁰ = {0} because any functional that annihilates both S and T must annihilate S ⊕ T
= V. Hence, the sum S⁰ + T⁰ is direct. We have
\[
V^{\ast} = \{ 0\}^0 = \left( S \cap T \right)^0 = S^0 + T^0 = S^0 \oplus T^0 .
\]
Alternatively, let πS and πT denote the projections onto S and T determined by the direct sum, so that πS + πT is the identity map. If φ ∈ V*, then we can write
\[
\phi = \phi \circ \left( \pi_S + \pi_T \right) = \left( \phi \circ \pi_S \right) + \left( \phi \circ \pi_T \right) \in T^0 \oplus S^0 ,
\]
because φ ∘ πS annihilates T and φ ∘ πT annihilates S. Hence V* = S⁰ ⊕ T⁰.
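The projection argument can be illustrated numerically in ℝ², writing functionals as row vectors; the subspaces S and T below are arbitrary illustrative choices:

```python
import numpy as np

s = np.array([1.0, 1.0])    # spans S
t = np.array([1.0, -1.0])   # spans T, so that R^2 = S (+) T
B = np.column_stack([s, t])

# Projections onto S along T, and onto T along S.
P_S = B @ np.diag([1.0, 0.0]) @ np.linalg.inv(B)
P_T = B @ np.diag([0.0, 1.0]) @ np.linalg.inv(B)

phi = np.array([3.0, 5.0])  # a functional, written as a row vector

phi_in_T0 = phi @ P_S       # phi o P_S annihilates T
phi_in_S0 = phi @ P_T       # phi o P_T annihilates S
print(phi_in_T0 @ t, phi_in_S0 @ s)            # 0.0 0.0
print(np.allclose(phi_in_T0 + phi_in_S0, phi)) # True: phi = sum of the parts
```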