Invariant subspaces

Let V be a vector space over a scalar field 𝔽 and T : V ⇾ V a linear operator on V. If W is a subspace of V, we say that W is invariant under T if for each vector w in W the vector Tw is in W, i.e., if T(W) is contained in W.
We do not have any special notation for an invariant subspace, so it is important to recognize that invariance is always relative to both a superspace (V) and a linear transformation (T) or corresponding matrix A = ⟦T⟧; these will sometimes not be mentioned explicitly, but will be clear from the context. Note also that the linear endomorphism involved must have equal domain and codomain: the definition would not make much sense if the outputs were not of the same type as the inputs.
Example 1: As usual, we begin with an example that demonstrates the existence of invariant subspaces. We will return later to see how this example was constructed; for now, just observe how we check which subspaces are invariant.

If T is any linear operator on V, then V is invariant under T, as is the zero subspace. The image or range of T and the null space of T are also invariant under T.

Let T be the linear operator on ℝ² which is represented in the standard ordered basis by the matrix \[ \mathbf{A} = \begin{bmatrix} \phantom{-}0 & 1 \\ -1&0 \end{bmatrix} . \] Then the only subspaces of ℝ² which are invariant under T are ℝ² and the zero subspace. Any other invariant subspace would necessarily have dimension 1. However, if W is the subspace spanned by some non-zero vector x, the fact that W is invariant under T means that x is a characteristic vector, but A has no real characteristic values.    ■

End of Example 1
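The absence of real characteristic values in Example 1 can also be confirmed numerically. The following is a NumPy sketch of ours, not part of the original Mathematica session:

```python
import numpy as np

# Matrix of Example 1: rotation by 90 degrees.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# The characteristic polynomial is x^2 + 1, so the eigenvalues are +/- i.
# With no real eigenvalue there is no real eigenvector, and hence no
# one-dimensional invariant subspace of R^2.
eigvals = np.linalg.eigvals(A)
print(eigvals.real)           # both real parts are 0
print(np.sort(eigvals.imag))  # imaginary parts are -1 and 1
```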
Theorem 1: Kernels of powers are invariant subspaces.
Suppose that T : V ⇾ V is a linear operator. Then the kernel ker(T^k), where k is a positive integer, is an invariant subspace of V.
Suppose that z ∈ ker(T^k). Then \[ T^k \left( T\,\mathbf{z} \right) = T^{k+1} \left( \mathbf{z} \right) = T \left( T^k \mathbf{z} \right) = T(\mathbf{0}) = \mathbf{0} . \] Hence, T(z) ∈ ker(T^k). Thus, ker(T^k) is an invariant subspace of V relative to T.
   
Example 2: Let \( \displaystyle \quad \texttt{D} = {\text d}/{\text d}x \quad \) be the differential operator acting in the space ℝ[x] of all polynomials with real coefficients. Let n be a positive integer and let W≤n = ℝ≤n[x] be the finite-dimensional subspace of all polynomials of degree less than or equal to n. This subspace W≤n is invariant with respect to the derivative operator because \( \displaystyle \quad \texttt{D} \,:\) W≤n ⇾ W≤n−1 ⊂ W≤n.

It is well known from calculus that the derivative operator maps all constants to zero. Therefore, the kernel of the differential operator coincides with W₀ = ℝ, the space of constant polynomials (scalars). The second power of the derivative operator, \( \displaystyle \quad \texttt{D}^2 = \frac{{\text d}^2}{{\text d}x^2} , \quad \) maps W≤n into W≤n−2 ⊂ W≤n. Its kernel consists of all polynomials of degree at most one, which is ℝ≤1[x].

The pattern is clear: any power of the derivative operator, \( \displaystyle \quad \texttt{D}^m = \frac{{\text d}^m}{{\text d}x^m} , \quad \) has kernel W≤m−1, which is an invariant subspace under the derivative operator.    ■
End of Example 2
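This chain of kernels can be watched numerically in a finite-dimensional model. The NumPy sketch below is our own construction, not part of the text: a polynomial in W≤n is stored as its coefficient vector in the monomial basis, so D becomes an (n + 1) × (n + 1) matrix and dim ker(D^m) = m.

```python
import numpy as np

n = 6  # work in W = R_{<=6}[x]; coefficient vectors have length n + 1

# Matrix of the derivative operator in the basis 1, x, ..., x^n:
# D sends x^k to k x^(k-1), so the only nonzero entries are D[k-1, k] = k.
D = np.zeros((n + 1, n + 1))
for k in range(1, n + 1):
    D[k - 1, k] = k

# ker(D^m) consists of the polynomials of degree <= m - 1, so its
# dimension (the nullity) should equal m.
nullities = []
for m in range(1, n + 1):
    Dm = np.linalg.matrix_power(D, m)
    nullities.append((n + 1) - np.linalg.matrix_rank(Dm))
print(nullities)  # [1, 2, 3, 4, 5, 6]
```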
Corollary 1: The kernel of a linear endomorphism is an invariant subspace.
This is the case k = 1 of Theorem 1; we verify it directly. First, the kernel of any endomorphism T is a vector subspace. Let x belong to the kernel, so that Tx = 0. Then λx also belongs to the kernel because T(λx) = λT(x) = λ0 = 0. If two vectors x and y belong to the kernel, then their sum also belongs to it, since T(x + y) = T(x) + T(y) = 0 + 0 = 0. Finally, T maps every kernel vector to 0, which again lies in the kernel, so the kernel is invariant under T.
   
Example 3: We consider a linear transformation that is represented in a particular basis by the matrix \[ {\bf A} = \begin{bmatrix} -152& 71& 22 \\ -332& 155& 48 \\ 22& -10& -3 \end{bmatrix} . \] Using Mathematica, we find the kernel of the operator T, which is called the null space in linear algebra:
A = {{-152, 71, 22}, {-332, 155, 48}, {22, -10, -3}};
NullSpace[A]
{{-1, -4, 6}}
Therefore, the kernel of matrix A is spanned by one vector: W = span{(1, 4, −6)}. We check that TA(W) ⊆ W, where TA denotes multiplication by A.
x = {1, 4, -6};
A . x
{0, 0, 0}
We check that W is also an invariant subspace for A².
A2 = A . A; A2.x
{0, 0, 0}
   ■
End of Example 3

It is possible to determine a wide class of invariant subspaces using the following approach. Let T : V ⇾ V be a linear endomorphism of V, and let U be a linear operator on V that commutes with T, i.e., T U = U T. Let W be the image of U and let N be the null space of U. Both W = U(V) and N = ker(U) are invariant under T. Indeed, if x is in the range of U, say x = Uy, then Tx = T(Uy) = U(Ty), so that Tx is in the range of U.

Similarly, if x is in the null space of U, then Ux = 0 and U(Tx) = T(Ux) = T(0) = 0. Hence, Tx is in the null space N of U.

A particular class of operators that commute with T is formed by polynomials (or analytic functions) of T, so U = g(T), where g is a polynomial. For instance, we might have U = T − λI, where λ is a characteristic value of T. The null space of this U is familiar to us. We see that this example includes the (obvious) fact that the space of characteristic vectors of T associated with the characteristic value λ is invariant under T.
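Here is a NumPy sketch of this observation, using the matrix of Example 3 for T and the hypothetical choice U = T², a polynomial in T and hence commuting with T:

```python
import numpy as np

# T is the matrix of Example 3; U = T @ T is our choice of g(T) with
# g(t) = t^2, so U automatically commutes with T.
T = np.array([[-152.0, 71.0, 22.0],
              [-332.0, 155.0, 48.0],
              [22.0, -10.0, -3.0]])
U = T @ T
print(np.allclose(T @ U, U @ T))  # True: U commutes with T

# Range of U is invariant under T: the columns of T @ U = U @ T are
# combinations of the columns of U, so appending them keeps the rank.
r = np.linalg.matrix_rank(U)
print(np.linalg.matrix_rank(np.hstack([U, T @ U])) == r)  # True

# Kernel of U is invariant under T: if U x = 0, then U (T x) = T (U x) = 0.
x = np.array([1.0, 4.0, -6.0])  # spans ker(T), hence lies in ker(U)
print(U @ x, U @ (T @ x))       # both are the zero vector
```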

Example 4: We consider a singular matrix \[ \mathbf{A} = \begin{bmatrix} 15& -6& 0 \\ 35& -14& 0 \\ 14& -6& 1 \end{bmatrix} . \] Then the matrix \[ \mathbf{U} = \mathbf{A}^2 - 2\,\mathbf{A} + 3\,\mathbf{I} = \begin{bmatrix} -12& 6& 0 \\ -35& 17& 0 \\ -14& 6& 2 \end{bmatrix} \] commutes with matrix A.
A = {{15, -6, 0}, {35, -14, 0}, {14, -6, 1}}; U = {{-12, 6, 0}, {-35, 17, 0}, {-14, 6, 2}}; A.U == U.A
True
Then the kernel of A is spanned by the vector {2, 5, 2}, so W = span{(2, 5, 2)} is an invariant subspace under A. This one-dimensional subspace is also invariant with respect to the operator U, because U maps (2, 5, 2) to three times itself, as Mathematica shows.
U . {2, 5, 2}
{6, 15, 6}
   ■
End of Example 4
For a square matrix A ∈ 𝔽n×n, the eigenspace associated with an eigenvalue λ is the span of all eigenvectors corresponding to λ: \[ \mbox{Null Space of } \left(\lambda\mathbf{I} - \mathbf{A} \right) , \] where I is the identity matrix. We denote this space by ℰλ(A).

Let T : V ⇾ V be a linear operator acting in a vector space V. Suppose further that, for some x ≠ 0, (λI − T)^k(x) = 0 for some positive integer k. Then x is a generalized eigenvector of T with eigenvalue λ. The generalized eigenspace of T for λ is \[ 𝒢_{\lambda} (T) = \left\{ \mathbf{x} \in V \ : \quad \left( \lambda\,I - T \right)^k (\mathbf{x}) = \mathbf{0} \quad \mbox{for some integer $k > 0$} \right\} . \]
Theorem 2: Eigenspaces are invariant subspaces.
Suppose that T : VV is a linear endomorphism with eigenvalue λ and associated eigenspace ℰλ(T). Let W be any subspace of ℰλ(T). Then W is an invariant subspace of V relative to T.
Choose w ∈ W. Then \[ T(\mathbf{w}) = \lambda\,\mathbf{w} \in W . \] Therefore, W is an invariant subspace of V relative to T.
    It is not always the case that a subspace of an invariant subspace is again an invariant subspace; the whole vector space (which is an invariant subspace for any linear endomorphism) provides a counterexample. However, eigenspaces do have this property. Here is an example of the theorem, which also allows us to build several invariant subspaces.
Example 5: Let us consider the matrix \[ \mathbf{A} = \begin{bmatrix} -16141& -31432& 8820& 11144& -28530& 4914 \\ 15816& 30797& -8641& -10919& 27953& -4815 \\ -6292& -12252& 3438& 4343& -11121& 1915 \\ 24852& 48392& -13580& -17157& 43924& -7566 \\ -4520& -8800& 2468& 3120& -7987& 1376 \\ -23164& -45104& 12656& 15992& -40940& 7053 \end{bmatrix} \] This matrix generates a linear operator T = TA : ℝ6×1 ⇾ ℝ6×1 acting in the six-dimensional vector space of column vectors: \[ T\,\mathbf{x} = \mathbf{A}\,\mathbf{x} \in \mathbb{R}^{6\times 1} \quad \mbox{ for any }\quad \mathbf{x} \in \mathbb{R}^{6\times 1} . \] Using Mathematica, we find its right eigenvalues
A = {{-16141, -31432, 8820, 11144, -28530, 4914}, {15816, 30797, -8641, -10919, 27953, -4815}, {-6292, -12252, 3438, 4343, -11121, 1915}, {24852, 48392, -13580, -17157, 43924, -7566}, {-4520, -8800, 2468, 3120, -7987, 1376}, {-23164, -45104, 12656, 15992, -40940, 7053}}
Eigenvalues[A]
{2, -1, -1, 1, 1, 1}
and corresponding eigenvectors
Eigenvectors[A]
{{-4, -1, -3, 4, 4, 0}, {3, -4, -2, -8, 0, 6}, {0, -1, 7, 7, 6, 0}, {104, -49, -40, 0, 0, 100}, {-14, -41, -10, 0, 50, 0}, {148, -63, -80, 100, 0, 0}}
We consider the subspace spanned by the eigenvectors corresponding to the eigenvalue λ₁ = 1: \[ W = \mbox{span} \left\{ \begin{pmatrix} 104 \\ -49 \\ -40 \\ 0 \\ 0 \\ 100 \end{pmatrix} , \ \begin{pmatrix} -14 \\ -41 \\ -10 \\ 0 \\ 50 \\ 0 \end{pmatrix} , \ \begin{pmatrix} 148 \\ -63 \\ -80 \\ 100 \\ 0 \\ 0 \end{pmatrix} \right\} . \] With Mathematica, we verify that every generator of the subspace W remains in W under the action of the map TA.
A . {104, -49, -40, 0, 0, 100}
{104, -49, -40, 0, 0, 100}
A . {-14, -41, -10, 0, 50, 0}
{-14, -41, -10, 0, 50, 0}
A . {148, -63, -80, 100, 0, 0}
{148, -63, -80, 100, 0, 0}
The two-dimensional subspace \[ U = \mbox{span} \left\{ \begin{pmatrix} 104 \\ -49 \\ -40 \\ 0 \\ 0 \\ 100 \end{pmatrix} , \ \begin{pmatrix} -14 \\ -41 \\ -10 \\ 0 \\ 50 \\ 0 \end{pmatrix} \right\} = \left\{ \left. \begin{pmatrix} 104\,a - 14\,b \\ -49\,a - 41\,b \\ -40\,a - 10\,b \\ 0 \\ 50\,b \\ 100\,a \end{pmatrix} \ \right\vert \ a, b \in \mathbb{R} \right\} \] is TA-invariant.

Matrix A also generates an operator S : ℝ1×6 ⇾ ℝ1×6 in the space of row vectors by \[ \mathbb{R}^{1\times 6} \ni \mathbf{x} \,\mapsto \,\mathbf{x}\,\mathbf{A} \in \mathbb{R}^{1\times 6} . \] With Mathematica, we find its left eigenvalues

Eigenvalues[Transpose[A]]
{2, -1, -1, 1, 1, 1}
and corresponding eigenvectors
Eigenvectors[Transpose[A]]
{{-1140, -2220, 623, 787, -2015, 347}, {-14, -204, 856, 577, 0, 257}, {143, 248, 68, -1, 257, 0}, {-10, 28, 4, 0, 0, 25}, {30, 31, -17, 0, 25, 0}, {90, -7, -51, 50, 0, 0}}
So we see that the eigenvalues are the same as for the operator TA, but the eigenvectors are different. We define a subspace spanned by the three eigenvectors corresponding to the characteristic value λ₁ = 1: \begin{align*} W &= \mbox{span} \left\{ \left( -10, 28, 4, 0, 0, 25 \right) , \right. \\ \left. \left( 30, 31, -17, 0, 25, 0 \right) , \left( 90, -7, -51, 50, 0, 0 \right) \right\} . \end{align*}    ■
End of Example 5
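Whether a spanned subspace is invariant can also be tested mechanically with ranks. In the NumPy sketch below, the helper `is_invariant` and the small 3 × 3 matrix are our own illustration, not taken from the example:

```python
import numpy as np

def is_invariant(A, B):
    """True when W = span of the columns of B is invariant under A.

    W is invariant exactly when every column of A @ B already lies in W,
    i.e. appending the columns of A @ B does not raise the rank.
    """
    return np.linalg.matrix_rank(np.hstack([B, A @ B])) == np.linalg.matrix_rank(B)

# Small illustration with a matrix of our own choosing:
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # W = span{e1, e2}
print(is_invariant(A, B))           # True: A e1 = 2 e1, A e2 = e1 + 2 e2
print(is_invariant(A, B[:, 1:]))    # False: span{e2} alone is not invariant
```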
Theorem 3: Generalized Eigenspace is an invariant subspace.
The key is the identity 𝒢λ(T) = ker((λI − T)^n), where n is the dimension of the vector space V; we establish it by two set inclusions and then deduce invariance. First, suppose that x belongs to the generalized eigenspace 𝒢λ(T). Then there is a positive integer k such that (λI − T)^k(x) = 0. This is equivalent to the statement that x belongs to the kernel of (λI − T)^k. No matter what the value of k is, the stabilizing chain of kernels (Theorem 4 below, applied to the operator λI − T) gives \[ \mathbf{x} \in \mbox{ker}\left( (\lambda\mathbf{I} - T)^k \right) \subseteq \mbox{ker}\left( (\lambda\mathbf{I} - T)^n \right) . \] Hence, 𝒢λ(T) ⊆ ker((λI − T)^n).

For the opposite inclusion, suppose y ∈ ker((λI − T)^n). Then (λI − T)^n(y) = 0, so y belongs to 𝒢λ(T) (take k = n in the definition), and thus ker((λI − T)^n) ⊆ 𝒢λ(T). Therefore, 𝒢λ(T) = ker((λI − T)^n). Finally, the operator λI − T commutes with T, so by the commuting-operator argument above, the kernel of any of its powers, in particular 𝒢λ(T), is invariant under T.

   
Example 6: We consider the matrix \[ {\bf A} = \begin{bmatrix} -1508& 3414& -1626& 1561 \\ 4462& -10094& 4808& -4619 \\ 5157& -11669& 5559& -5339 \\ -5844& 13220& -6296& 6049 \end{bmatrix} . \] This matrix generates a linear endomorphism T = TA : ℝ4×1 ⇾ ℝ4×1 by multiplying column vectors by the matrix A from the left. Using Mathematica, we find right eigenvalues and corresponding eigenvectors (including generalized eigenvectors).
A = {{-1508, 3414, -1626, 1561}, {4462, -10094, 4808, -4619}, {5157, -11669, 5559, -5339}, {-5844, 13220, -6296, 6049}} ;
Eigenvalues[A]
{2, 2, 1, 1}
Eigenvectors[A]
{{-2, -7, -9, 4}, {0, 0, 0, 0}, {14, 20, 29, 0}, {0, 0, 0, 0}}
LinearSolve[2*IdentityMatrix[4] - A, {-2, -7, -9, 4}]
{-(7/2), -(5/2), -2, 0}
LinearSolve[IdentityMatrix[4] - A, {14, 20, 29, 0} ]
{387/29, -(81/29), 0, 19}
This 4×4 matrix has two right eigenvalues, λ₁ = 1 and λ₂ = 2. To the former correspond the eigenvector (14, 20, 29, 0) and the generalized eigenvector (387, −81, 0, 551). To λ₂ correspond the eigenvector (−2, −7, −9, 4) and the generalized eigenvector (7, 5, 4, 0).

Using Mathematica, we verify that the span of each pair, an eigenvector together with its generalized eigenvector, forms an invariant subspace.

A = {{-1508, 3414, -1626, 1561}, {4462, -10094, 4808, -4619}, {5157, -11669, 5559, -5339}, {-5844, 13220, -6296, 6049}} ;
B1 = IdentityMatrix[4] - A; B2 = 2*IdentityMatrix[4] - A;
v1 = {14, 20, 29, 0}; v2 = {-2, -7, -9, 4}; u1 = {387, -81, 0, 551}; u2 = {7, 5, 4, 0};
A . v1
{14, 20, 29, 0}
A . v2/2
{-2, -7, -9, 4}
B1 . u1/29
{14, 20, 29, 0}
B2 . u2
{4, 14, 18, -8}
Let us consider the subspace generated by the eigenvector and the generalized eigenvector corresponding to λ₁ = 1 (this is the generalized eigenspace for λ₁): \[ W = \mbox{span} \left\{ \begin{pmatrix} 14\\ 20 \\ 29 \\ 0 \end{pmatrix} , \ \begin{pmatrix} 387 \\ -81 \\ 0 \\ 551 \end{pmatrix} \right\} = \mbox{span} \left\{ \mathbf{v}_1 , \ \mathbf{u}_1 \right\} . \] Note that we write vectors as columns because matrix A acts on vectors from the left. We check that TA(W) ⊆ W. It is sufficient to consider only the generating vectors v₁ and u₁. The previous calculations show that \[ \mathbf{A}\,\mathbf{v}_1 = \mathbf{v}_1 \qquad\mbox{and} \qquad \mathbf{A}\,\mathbf{u}_1 = \mathbf{u}_1 -29\, \mathbf{v}_1 . \]
A . u1 + 29 v1 - u1
{0, 0, 0, 0}
We check that W is also an invariant subspace for A².
A2 = A . A;
A2 . v1
{14, 20, 29, 0}
A2 . u1 - u1 + 2*29*v1
{0, 0, 0, 0}
So we get the relations \[ \mathbf{A}^2 \mathbf{v}_1 = \mathbf{v}_1 , \qquad \mathbf{A}^2 \mathbf{u}_1 = \mathbf{u}_1 - 58\,\mathbf{v}_1 . \]

Matrix A also generates the operator R = RA : ℝ1×4 ⇾ ℝ1×4 acting on row vectors as \[ \mathbb{R}^{1\times 4} \ni \mathbf{x} \ \mapsto \ \mathbf{x}\,\mathbf{A} \in \mathbb{R}^{1\times 4} . \] Using Mathematica, we find left eigenvalues and corresponding eigenvectors

Eigenvalues[Transpose[A]]
{2, 2, 1, 1}
Eigenvectors[Transpose[A]]
{{-113, 256, -122, 117}, {0, 0, 0, 0}, {-260, 588, -280, 269}, {0, 0, 0, 0}}
So we see that matrix A has the same left eigenvalues λ₁ = 1 and λ₂ = 2. However, the corresponding row eigenvectors are different from the column eigenvectors: \[ \mathbf{v}_1 = \left[ -260, 588, -280, 269 \right] , \qquad \mathbf{v}_2 = \left[ -113, 256, -122, 117 \right] . \] Now we find the generalized eigenvectors by solving the linear systems \[ \mathbf{u}_1 \left( \mathbf{I} - \mathbf{A} \right) = \mathbf{v}_1 , \qquad \mathbf{u}_2 \left( 2\mathbf{I} - \mathbf{A} \right) = \mathbf{v}_2 , \] or, after transposing, \( \left( \mathbf{I} - \mathbf{A}^{\mathrm T} \right) \mathbf{u}_1^{\mathrm T} = \mathbf{v}_1^{\mathrm T} \) and \( \left( 2\,\mathbf{I} - \mathbf{A}^{\mathrm T} \right) \mathbf{u}_2^{\mathrm T} = \mathbf{v}_2^{\mathrm T} . \)
v1 = {-260, 588, -280, 269};
LinearSolve[IdentityMatrix[4] - Transpose[A], v1]
{-(17/269), 55/269, -(39/269), 0}
u1 = %*269
{-17, 55, -39, 0}
v2 = {-113, 256, -122, 117};
LinearSolve[2*IdentityMatrix[4] - Transpose[A], v2]
{9/26, 43/26, -(17/13), 0}
u2 = %*26
{9, 43, -34, 0}
Hence the corresponding generalized eigenvectors are \[ \mathbf{u}_1 = \left[ -17, 55, -39, 0 \right] , \qquad \mathbf{u}_2 = \left[ 9, 43, -34, 0 \right] . \]    ■
End of Example 6
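The computations of this example translate directly to NumPy; the following sketch of ours re-checks the numbers found above for λ₁ = 1:

```python
import numpy as np

# The matrix of Example 6 with the eigenvector v1 and generalized
# eigenvector u1 for lambda_1 = 1 found above.
A = np.array([[-1508.0, 3414.0, -1626.0, 1561.0],
              [4462.0, -10094.0, 4808.0, -4619.0],
              [5157.0, -11669.0, 5559.0, -5339.0],
              [-5844.0, 13220.0, -6296.0, 6049.0]])
v1 = np.array([14.0, 20.0, 29.0, 0.0])
u1 = np.array([387.0, -81.0, 0.0, 551.0])
I = np.eye(4)

print(np.allclose(A @ v1, v1))             # True: A v1 = v1
print(np.allclose((I - A) @ u1, 29 * v1))  # True: (I - A) u1 = 29 v1
# Consequently A u1 = u1 - 29 v1, so span{v1, u1} is invariant under A.
print(np.allclose(A @ u1, u1 - 29 * v1))   # True
```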

Theorem 1 can be improved.

Theorem 4: Let T : V ⇾ V be a linear operator on a finite-dimensional vector space V, with n = dim(V). Then there exists an integer m, 0 ≤ m ≤ n, such that \begin{align*} \{ 0 \} &= \mbox{ker}\left( T^0 \right) = \mbox{ker}\left( I \right) \subseteq \mbox{ker}\left( T^1 \right) \subseteq \mbox{ker}\left( T^2 \right) \subseteq \\ & \quad \cdots \subseteq \mbox{ker}\left( T^m \right) = \mbox{ker}\left( T^{m+1} \right) = \cdots . \end{align*}
There are several items to verify in the conclusion as stated. First, we show that ker(T^k) ⊆ ker(T^{k+1}) for any k. Choose z ∈ ker(T^k). Then \[ T^{k+1}(\mathbf{z}) = T \left( T^k (\mathbf{z}) \right) = T(\mathbf{0}) = \mathbf{0} . \] Hence, z ∈ ker(T^{k+1}), and we conclude that ker(T^k) ⊆ ker(T^{k+1}).

Second, we demonstrate the existence of a power m at which consecutive powers have equal kernels; a by-product will be that m can be chosen so that m ≤ n. Suppose, to the contrary, that every inclusion in the chain is proper: \begin{align*} \{ 0 \} = \mbox{ker}\left( T^0 \right) = \mbox{ker}\left( I \right) &\subsetneq \mbox{ker}\left( T^1 \right) \subsetneq \mbox{ker}\left( T^2 \right) \subsetneq \cdots \\ &\subsetneq \mbox{ker}\left( T^n \right) \subsetneq \mbox{ker}\left( T^{n+1} \right) \subsetneq \cdots . \end{align*} Since ker(T^k) is then a proper subspace of ker(T^{k+1}), we get dim(ker(T^{k+1})) ≥ dim(ker(T^k)) + 1. Repeated application of this observation yields \begin{align*} \dim\left( \mbox{ker}\left( T^{n+1} \right) \right) &\ge \dim\left( \mbox{ker}\left( T^{n} \right) \right) +1 \ge \dim\left( \mbox{ker}\left( T^{n-1} \right) \right) +2 \ge \\ &\quad \cdots \ge \dim\left( \mbox{ker}\left( T^{0} \right) \right) + n + 1 = n+1 . \end{align*} Thus, ker(T^{n+1}) has a basis of size at least n + 1, which would be a linearly independent set of size greater than n in the vector space V of dimension n.

This contradiction yields the existence of an integer k such that ker(T^k) = ker(T^{k+1}), so we can define m to be the smallest such integer. From the dimension argument above, applied to the strictly increasing part of the chain, it is clear that m ≤ n.

It remains to show that once two consecutive kernels are equal, all of the remaining kernels are equal. More formally, if ker(T^m) = ker(T^{m+1}), then ker(T^m) = ker(T^{m+j}) for all j ≥ 1. We give a proof by induction on j. The base case j = 1 is precisely the defining property of m.

In the induction step, we assume that ker(T^m) = ker(T^{m+j}) and endeavor to show that ker(T^m) = ker(T^{m+j+1}). The first part of this proof established that ker(T^m) ⊆ ker(T^{m+j+1}), so it remains only to establish the inclusion in the opposite direction. Choose z ∈ ker(T^{m+j+1}). Then \[ \mathbf{0} = T^{m+j+1}(\mathbf{z}) = T^{m+j} \left( T(\mathbf{z}) \right) , \] so T(z) ∈ ker(T^{m+j}) = ker(T^m) by the induction hypothesis. Hence \( T^{m+1}(\mathbf{z}) = T^{m} \left( T(\mathbf{z}) \right) = \mathbf{0} \), that is, z ∈ ker(T^{m+1}) = ker(T^m). So z belongs to ker(T^m), as required.
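The stabilizing chain of Theorem 4 is easy to watch numerically. In the NumPy sketch below the matrix is a hypothetical choice of ours: a 3 × 3 nilpotent Jordan block padded with an invertible 1 × 1 block, so the nullities of the powers grow by one until the chain stabilizes at m = 3 ≤ n = 4.

```python
import numpy as np

# Direct sum of a 3x3 nilpotent Jordan block (its cube is zero) and the
# invertible 1x1 block [5] (an example of our own choosing).
T = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 5.0]])

def nullity(M):
    # dim ker(M) = number of columns minus rank
    return M.shape[1] - np.linalg.matrix_rank(M)

# dim ker(T^k) for k = 0, 1, 2, ...
dims = [nullity(np.linalg.matrix_power(T, k)) for k in range(6)]
print(dims)  # [0, 1, 2, 3, 3, 3]: the kernels strictly grow, then stabilize
```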

   
Example 7: For an invertible matrix the chain of Theorem 4 stabilizes immediately: ker(T^k) = {0} for every k, so m = 0. For instance, the following matrix has nonzero determinant and therefore a trivial kernel.
Det[{{3, 3, 1, -1, 4}, {1, 2, 0, 1, 5}, {3, 1, 2, 4, 0}, {6, 4, 3, 5, 2}, {5, 0, 4, 2, 1}}]
2
The same is true, almost always, of a randomly generated integer matrix:
matrix = RandomInteger[{-5, 5}, {6, 6}]; matrix // MatrixForm
   ■
End of Example 7
Theorem 5: Let T : V ⇾ V be a linear operator on a finite-dimensional vector space V, with n = dim(V), and let λ be an eigenvalue of T. Then 𝒢λ(T) = ker((λI − T)^n).
The conclusion of this theorem is a set equality, so we establish two set inclusions. First, suppose that x ∈ 𝒢λ(T). Then there is a positive integer k such that (λI − T)^k(x) = 0, i.e., x ∈ ker((λI − T)^k). No matter what the value of k is, Theorem 4 applied to the operator λI − T gives ker((λI − T)^k) ⊆ ker((λI − T)^n), so x ∈ ker((λI − T)^n). Conversely, if y ∈ ker((λI − T)^n), then (λI − T)^n(y) = 0, so y ∈ 𝒢λ(T) by definition (take k = n). This proves the equality.
   
Example 8:    ■
End of Example 8

Restrictions of transformations

When a subspace U of V is invariant under the operator T : V ⇾ V, then T induces a linear operator TU on the subspace U.
Let T : V ⇾ V be a linear operator on a finite-dimensional vector space V, and let U be an invariant subspace of V relative to T. Define the restriction of T to U by \[ \left. T\right\vert_U : U \to U , \qquad T_U (\mathbf{x}) = T(\mathbf{x}) \quad \forall \mathbf{x} \in U . \]

Actually, the restricted operator TU is quite a different object from T because its domain is U, not V. When V is finite-dimensional, the invariance of U under the operator T has a simple matrix interpretation. Suppose we choose an ordered basis α = [e₁, e₂, … , en] for V such that β = [e₁, e₂, … , er] is an ordered basis for U (r = dim U ≤ n). Let A = ⟦T⟧ so that

\[ T\left( \mathbf{e}_j \right) = \sum_{i=1}^n a_{i,j} \mathbf{e}_i . \]
Since U is invariant under T, the vector Tej belongs to U for j ≤ r. This means that
\[ T\left( \mathbf{e}_j \right) = \sum_{i=1}^r a_{i,j} \mathbf{e}_i , \qquad j=1,2,\ldots , r . \]
Since the entries 𝑎i,j of matrix A are zero for j ≤ r and i > r, A has the block form
\begin{equation} \label{EqInv.1} \mathbf{A} = \begin{bmatrix} \mathbf{A}_r & \mathbf{B} \\ \mathbf{0} & \mathbf{A}_{n-r} \end{bmatrix} , \end{equation}
where Ar is an r × r matrix, B is an r × (n − r) matrix, 0 is the zero matrix of dimensions (n − r) × r, and An−r is an (n − r) × (n − r) matrix. It should be noted that Ar is precisely the matrix of the induced operator TU in the ordered basis β. It will be proved in Part 6 that the matrix A can be made block diagonal (so B = 0 in the equation above) when the invariant subspaces are generated by eigenvectors or generalized eigenvectors.
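The block form can be checked numerically. In the following NumPy sketch of ours, we take the matrix of Example 3, whose kernel span{(1, 4, −6)} is an invariant subspace, put that vector first in an ordered basis, and conjugate; the entries below the 1 × 1 block Ar vanish.

```python
import numpy as np

# Matrix of Example 3 and a vector spanning its invariant subspace ker(A).
A = np.array([[-152.0, 71.0, 22.0],
              [-332.0, 155.0, 48.0],
              [22.0, -10.0, -3.0]])
w = np.array([1.0, 4.0, -6.0])

# Adapted ordered basis: w first, then e2 and e3 (this P is invertible).
P = np.column_stack([w, [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
M = np.linalg.inv(P) @ A @ P

# W = span{w} is invariant (here A w = 0), so in the adapted basis the
# matrix is block upper triangular: the entries below the 1x1 block vanish.
print(M[1:, 0])  # both entries are zero
```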

The block form of the matrix A = ⟦T⟧ in Eq.\eqref{EqInv.1} allows us to make an important observation regarding the connection between the two transformations T and TU.

Lemma 1: Let U be an invariant subspace for T : VV. The characteristic polynomial for the restriction operator TU divides the characteristic polynomial for T. The minimal polynomial for TU divides the minimal polynomial for T.
We have \[ \mathbf{A} = \begin{bmatrix} \mathbf{A}_r & \mathbf{B} \\ \mathbf{0} & \mathbf{A}_{n-r} \end{bmatrix} , \tag{1} \] where A = ⟦T⟧α and Ar = ⟦TU⟧β. Because of the block form of the matrix, \[ \det\left( \lambda\mathbf{I} - \mathbf{A} \right) = \det\left( \lambda\mathbf{I} - \mathbf{A}_r \right) \,\det\left( \lambda\mathbf{I} - \mathbf{A}_{n-r} \right) . \] That proves the statement about characteristic polynomials. Notice that we used I to represent identity matrices of three different sizes.

The kth power of the matrix A has the block form \[ \mathbf{A}^k = \begin{bmatrix} \mathbf{A}_r^k & \mathbf{B}_k \\ \mathbf{0} & \mathbf{A}_{n-r}^k \end{bmatrix} , \] where Bk is some r × (n−r) matrix. Therefore, any polynomial which annihilates A also annihilates Ar (and An-r too). So, the minimal polynomial for Ar divides the minimal polynomial for A.
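The determinant factorization used in the proof can be illustrated numerically; in this NumPy sketch the blocks Ar, B, and An−r are arbitrary choices of ours.

```python
import numpy as np

# Blocks of our choosing, assembled into a block upper-triangular matrix,
# to illustrate det(lambda I - A) = det(lambda I - A_r) det(lambda I - A_{n-r}).
Ar = np.array([[2.0, 1.0],
               [0.0, 3.0]])          # plays the role of the restriction's matrix
B  = np.array([[5.0, 7.0],
               [11.0, 13.0]])        # arbitrary coupling block
A2 = np.array([[4.0, 0.0],
               [1.0, 5.0]])          # the complementary block A_{n-r}
A  = np.block([[Ar, B],
               [np.zeros((2, 2)), A2]])

# np.poly returns the coefficients of the monic characteristic polynomial;
# multiplying polynomials is convolving their coefficient lists.
char_A = np.poly(A)
char_product = np.convolve(np.poly(Ar), np.poly(A2))
print(np.allclose(char_A, char_product))  # True
```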

   
Example 9:    ■
End of Example 9

 


  1. Axler, S., Linear Algebra Done Right, Springer.
  2. Beezer, R.A., A Second Course in Linear Algebra, 2017.
  3. Gohberg, I., Lancaster, P., and Rodman, L., Invariant Subspaces of Matrices with Applications, SIAM.