
There are two reasons to consider sums of vector spaces. One is to build new vector spaces from old ones. The other is to decompose a known vector space into a sum of two (smaller) subspaces. Since we consider linear transformations between vector spaces, such sums lead to representations of linear maps and their corresponding matrices in forms that reflect the decomposition. In many important situations, we start with a vector space V and identify subspaces “internally” from which the whole space V can be built up using the sum construction. The most fruitful results, however, are obtained for a special kind of sum, called the direct sum.

Products and Sums

There is a strong connection between products and sums of vector spaces. Before exploring this topic, let us recall some background related to this section. For any two sets A and B, their Cartesian product consists of all ordered pairs (𝑎, b) such that 𝑎 ∈ A and b ∈ B,
\[ A \times B = \left\{ (a,b)\,:\ a \in A, \quad b\in B \right\} . \]
If the sets A and B carry some algebraic structure (in our case, they are vector spaces), then we can define a suitable structure on the product set as well. So a direct product is like a Cartesian product, but with additional structure. In the case of vector spaces, we equip the product with the addition operation
\[ \left( a_1 , b_1 \right) + \left( a_2 , b_2 \right) = \left( a_1 + a_2 , b_1 + b_2 \right) \]
and scalar multiplication
\[ k \left( a , b \right) = \left( k\,a , k\,b \right) , \qquad k \in \mathbb{F}. \]
Here 𝔽 is a field of scalars (either ℚ, the rational numbers, ℝ, the real numbers, or ℂ, the complex numbers). It is customary to denote the direct product of two or more copies of the scalar field by 𝔽² = 𝔽 × 𝔽 or, more generally, 𝔽ⁿ.
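These componentwise operations are easy to experiment with in the Wolfram Language (the language used for the illustrations on this page); in the minimal sketch below, the pairs and the scalar are arbitrary choices for illustration:

(* elements of a direct product, represented as pairs of vectors *)
a1 = {1, 2}; b1 = {3, 4};    (* the pair (a1, b1) *)
a2 = {5, 6}; b2 = {7, 8};    (* the pair (a2, b2) *)
(* componentwise addition: (a1, b1) + (a2, b2) = (a1 + a2, b1 + b2) *)
{a1 + a2, b1 + b2}
(* scalar multiplication: k (a1, b1) = (k a1, k b1) *)
k = 3;
{k a1, k b1}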

Let A and B be nonempty subsets of a vector space V. The sum of A and B, denoted A + B, is the set of all possible sums of elements from the two subsets: \( A+B = \left\{ a+b \, : \, a\in A, \ b\in B \right\} . \) When A and B are subspaces, A + B = span(A ∪ B).

This definition extends naturally to a finite number of subsets X₁, X₂, … , Xₙ of a vector space V. Now we turn to our main topic, sums of subspaces.

Sums of Subspaces

Theorem 1: Let W₁, W₂, … , Wₙ be subspaces of a vector space V over a field 𝔽. Then their sum W₁ + W₂ + ⋯ + Wₙ = {w₁ + w₂ + ⋯ + wₙ | w₁ ∈ W₁, w₂ ∈ W₂, … , wₙ ∈ Wₙ} is a subspace of V, and it is the smallest subspace of V containing all of the subspaces W₁, W₂, … , Wₙ.
Since W₁, W₂, … , Wₙ are subspaces of V and the zero vector belongs to each of them, \[ 0 = 0 + 0 + \cdots + 0 \in W_1 + W_2 + \cdots + W_n . \] Now let v, w ∈ W₁ + W₂ + ⋯ + Wₙ and r ∈ 𝔽. Then v = v₁ + v₂ + ⋯ + vₙ and w = w₁ + w₂ + ⋯ + wₙ, where vᵢ, wᵢ ∈ Wᵢ for all i = 1, 2, … , n. As each Wᵢ is a subspace of V, each partial sum vᵢ + wᵢ belongs to Wᵢ. Hence, their sum is \[ \mathbf{v} + \mathbf{w} = \sum_{i=1}^n \left( \mathbf{v}_i + \mathbf{w}_i \right) \in W_1 + \cdots + W_n . \] Similarly, r vᵢ ∈ Wᵢ for all i = 1, 2, … , n, so the scalar multiple becomes \[ r\,\mathbf{v} = \sum_{i=1}^n r\,\mathbf{v}_i \in W_1 + \cdots + W_n . \] Therefore, W₁ + W₂ + ⋯ + Wₙ is a subspace of V.

Now to prove that W₁ + W₂ + ⋯ + Wn is the smallest subspace containing W₁, W₂, … , Wn, we will show that any subspace of V containing W₁, W₂, … , Wn contains the sum W₁ + W₂ + ⋯ + Wn.

Let W be any subspace of V containing W₁, W₂, … , Wₙ. Let w = w₁ + w₂ + ⋯ + wₙ ∈ W₁ + W₂ + ⋯ + Wₙ, where wᵢ ∈ Wᵢ for all i = 1, 2, … , n. Since each Wᵢ ⊆ W and W is a subspace of V, the sum w = w₁ + w₂ + ⋯ + wₙ belongs to W. Hence W₁ + W₂ + ⋯ + Wₙ ⊆ W.

   
Example 1: Let V = ℝ². Consider W₁ = { (x, y) : x = 2y, x, y ∈ ℝ } and W₂ = { (x, y) : x = −2y, x, y ∈ ℝ }. Then W₁ and W₂ are subspaces of V.
[Figure: the subspaces W₁ and W₂, two lines through the origin in ℝ².]

Any vector (x, y) ∈ ℝ² can be written as a sum of elements of W₁ and W₂ as follows: \[ \mathbb{R}^2 \ni (x, y) = \left( \frac{x+ 2\,y}{2} , \ \frac{x + 2\,y}{4} \right) + \left( \frac{x - 2\,y}{2} , \ \frac{2\,y -x}{4} \right) \in W_1 + W_2 . \] As W₁ + W₂ is a two-dimensional subspace of ℝ², this implies that W₁ + W₂ = ℝ². Also observe that the representation of any vector as a sum of elements of W₁ and W₂ is unique here.    ■
End of Example 1
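As a quick symbolic check of the decomposition in Example 1, the following Wolfram Language sketch verifies that the two components lie in W₁ and W₂ and sum to (x, y):

(* components of (x, y) proposed in Example 1 *)
w1 = {(x + 2 y)/2, (x + 2 y)/4};    (* candidate element of W1 *)
w2 = {(x - 2 y)/2, (2 y - x)/4};    (* candidate element of W2 *)
Simplify[w1 + w2 == {x, y}]         (* True: the parts sum to (x, y) *)
Simplify[w1[[1]] == 2 w1[[2]]]      (* True: w1 satisfies x = 2 y    *)
Simplify[w2[[1]] == -2 w2[[2]]]     (* True: w2 satisfies x = -2 y   *)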
Let V be a linear space over some field 𝔽 that contains a family of subspaces { Xᵢ }ᵢ∈I, where I is some set (finite or not) of indices. This family of subspaces {Xᵢ} is called independent when a finite sum \( \displaystyle \quad \sum \mathbf{x}_j \quad \) of elements xⱼ ∈ Xⱼ, taken from distinct subspaces, can vanish only if xⱼ = 0 for all j ∈ I.
As we shall mainly be concerned with finite families of subspaces, we shall restrict the index set to be finite, or ℕ. This is only a notational simplification.

Theorem 1A: Let { Xᵢ }ᵢ≥0 be an independent family of subspaces of V. Choose subsets Sᵢ ⊂ Xᵢ (i ≥ 0). If each Sᵢ is linearly independent, then the union ∪ᵢ≥0 Sᵢ ⊂ V is also linearly independent.
Let S₀ = {eᵢ : i ∈ I₀} ⊂ X₀, S₁ = {εⱼ : j ∈ I₁} ⊂ X₁, … be linearly independent subsets. We must show that the union of these sets is linearly independent in V. Consider any linear relation \[ \sum_i a_i \mathbf{e}_i + \sum_j b_j \varepsilon_j + \cdots = 0 \] (finitely many summands, each containing at most finitely many nonzero terms). Writing x₀ = Σᵢ aᵢ eᵢ ∈ X₀, x₁ = Σⱼ bⱼ εⱼ ∈ X₁, and so on, the independence of the subspaces Xᵢ forces \[ \mathbf{x}_0 = 0 , \quad \mathbf{x}_1 = 0 , \quad \ldots . \] By linear independence of {eᵢ} in X₀, the first equality \( \displaystyle \quad \mathbf{x}_0 = \sum a_i \mathbf{e}_i = 0 \quad \) implies that all aᵢ are zero. By linear independence of { εⱼ } in X₁, the second equality \( \displaystyle \quad \mathbf{x}_1 = \sum b_j \varepsilon_j = 0 \quad \) implies that all bⱼ are zero, and so on.
   
Example 1A: It is important for every country to conduct a statistical study of its population and its growth. Let N(t) be the total population at time t, which may be treated as a positive real number rather than an integer for simulation purposes. A more informative description lists the partial counts nᵢ(t) in different age groups (1 ≤ i ≤ m): \[ N(t) = n_1 (t) + n_2 (t) + \cdots + n_m (t) . \] This data is an m-tuple, hence a generalized vector. A rough but useful separation of the population into three groups is \[ N(t) = n_1 (t) + n_2 (t) + n_3 (t) , \] where n₁(t) is the number of people at most 21 years old, n₂(t) is the number of mature adults, and n₃(t) is the number of seniors. However, official statistics use a finer partition with 20 groups of 5 years each. (They even separate men and women, single, married, divorced, etc., thus producing a rather large matrix of data.)

In 1945, P.H. Leslie published a paper on how matrices could be used to predict the evolution of populations. Patrick Holt Leslie (1900–1972), nicknamed "George", was a Scottish physiologist best known for his contributions to population dynamics, including the development of the Leslie matrix, a mathematical tool widely used in ecological and demographic studies.

The Leslie matrix model requires that the interval between consecutive observations have the same length as each age group. Let \[ {\bf n}(0) = \begin{pmatrix} n_1 (0) \\ n_2 (0) \\ \vdots \\ n_m (0) \end{pmatrix} \] be the initial age distribution vector, displaying the number of humans in each age group at time t₀ = 0, and let n(tₖ) be the number of people in each group at time tₖ, i.e., the age distribution vector at time tₖ.

During a 5-year time span, deaths, births, and aging are expected in each age group. Hence, for i = 1, 2, … , m, let bᵢ denote the expected number of offspring per member of age group i between the times tₖ and tₖ₊₁, and, for i = 1, 2, … , m − 1, let sᵢ be the proportion of people in group i at time tₖ that are expected to be in group i + 1 at time tₖ₊₁.

It follows that \[ n_1 (t_{k+1}) = n_1 (t_k )\,b_1 + n_2 (t_k )\,b_2 + \cdots + n_m (t_k )\,b_m , \] and for i = 2, 3, … , m, \[ n_i (t_{k+1}) = s_{i-1}\,n_{i-1} (t_k ). \] This leads to the Leslie matrix \[ \mathbf{L} = \begin{bmatrix} b_1 & b_2 &b_3& \cdots & b_m \\ s_1 &0&0& \cdots & 0 \\ 0& s_2 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0& \cdots & s_{m-1}&0 \end{bmatrix} . \] Now we can rewrite the Leslie equations in succinct matrix/vector form \[ {\bf n}(t_{k+1}) = \mathbf{L}\,\mathbf{n}(t_k ) \] and, in general, for k = 0, 1, 2, … , \[ {\bf n}(t_{k}) = \mathbf{L}^k \mathbf{n}(0) . \]    ■

End of Example 1A
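A small numerical illustration of the Leslie model: the birth rates bᵢ and survival proportions sᵢ below are hypothetical values for m = 3 age groups, not real demographic data.

(* hypothetical Leslie matrix for three age groups *)
L = {{0, 1.5, 0.8},     (* birth rates b1, b2, b3 *)
     {0.9, 0, 0},       (* survival proportion s1 *)
     {0, 0.7, 0}};      (* survival proportion s2 *)
n0 = {100, 60, 40};     (* initial age distribution vector n(0) *)
(* age distribution after five observation intervals: n(t_5) = L^5 n(0) *)
n5 = MatrixPower[L, 5] . n0
Total[n5]               (* total population N(t_5) *)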
Corollary 1A: Let { Xᵢ }ᵢ≥0 be an independent family of nonzero vector subspaces of a finite-dimensional space V. Then this family is finite and \[ \sum_i \dim (X_i ) \leqslant n = \dim V . \]
The union of bases βᵢ ⊂ Xᵢ is a linearly independent subset of V and hence has at most n = dim V elements. Since each Xᵢ is nonzero, each βᵢ is nonempty, so the family is finite and Σᵢ dim(Xᵢ) ≤ n.
   

Direct Sums

Let α be a set of generators of a subspace A ⊆ V (meaning that span(α) = A) and let β be a set of generators of a subspace B ⊆ V. Their union α ∪ β generates the sum A + B. If x ∈ A, y ∈ B, and w ∈ A ∩ B, then the identity

\[ \mathbf{x} + \mathbf{y} = \underbrace{(\mathbf{x} + \mathbf{w})}_{\in A} + \underbrace{(\mathbf{y} - \mathbf{w})}_{\in B} \]

shows that elements of A + B may have several representations as sums x + y whenever A ∩ B contains a nonzero vector w.

Now we come to a particular but very important case of sums of subspaces, when every vector of a vector space V can be uniquely decomposed into a sum of vectors from the given subspaces.

A vector space V is called the direct sum of V₁ and V₂ if V₁ and V₂ are subspaces of V such that \( V_1 \cap V_2 = \{ 0 \} \) and \( V_1 + V_2 = V. \) This means that every vector v of V is uniquely represented as a sum of two vectors \( {\bf v} = {\bf v}_1 + {\bf v}_2 , \quad {\bf v}_1 \in V_1 , \ {\bf v}_2 \in V_2 . \) We indicate that V is the direct sum of V₁ and V₂ by writing \( V = V_1 \oplus V_2 . \)

The symbol ⊕, a plus sign inside a circle, serves as a reminder that we are dealing with a special type of sum of subspaces: each element of the direct sum can be represented in only one way as a sum of elements from the specified subspaces.

Let X₁, X₂, … , Xₙ be subspaces of the vector space V. We say that X₁, X₂, … , Xₙ are independent if \[ \mathbf{x}_1 + \mathbf{x}_2 + \cdots + \mathbf{x}_n = \mathbf{0}, \qquad \mathbf{x}_i \in X_i , \] implies that each xᵢ (i = 1, 2, … , n) is 0.

For n = 2, independence means zero intersection: X₁ and X₂ are independent if and only if X₁ ∩ X₂ = {0}. For n > 2, the independence of subspaces X₁, X₂, … , Xₙ says much more than X₁ ∩ X₂ ∩ ⋯ ∩ Xₙ = {0}: it means that each subspace Xᵢ intersects the sum of the other subspaces only in the zero vector.
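A standard illustration (not from the text above): three distinct lines through the origin of ℝ² intersect pairwise only in {0}, yet they are not independent, since a sum of nonzero vectors taken from them can vanish.

(* spanning vectors of three distinct lines X1, X2, X3 in R^2 *)
x1 = {1, 0};  x2 = {0, 1};  x3 = {1, 1};
(* a vanishing sum with nonzero summands: x1 + x2 + (-x3) = 0 *)
x1 + x2 - x3     (* {0, 0}, so X1, X2, X3 are not independent *)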

Since in vector spaces the uniqueness of the expansion v = x₁ + x₂ + ⋯ + xₙ is equivalent to the independence of the subspaces, we can make the following

Observation: Let V be a finite-dimensional vector space, and let X₁, X₂, … , Xₙ be its subspaces. Then the sum X = X₁ + X₂ + ⋯ + Xₙ is direct if and only if the subspaces X₁, X₂, … , Xₙ are independent.

Theorem 8: Suppose that V₁, V₂, … , Vₙ are subspaces of V. Define a linear map Γ : V₁ × V₂ × ⋯ × Vₙ → V₁ + V₂ + ⋯ + Vₙ by \[ \Gamma \left(\mathbf{v}_1 , \mathbf{v}_2 , \ldots , \mathbf{v}_n \right) = \mathbf{v}_1 + \mathbf{v}_2 + \cdots + \mathbf{v}_n . \] Then V₁ + V₂ + ⋯ + Vₙ is a direct sum if and only if Γ is injective.

Remark:    Since the map Γ is surjective by the definition of sums of vector spaces, the word “injective” in this theorem could be replaced by “invertible”.
The linear map Γ is injective if and only if the only way to write the zero vector as a sum 0 = v₁ + v₂ + ⋯ + vₙ, where vᵢ ∈ Vᵢ (i = 1, 2, … , n), is by taking each vᵢ equal to 0. Thus, Theorem 2 (stated below) shows that Γ is injective if and only if V₁ + V₂ + ⋯ + Vₙ is a direct sum, as desired.
   
Example 2: Let V = ℝ²ˣ² be the vector space of 2-by-2 matrices with real entries. Consider two of its subspaces \[ W_1 = \left\{ \begin{bmatrix} a_{1,1} & a_{1,2} \\ 0 & a_{2,2} \end{bmatrix} \ : \quad a_{1,1} , a_{1,2}, a_{2,2} \in \mathbb{R} \right\} \] and \[ W_2 = \left\{ \begin{bmatrix} a_{1,1} & 0 \\ a_{2,1} & a_{2,2} \end{bmatrix} \ : \quad a_{1,1}, a_{2,1}, a_{2,2} \in \mathbb{R} \right\} . \] Then W₁ and W₂ are subspaces of V. Indeed, W₁ contains only upper triangular matrices and W₂ only lower triangular matrices; any sum of upper (lower) triangular matrices is again an upper (lower) triangular matrix, and multiplication by a scalar does not lead out of W₁ or W₂, respectively.

Moreover, any 2-by-2 matrix in V can be expressed as a sum of elements of W₁ and W₂. However, this expression is not unique, because the intersection W₁ ∩ W₂ consists of all diagonal matrices and hence is not {0}. For example, \[ \begin{bmatrix} 3&4 \\ 1& 2 \end{bmatrix} = \begin{bmatrix} 3&4 \\ 0& 2 \end{bmatrix} + \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} \] and \[ \begin{bmatrix} 3&4 \\ 1& 2 \end{bmatrix} = \begin{bmatrix} 0&4 \\ 0&0 \end{bmatrix} + \begin{bmatrix} 3&0 \\ 1&2 \end{bmatrix} . \]    ■

End of Example 2
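The two decompositions in Example 2 can be checked directly; the following sketch also shows that the two W₁-components differ by a diagonal matrix, i.e., by an element of W₁ ∩ W₂:

m = {{3, 4}, {1, 2}};
dec1 = {{{3, 4}, {0, 2}}, {{0, 0}, {1, 0}}};   (* first decomposition  *)
dec2 = {{{0, 4}, {0, 0}}, {{3, 0}, {1, 2}}};   (* second decomposition *)
Total[dec1] == m && Total[dec2] == m           (* True *)
dec1[[1]] - dec2[[1]]                          (* a diagonal matrix in W1 ∩ W2 *)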
If V = U ⊕ W, the subspace W is called a complement of U in V (and U a complement of W); two such subspaces are mutually complementary. Equivalently, every element of V can be expressed uniquely as the sum of an element of U and an element of W.
   
Example 7: Let ℭ[−π, π] denote the class of all real-valued continuous functions on the closed interval [−π, π]. This is a vector space (with respect to ordinary addition and scalar multiplication), though an infinite-dimensional one. We introduce two subspaces: \[ W_e = \left\{ f(x) \ : \ f(x) = f(-x) \right\} \] and \[ W_o = \left\{ f(x) \ : \ f(x) = -f(-x) \right\} . \] We and Wo are, respectively, the collections of all even functions and all odd functions; they are subspaces of ℭ[−π, π]. For any f ∈ ℭ[−π, π], consider \[ f_1 (x) = \frac{f(x) - f(-x)}{2} , \qquad f_2 (x) = \frac{f(x) + f(-x)}{2} . \] Then \[ f_1 (-x) = \frac{f(-x) - f(x)}{2} = - f_1 (x) \] and \[ f_2 (-x) = \frac{f(-x) + f(x)}{2} = f_2 (x) . \] Thus, f₁ ∈ Wo and f₂ ∈ We. Clearly, f = f₁ + f₂ and hence ℭ[−π, π] = We + Wo. Also observe that We ∩ Wo = {0}: if f ∈ We ∩ Wo, then f(−x) = −f(x) = f(x) for all x ∈ [−π, π], which gives f(x) ≡ 0 on [−π, π]. Thus, we can conclude that ℭ[−π, π] = We ⊕ Wo.    ■
End of Example 7
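The splitting used in Example 7 is easy to carry out explicitly. In the sketch below the sample function f(x) = eˣ is our own arbitrary choice; its odd and even parts are sinh x and cosh x.

f[x_] := Exp[x];                 (* sample continuous function *)
f1[x_] = (f[x] - f[-x])/2;       (* odd part  *)
f2[x_] = (f[x] + f[-x])/2;       (* even part *)
Simplify[f1[x] + f2[x] == f[x]]  (* True *)
Simplify[f1[-x] == -f1[x]]       (* True: f1 is odd  *)
Simplify[f2[-x] == f2[x]]        (* True: f2 is even *)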
    Suppose that an n-dimensional vector space V is the direct sum of two subspaces, \( V = U\oplus W . \) Let \( {\bf e}_1 , {\bf e}_2 , \ldots , {\bf e}_k \) be a basis of the subspace U and let \( {\bf e}_{k+1} , {\bf e}_{k+2} , \ldots , {\bf e}_n \) be a basis of the subspace W. Then \( {\bf e}_1 , {\bf e}_2 , \ldots , {\bf e}_n \) form a basis of the whole space V. Any linear transformation that maps U into U and W into W has, written in this basis, a block-diagonal matrix representation:

\[ {\bf A} = \begin{bmatrix} {\bf A}_{k \times k} & {\bf 0}_{k \times (n-k)} \\ {\bf 0}_{(n-k) \times k} & {\bf A}_{(n-k)\times (n-k)} \end{bmatrix} . \]
Therefore, the block diagonal matrix A is the direct sum of two matrices of smaller size.
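In the Wolfram Language, such a block-diagonal matrix can be assembled with ArrayFlatten; the two blocks below are arbitrary illustrative choices with k = 2 and n − k = 2.

blockU = {{1, 2}, {3, 4}};    (* action on the subspace U *)
blockW = {{5, 6}, {7, 8}};    (* action on the subspace W *)
(* zero off-diagonal blocks are filled in automatically *)
A = ArrayFlatten[{{blockU, 0}, {0, blockW}}];
MatrixForm[A]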
Example 3: Consider the Cartesian plane ℝ², where every element is represented by an ordered pair v = (x, y). This vector has a unique decomposition \( {\bf v} = (x,y) = {\bf v}_1 + {\bf v}_2 = (x,0) + (0,y) , \) where the vectors (x, 0) and (0, y) can be identified with elements of the one-dimensional space ℝ¹ = ℝ.

If we choose two arbitrary nonparallel vectors u and v in the plane, their spans generate two vector spaces that we denote by U and V, respectively. Thus, U and V are the two lines containing the vectors u and v, respectively. Their sum, \( U + V = \left\{ {\bf u} +{\bf v} \,: \ {\bf u} \in U, \ {\bf v} \in V \right\} , \) is the whole plane \( \mathbb{R}^2 . \)

(* the vectors u and v spanning the lines U and V *)
g1 = Graphics[{Blue, Thickness[0.01], Arrow[{{0, 0}, {3, 1}}]}];
g2 = Graphics[{Blue, Thickness[0.01], Arrow[{{0, 0}, {1, 3}}]}];
(* their sum u + v, which lies on neither line *)
g3 = Graphics[{Green, Thickness[0.01], Arrow[{{0, 0}, {4, 4}}]}];
g4 = Graphics[{Cyan, Thickness[0.005], Line[{{1, 3}, {4, 4}}]}];
(* labels *)
g5 = Graphics[Text[Style[
    ToExpression["u + v \\notin U \\cup V", TeXForm, HoldForm],
    FontSize -> 14, FontWeight -> "Bold", FontColor -> Black], {3.6, 3.5}, {0, 1}]];
g6 = Graphics[Text[Style["u", FontSize -> 14, FontWeight -> "Bold",
    FontColor -> Blue], {2.6, 1.2}]];
g7 = Graphics[Text[Style["v", FontSize -> 14, FontWeight -> "Bold",
    FontColor -> Blue], {1.1, 2.8}]];
Show[g1, g2, g3, g4, g5, g6, g7]
The union \( U \cup V \) of two subspaces is not necessarily a subspace.
End of Example 3

Suppose we have a vector space V over a field 𝔽 and subspaces W₁, W₂, … , Wₙ of V. In general, it is not easy to check directly whether every element of V has a unique representation as a sum of elements of W₁, W₂, … , Wₙ. The following theorem provides a practical criterion.

Theorem 2: Let V be a vector space over a field 𝔽 and let W₁, W₂, … , Wₙ be subspaces of V. Then V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wₙ if and only if the following conditions are satisfied:

  1. V = W₁ + W₂ + ⋯ + Wₙ;
  2. the zero vector has only the trivial representation as a sum w₁ + w₂ + ⋯ + wₙ with wᵢ ∈ Wᵢ.
Let V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wₙ. Then by the definition of direct sum, both conditions (i) and (ii) hold. Conversely, suppose that both (i) and (ii) hold. Let v ∈ V have two representations, namely \[ \mathbf{v} = \mathbf{v}_1 + \mathbf{v}_2 + \cdots + \mathbf{v}_n \] and \[ \mathbf{v} = \mathbf{u}_1 + \mathbf{u}_2 + \cdots + \mathbf{u}_n , \] where vᵢ, uᵢ ∈ Wᵢ for all i = 1, 2, … , n. Subtracting these equations gives \[ 0 = \left( \mathbf{v}_1 - \mathbf{u}_1 \right) + \left( \mathbf{v}_2 - \mathbf{u}_2 \right) + \cdots + \left( \mathbf{v}_n - \mathbf{u}_n \right) , \] and since zero has only the trivial representation, vᵢ − uᵢ = 0 for all i = 1, 2, … , n, which implies vᵢ = uᵢ for all i. That is, every vector has a unique representation. Therefore, V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ Wₙ.
   
Example 4: Let E denote the set of all polynomials with only even-degree terms: \( E = \left\{ a_n t^{2n} + a_{n-1} t^{2n-2} + \cdots + a_0 \right\} , \) and let O be the set of all polynomials with only odd-degree terms: \( O = \left\{ a_n t^{2n+1} + a_{n-1} t^{2n-1} + \cdots + a_0 t \right\} . \) Then the space P of all polynomials is the direct sum of these sets: \( P = O\oplus E . \)

It is easy to see that any polynomial (or function) can be uniquely decomposed into the sum of its even and odd parts:

\[ p(t) = \frac{p(t) + p(-t)}{2} + \frac{p(t) - p(-t)}{2} . \]
End of Example 4

Theorem 6: Let V be a finite-dimensional vector space over a field 𝔽 and let V₁, V₂ be two subspaces of V. Then \[ \dim \left( V_1 + V_2 \right) = \dim \left( V_1 \right) + \dim \left( V_2 \right) - \dim \left( V_1 \cap V_2 \right) . \]

Let V₁, V₂ be two subspaces of a finite-dimensional vector space V. Then their intersection V₁ ∩ V₂ is also a subspace of V. Let β = {u₁, u₂, … , uₖ} be a basis of V₁ ∩ V₂. Since this intersection is a subspace of V₁, the set β is linearly independent in V₁, and hence it can be extended to a basis γ = {u₁, … , uₖ, v₁, v₂, … , vₘ} of V₁. Similarly, let δ = {u₁, … , uₖ, w₁, w₂, … , wₙ} be a basis of V₂. Then \[ \varepsilon = \left\{ \mathbf{u}_1 , \ldots , \mathbf{u}_k , \mathbf{v}_1 , \ldots , \mathbf{v}_m , \mathbf{w}_1 , \ldots , \mathbf{w}_n \right\} \] is a spanning set of V₁ + V₂, because every vector of V₁ + V₂ is the sum of a vector of V₁ and a vector of V₂, each of which is a linear combination of elements of ε. Now we show that ε is a basis for V₁ + V₂; it suffices to show that ε is linearly independent. Let λ₁, … , λₖ, μ₁, … , μₘ, ξ₁, … , ξₙ be scalars such that \[ \lambda_1 \mathbf{u}_1 + \cdots + \lambda_k \mathbf{u}_k + \mu_1 \mathbf{v}_1 + \cdots + \mu_m \mathbf{v}_m + \xi_1 \mathbf{w}_1 + \cdots + \xi_n \mathbf{w}_n = 0 . \] This implies \begin{align*} \xi_1 \mathbf{w}_1 + \cdots + \xi_n \mathbf{w}_n &= - \left( \lambda_1 \mathbf{u}_1 + \cdots + \lambda_k \mathbf{u}_k + \mu_1 \mathbf{v}_1 + \cdots + \mu_m \mathbf{v}_m \right) \\ &\quad \in V_1 \cap V_2 , \end{align*} since the left-hand side lies in V₂ (the wᵢ belong to δ) and the right-hand side lies in V₁ (the uᵢ and vᵢ belong to γ). Since β is a basis for V₁ ∩ V₂, there exist scalars α₁, α₂, … , αₖ such that \[ \xi_1 \mathbf{w}_1 + \cdots + \xi_n \mathbf{w}_n = \alpha_1 \mathbf{u}_1 + \cdots + \alpha_k \mathbf{u}_k . \] Since {u₁, … , uₖ, w₁, … , wₙ} is a basis for V₂, this equation implies that ξ₁ = ⋯ = ξₙ = α₁ = ⋯ = αₖ = 0. So we get \[ \lambda_1 \mathbf{u}_1 + \cdots + \lambda_k\mathbf{u}_k + \mu_1 \mathbf{v}_1 + \cdots + \mu_m \mathbf{v}_m = 0 . \] Since {u₁, … , uₖ, v₁, … , vₘ} is a basis of V₁, we conclude that λ₁ = ⋯ = λₖ = μ₁ = ⋯ = μₘ = 0. That is, ε is linearly independent. Thus, ε is a basis for V₁ + V₂, and \begin{align*} \dim\left( V_1 + V_2 \right) &= k+m+n \\ &= \left( k + m \right) + \left( k + n \right) - k \\ &= \dim \left( V_1 \right) + \dim \left( V_2 \right) - \dim \left( V_1\cap V_2 \right) . \end{align*}
   
Example 9: We consider the vector space of real 2-by-2 matrices, ℝ²ˣ², and two of its subspaces \[ V_1 = \left\{ \begin{bmatrix} a_{1,1} & a \\ a & a_{2,2} \end{bmatrix} \ : \ a_{1,1}, a, a_{2,2} \in \mathbb{R} \right\} \] and \[ V_2 = \left\{ \begin{bmatrix} a_{1,1} & - a \\ a & a_{2,2} \end{bmatrix} \ : \ a_{1,1}, a, a_{2,2} \in \mathbb{R} \right\} . \] Here V₁ consists of the symmetric matrices, while in V₂ the off-diagonal entries are negatives of each other. Multiplication by a scalar and addition of matrices from either V₁ or V₂ lead to elements of the same space. Indeed, for a symmetric matrix from V₁, we have \[ \lambda \begin{bmatrix} a_{1,1} & a \\ a & a_{2,2} \end{bmatrix} = \begin{bmatrix} \lambda\, a_{1,1} & \lambda\, a \\ \lambda\, a & \lambda\, a_{2,2} \end{bmatrix} \in V_1 . \] For two matrices from V₂, we have \[ \begin{bmatrix} a_{1,1} & -a \\ a & a_{2,2} \end{bmatrix} + \begin{bmatrix} b_{1,1} & -b \\ b & b_{2,2} \end{bmatrix} = \begin{bmatrix} a_{1,1} + b_{1,1} & - \left( a+b \right) \\ a+b & a_{2,2} + b_{2,2} \end{bmatrix} \in V_2 . \] The space of all 2-by-2 matrices is four-dimensional because its basis consists of four matrices \[ \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&0 \\ 1&0 \end{bmatrix} , \quad \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} . \] The space of symmetric matrices, V₁, is three-dimensional because its basis contains only three matrices (the off-diagonal entries are specified by a single real number): \[ \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} , \quad \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} . \] The subspace V₂ is likewise generated by three matrices: \[ \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&-1 \\ 1&0 \end{bmatrix} , \quad \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} . \] The intersection of the two subspaces consists of the diagonal matrices and is two-dimensional: \[ V_1 \cap V_2 = \left\{ \begin{bmatrix} a_{1,1}&0 \\ 0&a_{2,2} \end{bmatrix} \ : \quad a_{1,1}, a_{2,2} \in \mathbb{R} \right\} . \] Therefore, dim(V₁) = 3, dim(V₂) = 3, and dim(V₁ ∩ V₂) = 2. From Theorem 6, we get \[ \dim\left( V_1 + V_2 \right) = \dim \left( V_1 \right) + \dim \left( V_2 \right) - \dim \left( V_1 \cap V_2 \right) = 3 + 3 - 2 = 4 = \dim\left( \mathbb{R}^{2\times 2} \right) , \] so V₁ + V₂ = ℝ²ˣ².    ■
End of Example 9
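The dimension count in Example 9 can be confirmed by rank computations, flattening each basis matrix into a vector of ℝ⁴:

basisV1 = Flatten /@ {{{1, 0}, {0, 0}}, {{0, 1}, {1, 0}}, {{0, 0}, {0, 1}}};
basisV2 = Flatten /@ {{{1, 0}, {0, 0}}, {{0, -1}, {1, 0}}, {{0, 0}, {0, 1}}};
MatrixRank[basisV1]                          (* 3 = dim V1 *)
MatrixRank[basisV2]                          (* 3 = dim V2 *)
MatrixRank[Join[basisV1, basisV2]]           (* 4 = dim(V1 + V2) *)
3 + 3 - MatrixRank[Join[basisV1, basisV2]]   (* 2 = dim(V1 ∩ V2) *)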

Theorem 7: Let V be a finite-dimensional vector space over a field 𝔽, and let V₁, V₂, … , Vₙ be subspaces of V such that V = V₁ + V₂ + ⋯ + Vₙ and dim(V) = dim(V₁) + dim(V₂) + ⋯ + dim(Vₙ). Then V = V₁ ⊕ V₂ ⊕ ⋯ ⊕ Vₙ.

Let V be a finite-dimensional vector space with subspaces V₁, V₂, … , Vₙ. Consider a basis βᵢ of Vᵢ for each i = 1, 2, … , n and let β = β₁ ∪ β₂ ∪ ⋯ ∪ βₙ. Then β spans V because V = V₁ + V₂ + ⋯ + Vₙ. Now suppose that β is linearly dependent. Then at least one of its vectors can be written as a linear combination of the others, so that dim(V) < dim(V₁) + dim(V₂) + ⋯ + dim(Vₙ), which is a contradiction. Therefore, β is linearly independent and hence is a basis of V.

Now let 0 = v₁ + v₂ + ⋯ + vₙ, where vᵢ ∈ Vᵢ. Since βᵢ is a basis for Vᵢ, each vᵢ can be expressed as a linear combination of elements of βᵢ; hence 0 can be written as a linear combination of elements of β. As β is a basis for V, all the coefficients must be zero, so vᵢ = 0 for all i = 1, 2, … , n. Therefore, V = V₁ ⊕ V₂ ⊕ ⋯ ⊕ Vₙ.

   

Annihilators and Direct Sums

Consider a direct sum decomposition of a vector space over a field 𝔽:
\begin{equation} \label{EqDirect.1} V = S \oplus T . \end{equation}
Then any linear functional \( \varphi_T \in T^{\ast} \) can be extended to a linear functional \( \varphi \) on V by setting \( \varphi (S) = 0 \); we call this map the extension by 0. Clearly, \( \varphi \in S^0 \), the annihilator of S. Therefore, the mapping \( \varphi_T \mapsto \varphi \) is an isomorphism from T* to S⁰, whose inverse is the restriction to T.

Theorem 4: Let V = S ⊕ T be a direct sum decomposition of a vector space V. The extension-by-0 map is an isomorphism from T* to S⁰, and so \[ T^{\ast} \cong S^0 . \] If V is finite-dimensional, then \[ \dim\left( S^0 \right) = \mbox{codim}\left( S \right) = \dim \left( V/S \right) = \dim V - \dim S . \]

Example 6: Let V be the vector space over ℤ₂ with a countably infinite ordered basis ε = (e₁, e₂, e₃, …). We consider two of its subspaces spanned by two complementary sets of basis elements: S = span{e₁} and T = span{e₂, e₃, …}. By Theorem 4, the annihilator of S is isomorphic to the dual of T: S⁰ ≅ T* ≅ V* (the latter isomorphism holds because T, like V, has a countably infinite basis).    ■
End of Example 6
    The annihilator provides a way to describe the dual space of a direct sum.

Theorem 5: A linear functional on the direct sum V = S ⊕ T can be written as the sum of a linear functional that annihilates S and a linear functional that annihilates T; that is, \[ \left( S \oplus T \right)^{\ast} = S^0 \oplus T^0 . \]

Clearly S⁰ ∩ T⁰ = {0}, because any functional that annihilates both S and T must annihilate S ⊕ T = V. Hence, the sum S⁰ + T⁰ is direct. We have \[ V^{\ast} = \{ 0\}^0 = \left( S \cap T \right)^0 = S^0 + T^0 = S^0 \oplus T^0 . \] Alternatively, let \( \pi_S \) and \( \pi_T \) be the projections of V onto S and T determined by the decomposition V = S ⊕ T. Since \( \pi_S + \pi_T \) is the identity map, any φ ∈ V* can be written as \[ \phi = \phi \circ \left( \pi_S + \pi_T \right) = \left( \phi \circ \pi_S \right) + \left( \phi \circ \pi_T \right) \in S^0 \oplus T^0 \] (note that \( \phi \circ \pi_T \) annihilates S and \( \phi \circ \pi_S \) annihilates T), so V* ⊆ S⁰ ⊕ T⁰.
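In coordinates, annihilators reduce to null-space computations: a functional given by a coefficient row vector c annihilates S exactly when c is orthogonal to every spanning vector of S. The sketch below works in ℝ⁴ with an arbitrarily chosen S and confirms that dim S⁰ = dim V − dim S.

(* S ⊂ R^4 spanned by two arbitrarily chosen vectors *)
spanS = {{1, 0, 1, 0}, {0, 1, 0, 1}};
(* coefficient vectors of the functionals vanishing on S *)
annihilatorS = NullSpace[spanS]
Length[annihilatorS] == 4 - MatrixRank[spanS]    (* True: dim S0 = codim S *)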
   

 

 

Exercises

  1. Suppose \[ U = \left\{ (x, x, y, y) \in \mathbb{F}^4 \ : \ x, y \in \mathbb{F} \right\} . \] Find a subspace W of 𝔽⁴ such that 𝔽⁴ = U ⊕ W.
  2. Let \[ U = \left\{ \left( x, y, x+y , x-y , 3\,y \right) \in \mathbb{F}^5 \ : \quad x, y \in \mathbb{F} \right\} . \] Find a subspace W of 𝔽⁵ such that 𝔽⁵ = U ⊕ W.
  3. For any i with 1 ≤ i < n, prove that \[ V_1 + V_2 + \cdots + V_n = \left( V_1 + V_2 + \cdots + V_i \right) + \left( V_{i+1} + \cdots + V_n \right) . \]
  4. Suppose \[ U = \left\{ \left( x, y, x+y , x-y , 3\,y \right) \in \mathbb{F}^5 \ : \quad x, y \in \mathbb{F} \right\} . \] Find three subspaces W₁, W₂, W₃ of 𝔽⁵, none of which equals {0}, such that 𝔽⁵ = W₁ ⊕ W₂ ⊕ W₃.
  5. Prove or give a counterexample: if V₁, V₂, W are subspaces of V such that \[ V = V_1 \oplus W \quad\mbox{and} \quad V = V_2 \oplus W , \] then V₁ = V₂.
  6. Suppose U is a subspace of a vector space V. What is U + U ?

References

  1. Axler, S., Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer, 2015. ISBN 978-3-319-11079-0.
  2. Beezer, R.A., A First Course in Linear Algebra, 2017.
  3. Dillon, M., Linear Algebra, Vector Spaces, and Linear Transformations. American Mathematical Society, Providence, RI, 2023.
  4. Halmos, P.R., Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics (2nd ed.). Springer, 1974 [1958]. ISBN 0-387-90093-4.
  5. Leslie, P.H., On the use of matrices in certain population mathematics. Biometrika, 33, 183–212, 1945.
  6. Roman, S., Advanced Linear Algebra. Graduate Texts in Mathematics (2nd ed.). Springer, 2005. ISBN 0-387-24766-1.