Intersections & Spans

Lemma 1: Let U and V be subspaces of an 𝔽-vector space W (where the field of scalars 𝔽 is ℚ, ℝ, or ℂ). Their intersection
\[ U \cap V = \left\{ {\bf v}\,:\, {\bf v} \in U \quad\mbox{and}\quad {\bf v} \in V \right\} \]
is a subspace of W.
The zero vector is in both U and V, so it is in their intersection U ∩ V; thus, U ∩ V is nonempty. If v1, v2 ∈ U ∩ V and k is an arbitrary scalar, then kv1 + v2 belongs to both U and V because each is a subspace. Hence kv1 + v2 ∈ U ∩ V, so U ∩ V is closed under addition and scalar multiplication and is therefore a subspace of W.
Example 4: Let ℙ3 be the set of all polynomials of degree 3 or less. It is a vector space. Consider another vector space ℙ2 of polynomials of degree up to 2. It is a vector space as well. Obviously, ℙ2 is a subspace of ℙ3.
End of Example 4
We raise here the following question: can a vector space V be written as a finite union of proper subspaces? We show in the following examples that already a union of two proper subspaces never exhausts the space, in particular when the scalars are real or complex numbers.
Lemma 2: The union of two subspaces is a subspace if and only if one of the subspaces is contained in the other.
If U ⊆ V or V ⊆ U, then U ∪ V equals V or U and is therefore a subspace. Conversely, suppose U ∪ V is a subspace but neither U nor V is contained in the other. Choose u ∈ U\V and v ∈ V\U. By assumption, w = u + v ∈ U ∪ V, so w belongs to either U or V. In the former case, v = w - u ∈ U, a contradiction; in the latter case, u = w - v ∈ V, again a contradiction.
Example 1: Let ℙ be the vector space of all polynomials; it has the subspaces ℙeven of polynomials containing only even powers of x and ℙodd of polynomials containing only odd powers. However, x² ∈ ℙeven and x ∈ ℙodd, while x + x² ∉ ℙeven ∪ ℙodd, so the union is not a subspace.
Example 2: In the real vector space ℝ² (considered as the xy-plane), the x axis X = { [x, 0]T : x ∈ ℝ } and the y axis Y = { [0, y]T : y ∈ ℝ } are subspaces, but their union is not a subspace of ℝ² because [1, 0]T + [0, 1]T = [1, 1]T ∉ X ∪ Y.
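For readers who want to verify Example 2 numerically, here is a minimal Python/NumPy sketch; the helper names in_x_axis and in_y_axis and the tolerance are our own illustrative choices, not part of any library.

import numpy as np

def in_x_axis(v, tol=1e-12):
    # Membership test for X = { [x, 0]^T }: the second coordinate must vanish.
    return abs(v[1]) < tol

def in_y_axis(v, tol=1e-12):
    # Membership test for Y = { [0, y]^T }: the first coordinate must vanish.
    return abs(v[0]) < tol

e1 = np.array([1.0, 0.0])   # lies on the x axis
e2 = np.array([0.0, 1.0])   # lies on the y axis
s = e1 + e2                 # [1, 1]^T

print(in_x_axis(e1), in_y_axis(e2))   # True True
print(in_x_axis(s) or in_y_axis(s))   # False: the sum escapes X ∪ Y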
The linear span (or simply span) of a set of vectors in a vector space is the intersection of all linear subspaces that contain every vector of the set. Alternatively, the span of a set S of vectors may be defined as the set of all finite linear combinations of elements of S. The linear span of a set of vectors is therefore itself a subspace.
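This definition is easy to test numerically: a vector belongs to the span of S exactly when appending it to a matrix whose columns are the vectors of S does not increase the rank. Below is a hedged Python/NumPy sketch; the function name in_span and the sample vectors are our own choices.

import numpy as np

def in_span(S, v, tol=1e-10):
    # v lies in span(S) iff appending v to the columns of S keeps the rank unchanged.
    A = np.column_stack(S)
    return np.linalg.matrix_rank(np.column_stack([A, v]), tol=tol) == \
           np.linalg.matrix_rank(A, tol=tol)

S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(in_span(S, np.array([2.0, 3.0, 5.0])))   # True:  2*S[0] + 3*S[1]
print(in_span(S, np.array([0.0, 0.0, 1.0])))   # False: not a linear combination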
Example 3: Suppose that a vector space V is the union of two proper subspaces: V = V1 ∪ V2. Since V2 is proper, there is a vector v1 ∉ V2, and v1 ∈ V1 because V = V1 ∪ V2; similarly, there is a vector v2 ∈ V2\V1. Supposing v1 + v2 ∈ V1 leads to the contradiction v2 = (v1 + v2) - v1 ∈ V1, while supposing v1 + v2 ∈ V2 leads to the contradiction v1 = (v1 + v2) - v2 ∈ V2. Since v1 + v2 must lie in V1 or V2, we reach a contradiction either way. Therefore, a vector space can never be written as a union of two proper subspaces.    ■
Let U and V be subspaces of an 𝔽-vector space W. The sum of U and V is the subspace span(U ∪ V). It is denoted by U + V.

The above theorem ensures that any pair of subspaces V and W of a finite dimensional vector space U has a finite dimensional sum

\[ V + W = \mbox{span} \left\{ V\cup W \right\} = \left\{ {\bf v} + {\bf w} \, \big| \, {\bf v} \in V, \ {\bf w} \in W \right\} . \]
Therefore, the sum of two subspaces is the set of all possible sums v + w of vectors taken from each subspace. The sum of two subspaces is a subspace, and it is contained in any subspace that contains V ∪ W. One can also say that V + W is the subspace generated by V and W, which gives a clearer picture of the definition. In practice, V + W contains every linear combination of elements drawn from V and W. We can also think of V + W as the intersection of all (typically infinitely many) subspaces containing both V and W.
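This description suggests a simple numerical recipe: concatenating spanning sets of V and W gives a spanning set of V + W, from which an orthonormal basis can be extracted. A sketch assuming NumPy; the function name sum_basis and the two coordinate planes used as test data are our own.

import numpy as np

def sum_basis(BV, BW, tol=1e-10):
    # Columns of BV span V, columns of BW span W; stack them and keep the
    # left singular vectors belonging to nonzero singular values.
    A = np.hstack([BV, BW])
    U, s, _ = np.linalg.svd(A)
    r = int(np.sum(s > tol))          # dim(V + W)
    return U[:, :r]                   # orthonormal basis of V + W

# V = xy-plane and W = xz-plane inside R^3 (illustrative choice)
BV = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])
BW = np.array([[1.0, 0.0],
               [0.0, 0.0],
               [0.0, 1.0]])
print(sum_basis(BV, BW).shape[1])     # 3: the two planes together fill R^3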

The intersection V ∩ W of two subspaces is always a subspace of their embedding space U. Hence any basis for V ∩ W can be extended to a basis for V, and it can likewise be extended to a basis for W.
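A basis of the intersection can be computed from bases of V and W: a vector lies in both subspaces exactly when it can be written as BV·a = BW·b, i.e. when (a, b) lies in the null space of the block matrix [BV  -BW]. The following NumPy sketch implements this idea; the function name and the test planes are our own.

import numpy as np

def intersection_basis(BV, BW, tol=1e-10):
    # Columns of BV and BW are bases of V and W.  Null-space vectors (a, b) of
    # [BV  -BW] satisfy BV @ a = BW @ b, and BV @ a is then a vector of V ∩ W.
    A = np.hstack([BV, -BW])
    _, s, Vt = np.linalg.svd(A)
    null_mask = np.concatenate([s, np.zeros(A.shape[1] - len(s))]) <= tol
    N = Vt[null_mask].T               # null-space basis of A
    return BV @ N[:BV.shape[1], :]    # columns span V ∩ W

# The xy-plane and the xz-plane in R^3 intersect in the x axis
BV = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
BW = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
print(intersection_basis(BV, BW))     # a single column proportional to e1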

Theorem 3: Let V and W be subspaces of a finite dimensional vector space U. Then
\[ \mbox{dim}\left( V \cap W \right) + \mbox{dim} \left( V + W \right) = \mbox{dim}\left( V \right) + \mbox{dim}\left( W \right) . \]
Let k = dim(V ∩ W). Since V ∩ W is a subspace of both V and W, the preceding theorem ensures that k ≤ dim(V) and k ≤ dim(W). Let p = dim(V) - k and q = dim(W) - k. Let v1, v2, ... , vk be a basis for V ∩ W. Then there are vectors u1, u2, ... , up and w1, w2, ... , wq such that
\[ \left\{ {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_k , {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_p \right\} \]
is a basis for V and
\[ \left\{ {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_k , {\bf w}_1 , {\bf w}_2 , \ldots , {\bf w}_q \right\} \]
is a basis for W. We must show that
\[ \mbox{dim} \left( V + W \right) = (p+k) + (q+k) -k = k+p+q . \]
Since every vector in V+W is the sum of a vector in V and a vector in W, the span of
\[ \left\{ {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_k , {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_p , {\bf w}_1 , {\bf w}_2 , \ldots , {\bf w}_q \right\} \]
is V+W. It suffices to show that the above list of vectors is linearly independent. Suppose that
\[ \sum_{i=1}^k a_i {\bf v}_i + \sum_{i=1}^p b_i {\bf u}_i + \sum_{i=1}^q c_i {\bf w}_i = {\bf 0} . \]
Then
\[ \sum_{i=1}^k a_i {\bf v}_i + \sum_{i=1}^p b_i {\bf u}_i = \sum_{i=1}^q \left( -c_i \right) {\bf w}_i . \]
The right-hand side is in W and the left-hand side is in V, so both sides are in V ∩ W. Thus, there are scalars d1, d2, ... , dk such that
\[ \sum_{i=1}^k a_i {\bf v}_i + \sum_{i=1}^p b_i {\bf u}_i = \sum_{i=1}^k d_i {\bf v}_i . \]
Consequently,
\[ \sum_{i=1}^k \left( a_i - d_i \right) {\bf v}_i + \sum_{i=1}^p b_i {\bf u}_i = {\bf 0} . \]
The linear independence of vectors vi and ui ensures that
\[ b_1 = b_2 = \cdots = b_p =0 \]
and also that ai = di for each i. Substituting b1 = b2 = ... = bp = 0 into the original relation gives
\[ \sum_{i=1}^k a_i {\bf v}_i + \sum_{i=1}^q c_i {\bf w}_i = {\bf 0} . \]
The linear independence of the other list, vi and wi (a basis of W), ensures that
\[ a_1 = a_2 = \cdots = a_k = 0 \qquad\mbox{and}\qquad c_1 = c_2 = \cdots = c_q = 0 . \]
Since all coefficients vanish, the required list is linearly independent, which completes the proof.
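The dimension formula can be checked numerically on randomly generated subspaces. The sketch below assumes NumPy and SciPy; dim(V ∩ W) is measured independently as the number of zero principal angles returned by scipy.linalg.subspace_angles, and the identity of Theorem 3 is then confirmed. The sizes and the random seed are our own choices.

import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)

# Random subspaces of R^7 spanned by 4 and 5 generic columns
BV = rng.standard_normal((7, 4))
BW = rng.standard_normal((7, 5))

dim_V = np.linalg.matrix_rank(BV)                      # 4
dim_W = np.linalg.matrix_rank(BW)                      # 5
dim_sum = np.linalg.matrix_rank(np.hstack([BV, BW]))   # dim(V + W)

# dim(V ∩ W) = number of zero principal angles between the subspaces
angles = subspace_angles(BV, BW)
dim_int = int(np.sum(angles < 1e-6))

print(dim_V + dim_W == dim_sum + dim_int)              # True, matching Theorem 3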
Theorem 4: Let V and W be subspaces of a finite dimensional vector space U, and let k be a positive integer.
  1. If dim(V) + dim(W) > dim(U), then V ∩ W contains a nonzero vector.
  2. If dim(V) + dim(W) ≥ dim(U) + k, then V ∩ W contains k linearly independent vectors.
Assertion 1 is the case k = 1 of assertion 2. Under the hypothesis of part 2,
\[ \mbox{dim} \left( V \cap W \right) = \mbox{dim} (V) + \mbox{dim}(W) - \mbox{dim} \left( V+W \right) \ge \mbox{dim} (V) + \mbox{dim}(W) - \mbox{dim} (U) \ge k , \]
so V ∩ W has a basis comprising at least k vectors. (The first inequality uses dim(V + W) ≤ dim(U), which holds because V + W is a subspace of U; the second is the hypothesis of part 2.)
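As an illustration of Theorem 4, the following NumPy sketch builds two generic subspaces of U = ℝ⁶ with dim(V) = 4 and dim(W) = 5, so that dim(V) + dim(W) = 9 ≥ dim(U) + 3, and confirms via the dimension formula that their intersection has dimension at least 3; the random test data are our own choice.

import numpy as np

rng = np.random.default_rng(1)

# dim(V) + dim(W) = 9 >= dim(U) + 3 with U = R^6, so Theorem 4 promises
# at least 3 linearly independent vectors in V ∩ W.
BV = rng.standard_normal((6, 4))
BW = rng.standard_normal((6, 5))

dim_sum = np.linalg.matrix_rank(np.hstack([BV, BW]))   # dim(V + W) <= 6
dim_int = np.linalg.matrix_rank(BV) + np.linalg.matrix_rank(BW) - dim_sum
print(dim_int >= 3)                                    # True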
Example 5: A sum of subspaces can be smaller than the entire space. Work inside ℙ3, the space of all polynomials of one variable of degree up to 3. Let V be the subspace of linear polynomials \( V = \left\{ a + b\, x \,\big|\, a , b \in \mathbb{R} \right\} \) and let W be the subspace of purely cubic polynomials \( W = \left\{ c\, x^3 \,\big|\, c \in \mathbb{R} \right\} . \) Then V + W is not all of ℙ3. Instead, it is the subspace
\[ V + W = \mbox{span} \left\{ V\cup W \right\} = \left\{ a + b\, x + c\, x^3 \,\big|\, a , b , c \in \mathbb{R} \right\} . \qquad\blacksquare \]
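Identifying a polynomial a0 + a1 x + a2 x² + a3 x³ with its coefficient vector in ℝ⁴ makes this example easy to verify with NumPy; the variable names below are our own.

import numpy as np

# Coefficient vectors (a0, a1, a2, a3) of the relevant polynomials
one = np.array([1.0, 0.0, 0.0, 0.0])   # basis of V: 1 and x
x   = np.array([0.0, 1.0, 0.0, 0.0])
x3  = np.array([0.0, 0.0, 0.0, 1.0])   # basis of W: x^3
x2  = np.array([0.0, 0.0, 1.0, 0.0])   # candidate polynomial x^2

B = np.column_stack([one, x, x3])       # spans V + W
print(np.linalg.matrix_rank(B))                                  # 3, not 4
print(np.linalg.matrix_rank(np.column_stack([B, x2]))
      == np.linalg.matrix_rank(B))                               # False: x^2 is not in V + W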

 

Spans


Theorem 2: If S = { u1, u2, … , ur } is a nonempty set of vectors in a vector space V, then
  1. The span of S (that is, the set of all possible linear combinations of the vectors in S) is a subspace of V.
  2. The set U = span(S) is the "smallest" subspace of V that contains all of the vectors from S in the sense that any other subspace that contains those vectors contains U.
Let U be the set of all possible linear combinations of the vectors in S. We must show that U is closed under vector addition and scalar multiplication. To prove closure under addition, let
\[ {\bf u} = c_1 {\bf u}_1 + c_2 {\bf u}_2 + \cdots + c_r {\bf u}_r \qquad\mbox{and} \qquad {\bf v} = k_1 {\bf u}_1 + k_2 {\bf u}_2 + \cdots + k_r {\bf u}_r \]
be two vectors in U. It follows that their sum can be written as
\[ {\bf u} + {\bf v} = \left( c_1 + k_1 \right) {\bf u}_1 + \left( c_2 + k_2 \right) {\bf u}_2 + \cdots + \left( c_r + k_r \right) {\bf u}_r , \]
which is a linear combination of the vectors in S. Thus, U is closed under vector addition. Similarly, it can be shown that U is closed under scalar multiplication.

Proof of part 2: Let W be any subspace of V that contains all of the vectors in S. Since W is closed under vector addition and scalar multiplication, it contains all linear combinations of the vectors from S and hence contains U.
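Part 2 can also be illustrated numerically: if a subspace W contains the vectors of S, then every linear combination of those vectors stays inside W. A small NumPy sketch with our own randomly chosen data; the helper contains is not a library function.

import numpy as np

rng = np.random.default_rng(2)

def contains(B, v, tol=1e-10):
    # True if v lies in the column space of B (appending v does not raise the rank).
    return np.linalg.matrix_rank(np.column_stack([B, v]), tol=tol) == \
           np.linalg.matrix_rank(B, tol=tol)

# S = two vectors in R^5; W = a larger subspace whose basis includes S
S = rng.standard_normal((5, 2))
W = np.column_stack([S, rng.standard_normal((5, 1))])   # contains S by construction

# Any element of span(S), i.e. any linear combination of S, must lie in W
c = rng.standard_normal(2)
v = S @ c
print(contains(W, v))    # True, as part 2 of the theorem predicts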

Example 6: In the two-dimensional Euclidean space ℝ², consider an arbitrary nonzero vector v. Let U be the set of all real scalar multiples of v. Geometrically, U is the line through the origin in the direction of v. Then U is a subspace of ℝ².

Indeed, for any two vectors x = αv and y = βv, we have

\[ {\bf x} + {\bf y} = \alpha\,{\bf v} + \beta\,{\bf v} = \left( \alpha + \beta \right) {\bf v} \in U . \]
Also, for arbitrary scalar k, we have
\[ k\,{\bf x} = k\,\alpha\,{\bf v} = \left( k\,\alpha \right) {\bf v} \in U . \]
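A short NumPy check of these two closure properties, using our own choice of direction vector and a 2 × 2 determinant test for collinearity:

import numpy as np

v = np.array([2.0, 1.0])               # any nonzero direction vector

def on_line(p, tol=1e-12):
    # p is a scalar multiple of v exactly when the 2x2 determinant of [p | v] vanishes.
    return abs(p[0] * v[1] - p[1] * v[0]) < tol

x = 3.0 * v                            # alpha * v
y = -1.5 * v                           # beta  * v
k = 4.0

print(on_line(x + y))                  # True: U is closed under addition
print(on_line(k * x))                  # True: U is closed under scalar multiplication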
Example 7: Let U be the set of all vectors of the form (a, b − a, 3b, a − 2b), where a and b are arbitrary scalars. That is, let U be the set
\[ U = \left\{ (a, b -a, 3b, a-2b) \, : \, a, b \in \mathbb{F} \right\} . \]
To show that U is a subspace of 𝔽4, we write each vector from U as a column-vector:
\[ \begin{pmatrix} a \\ b - a \\ 3b \\ a-2b \end{pmatrix} = a \begin{pmatrix} \phantom{-}1 \\ -1 \\ \phantom{-}0 \\ \phantom{-}1 \end{pmatrix} + b \begin{pmatrix} \phantom{-}0 \\ \phantom{-}1 \\ \phantom{-}3 \\ -2 \end{pmatrix} = a {\bf v} + b {\bf u} , \]
where
\[ {\bf v} = \begin{pmatrix} \phantom{-}1 \\ -1 \\ \phantom{-}0 \\ \phantom{-}1 \end{pmatrix} , \qquad {\bf u} = \begin{pmatrix} \phantom{-}0 \\ \phantom{-}1 \\ \phantom{-}3 \\ -2 \end{pmatrix} \,\in \mathbb{F}^4 . \]
This calculation shows that U is the span of the two vectors v and u shown above; hence, by Theorem 2, U is a subspace of 𝔽4.    ■
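A brief NumPy check (the sample values a = 2, b = 5 and the variable names are our own) that a vector of the stated form equals a v + b u, with the coefficients recovered by least squares:

import numpy as np

v = np.array([1.0, -1.0, 0.0,  1.0])
u = np.array([0.0,  1.0, 3.0, -2.0])
B = np.column_stack([v, u])

a, b = 2.0, 5.0
w = np.array([a, b - a, 3 * b, a - 2 * b])    # a generic element of U

# Recover the coordinates of w with respect to {v, u}
coords = np.linalg.lstsq(B, w, rcond=None)[0]
print(np.allclose(coords, [a, b]))            # True
print(np.allclose(B @ coords, w))             # True: w = a*v + b*u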