Before we define the dimension of a vector space, we need some preliminary facts, which we formulate as theorems.

Theorem 1: Let V be a vector space over a field 𝔽, and let β = { b1, b2, … , bn } be a basis of V. If a set S = { v1, v2, … , vm } contains more vectors than there are elements in the basis (m > n), then S is linearly dependent.

Since β is a basis, there exist scalars 𝑎i,j ∈ 𝔽, i = 1, 2, … , m, j = 1, 2, … , n, such that \[ {\bf v}_i = a_{i,1} {\bf b}_1 + a_{i,2} {\bf b}_2 + \cdots + a_{i,n} {\bf b}_n , \qquad i=1,2,\ldots , m. \] Let x1, x2, … , xm be scalars; then \begin{align*} {\bf u} &= x_1 {\bf v}_1 + x_2 {\bf v}_2 + \cdots + x_m {\bf v}_m \\ &= \left( x_1 a_{1,1} + \cdots + x_m a_{m,1} \right) {\bf b}_1 + \cdots + \left( x_1 a_{1,n} + \cdots + x_m a_{m,n} \right) {\bf b}_n . \end{align*} Since m > n, the homogeneous system \[ \begin{split} x_1 a_{1,1} + x_2 a_{2,1} + \cdots + x_m a_{m,1} &= 0, \\ x_1 a_{1,2} + x_2 a_{2,2} + \cdots + x_m a_{m,2} &= 0, \\ \vdots \qquad & \vdots \\ x_1 a_{1,n} + x_2 a_{2,n} + \cdots + x_m a_{m,n} &= 0 , \end{split} \] has more unknowns than equations, so it has a nontrivial solution. For such a solution (x1, x2, … , xm), we have x1v1 + x2v2 + ⋯ + xmvm = 0, which means that the vectors { v1, v2, … , vm } are linearly dependent.
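For instance (a hedged Mathematica sketch with arbitrarily chosen vectors), any four vectors in ℝ³ must be linearly dependent, and a dependence relation can be read off from the null space of the matrix whose columns are those vectors:

vecs = {{1, 2, 0}, {0, 1, 1}, {2, 3, -1}, {1, 0, 4}};   (* four vectors in R^3 *)
NullSpace[Transpose[vecs]]   (* a nonzero coefficient vector, here giving -2 v1 + v2 + v3 = 0 *)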
   
Example 1:    ■
End of Example 1

Theorem 2: Let V be a vector space over field 𝔽, and let α = { a1, a2, … , an } and β = { b1, b2, … , bm } be two bases of V. Then n = m.

In view of Theorem 1 (applied to the basis α and the set β), m > n is impossible. Since β is also a basis, interchanging the roles of α and β shows that n > m is impossible as well. Hence n = m.
   
Example 2: A spherical coordinate system is a coordinate system for three-dimensional space ℝ³ in which the position of a point is specified by three numbers: the radial distance of the point from a fixed origin; its zenith angle (also called inclination or polar angle), measured from a fixed zenith direction; and the azimuthal angle of its orthogonal projection on a reference plane that passes through the origin and is orthogonal to the zenith direction, measured from a fixed reference direction in that plane. The spherical coordinates (ρ, θ, ϕ) are related to the Cartesian coordinates (x, y, z) by
\[ \begin{split} x &= \rho\,\cos\theta\,\sin\phi , \\ y &= \rho\,\sin\theta \,\sin\phi , \\ z &= \rho\,\cos\phi , \end{split} \qquad \rho \geqslant 0, \quad \theta \in [0, 2\pi ), \quad \phi \in [0, \pi ] , \]
where the radial, azimuth, and zenith angle coordinates are taken as ρ, θ, and ϕ, respectively. As usual, the location of a point (x, y, z) is specified by the distance ρ of the point from the origin, the zenith (polar) angle ϕ between the position vector and the z-axis (measured down from the north pole), and the azimuthal angle θ from the x-axis to the projection of the position vector onto the xy-plane, analogous to longitude in geographic coordinates:
\[ \rho = \sqrt{x^2 + y^2 + z^2} \geqslant 0 \qquad\mbox{and} \qquad \phi = \arccos \frac{z}{\rho} = \begin{cases} \arctan \frac{\sqrt{x^2 + y^2}}{z} , & \mbox{ if } z > 0, \\ \pi + \arctan \frac{\sqrt{x^2 + y^2}}{z} , & \mbox{ if } z < 0, \\ \frac{\pi}{2} , & \mbox{ if } z = 0 \mbox{ and } x^2 + y^2 \ne 0 , \\ \mbox{undefined} , & \mbox{ if } x=y=z=0 . \end{cases} \]
While the zenith angle is taken from the closed interval 0 ≤ ϕ ≤ π, the azimuth angle belongs to the half-open interval 0 ≤ θ < 2π.

     
az = Graphics[{Black, Thickness[0.01], Arrowheads[0.1], Arrow[{{0, -0.2}, {0, 1.2}}]}];
ax = Graphics[{Black, Thickness[0.01], Arrowheads[0.1], Arrow[{{0.4, 0}, {-0.65, -0.65}}]}];
ay = Graphics[{Black, Thickness[0.01], Arrowheads[0.1], Arrow[{{-0.4, 0}, {1.2, 0}}]}];
line = Graphics[{Blue, Thickness[0.01], Line[{{0, 0}, {0.6, 0.86}}]}];
line1 = Graphics[{Black, Dashed, Line[{{0.6, 0.86}, {0.6, -0.6}}]}];
line2 = Graphics[{Black, Line[{{0, 0}, {0.6, -0.6}}]}];
circle1 = Graphics[{Red, Thick, Circle[{0, 0}, 0.8, {Pi/2, 0.97}]}];
circle2 = Graphics[{Red, Thick, Circle[{0, 0}, 0.5, {-Pi/4, -3*Pi/4}]}];
ar1 = Graphics[{Red, Arrowheads[0.05], Arrow[{{0.36, 0.72}, {0.42, 0.67}}]}];
ar2 = Graphics[{Red, Arrowheads[0.05], Arrow[{{0.3, -0.395}, {0.35, -0.355}}]}];
disk = Graphics[Disk[{0.6, 0.86}, 0.03]];
tx = Graphics[{Black, Text[Style["x", 20], {-0.6, -0.45}]}];
ty = Graphics[{Black, Text[Style["y", 20], {1.1, 0.15}]}];
tz = Graphics[{Black, Text[Style["z", 20], {0.15, 1.1}]}];
tf = Graphics[{Black, Text[Style[\[Phi], 20], {0.18, 0.6}]}];
tt = Graphics[{Black, Text[Style[\[Theta], 20], {0, -0.4}]}];
tp = Graphics[{Black, Text[Style["(x,y,z)", 20], {0.6, 1.0}]}];
tr = Graphics[{Black, Text[Style[\[Rho], 20], {0.5, 0.57}]}];
Labeled[Show[ax, ay, az, line, line1, line2, circle1, circle2, ar1, ar2, disk, tx, ty, tz, tf, tt, tp, tr], "Spherical Coordinates"]
Spherical coordinates.

Unit vectors in spherical coordinates are

\[ \hat{\rho} = \begin{pmatrix} \cos\theta\,\sin\phi \\ \sin\theta \,\sin\phi \\ \cos\phi \end{pmatrix} , \qquad \hat{\theta} = \begin{pmatrix} -\sin\theta \\ \cos\theta \\ 0 \end{pmatrix} , \qquad \hat{\phi} = \begin{pmatrix} \cos\theta \,\cos\phi \\ \sin\theta \,\cos\phi \\ -\sin\phi \end{pmatrix} . \]
They can be expressed through Cartesian unit vectors as follows:
\[ \begin{split} \hat{\rho} &= \cos\theta\,\sin\phi \,\hat{\bf x} + \sin\theta \,\sin\phi \,\hat{\bf y} + \cos\phi\,\hat{\bf z} , \\ \hat{\theta} &= -\sin\theta \,\hat{\bf x} + \cos\theta \,\hat{\bf y} , \\ \hat{\phi} &= \cos\theta\,\cos\phi\,\hat{\bf x} + \sin\theta \,\cos\phi\,\hat{\bf y} -\sin\phi\,\hat{\bf z} , \end{split} \]
and
\[ \begin{split} \hat{\bf x} &= {\bf i} = \cos\theta\,\sin\phi\,\hat{\rho} - \sin\theta\, \hat{\theta} + \cos\theta\, \cos\phi\,\hat{\phi} , \\ \hat{\bf y} &= {\bf j} = \sin\theta\,\sin\phi\,\hat{\rho} + \cos\theta \,\hat{\theta} + \sin\theta\,\cos\phi\, \hat{\phi} , \\ \hat{\bf z} &= {\bf k} = \cos\phi\,\hat{\rho} - \sin\phi\,\hat{\phi} . \end{split} \]
Solve[{rr == Sin[\[Phi]]*Cos[\[Theta]]*x + Sin[\[Phi]]*Sin[\[Theta]]*y + Cos[\[Phi]]*z,
  pp == Cos[\[Phi]]*Cos[\[Theta]]*x + Cos[\[Phi]]*Sin[\[Theta]]*y - Sin[\[Phi]]*z,
  tt == -Sin[\[Theta]]*x + Cos[\[Theta]]*y}, {x, y, z}]
FullSimplify[%]
TableForm[Transpose[%]]
{{x -> -tt Sin[\[Theta]] + Cos[\[Theta]] (pp Cos[\[Phi]] + rr Sin[\[Phi]]), y -> Cos[\[Theta]] (tt + pp Cos[\[Phi]] Tan[\[Theta]] + rr Sin[\[Phi]] Tan[\[Theta]]), z -> rr Cos[\[Phi]] - pp Sin[\[Phi]]}}
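As a quick sanity check (a hedged sketch, with t and p standing for θ and ϕ), the three spherical unit vectors defined above are orthonormal:

rhat = {Cos[t] Sin[p], Sin[t] Sin[p], Cos[p]};
that = {-Sin[t], Cos[t], 0};
phat = {Cos[t] Cos[p], Sin[t] Cos[p], -Sin[p]};
Simplify[Outer[Dot, {rhat, that, phat}, {rhat, that, phat}, 1]]   (* the 3x3 identity matrix *)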
A position vector in spherical coordinates is described in terms of the radial distance ρ and the unit vector ρ̂, which depends on the zenith (polar) angle ϕ and the azimuthal angle θ, as follows:
\[ {\bf r} = \rho\,\hat{\rho} (\theta , \phi ) . \]

The azimuthal angle is recovered from the Cartesian coordinates via \[ \theta = \mbox{sign}(y)\,\arccos \frac{x}{\sqrt{x^2 + y^2}} = \begin{cases} \arctan \left( y/x \right) , & \ \mbox{ if} \quad x > 0 , \\ \arctan \left( y/x \right) + \pi , & \ \mbox{ if} \quad x < 0 \mbox{ and } y \ge 0, \\ \arctan \left( y/x \right) - \pi , & \ \mbox{ if} \quad x < 0 \mbox{ and } y < 0, \\ +\frac{\pi}{2} , & \ \mbox{ if} \quad x = 0 \mbox{ and } y > 0, \\ -\frac{\pi}{2} , & \ \mbox{ if} \quad x = 0 \mbox{ and } y < 0, \\ \mbox{undefined}, & \ \mbox{ if} \quad x = 0 \mbox{ and } y=0. \end{cases} \]
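A hedged numerical check of these conversion formulas in Mathematica (the sample point {1., 2., 2.} is an arbitrary choice):

{x0, y0, z0} = {1., 2., 2.};
rho = Sqrt[x0^2 + y0^2 + z0^2];
phi = ArcCos[z0/rho];       (* zenith angle *)
theta = ArcTan[x0, y0];     (* azimuth; the two-argument ArcTan picks the correct quadrant *)
{rho*Cos[theta]*Sin[phi], rho*Sin[theta]*Sin[phi], rho*Cos[phi]}   (* returns {1., 2., 2.} *)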
   ■
End of Example 2

Dimension

Theorem 2 leads to the following definition.
Let V be an 𝔽-vector space (where 𝔽 is either ℚ or ℝ or ℂ). This vector space is said to be of finite dimension n, or n-dimensional, written as dim𝔽V = dim V = dim(V) = n, if V has a basis consisting of n vectors. If V = {0}, we say V has dimension 0. If V does not have a finite basis, then V is said to be of infinite dimension, or infinite-dimensional.
   
Example 3: We consider the space of all square matrices with real entries, denoted by ℝn×n. The dimension of ℝn×n is n² because this is the number of entries in an n×n matrix. We also consider a subspace, which we denote by K, of all skew symmetric matrices with real entries. Recall that a square matrix A is called skew symmetric when A = −AT; in other words, a skew symmetric matrix is equal to the negative of its transpose.

To illustrate skew symmetric matrices, we present two examples: \[ \begin{bmatrix} \phantom{-}0&1&\phantom{-}2 \\ -1 & 0 &-3 \\ -2&3&\phantom{-}0 \end{bmatrix} \qquad \mbox{and} \qquad \begin{bmatrix} \phantom{-}0&\phantom{-}1&\phantom{-}2&3 \\ -1&\phantom{-}0&\phantom{-}4&5 \\ -2&-4 & \phantom{-}0 &6 \\ -3& -5& -6 & 0\end{bmatrix} . \] As these two examples suggest, a 3×3 skew symmetric matrix is determined by three independent entries and a 4×4 skew symmetric matrix by six. In general, a skew symmetric n × n matrix is determined by n(n −1)/2 entries. Indeed, a skew symmetric matrix must have a zero diagonal, and all other entries are determined by either the upper triangular entries or the lower triangular entries because the two triangles differ only by a sign. So we need to count the entries above the main diagonal.

The first row contains n −1 such entries, the second row contains n −2, and so on. Hence the total number of entries above the main diagonal is \[ m = \sum_{k=1}^{n-1} k = \frac{n\left( n-1 \right)}{2} . \] This is the dimension of the space of all n × n skew symmetric matrices.

Sum[k, {k, 1, n - 1}]
1/2 (-1 + n) n
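A hedged sketch, assuming n = 4: build a generic skew symmetric matrix from its strictly upper triangular entries and count the free parameters.

n = 4;
S = Table[Which[i < j, a[i, j], i > j, -a[j, i], True, 0], {i, n}, {j, n}];
S == -Transpose[S]                        (* True: S is skew symmetric *)
Length[Union[Cases[S, _a, Infinity]]]     (* 6 = n(n-1)/2 free parameters *)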
   ■
End of Example 3

Theorem 3: Let V be an n-dimensional vector space, and let S = { v1, v2, … , vn } be a linearly independent set of n vectors. Then S is a basis of V.

Since the dimension of V is n, the set S must generate V. Otherwise, there would be a vector v ∈ V outside span(S). Such a vector v would be linearly independent of S, and adding it to S would produce a linearly independent set containing n + 1 vectors, contradicting Theorem 1.
   
Example 4: The Pauli matrices are a set of three 2 × 2 complex matrices that are self-adjoint (Hermitian), involutory, and unitary. They are usually denoted by the Greek letter sigma (σ): \[ \sigma_1 = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} , \quad \sigma_2 = \begin{bmatrix} 0& -{\bf j} \\ {\bf j} &0 \end{bmatrix} , \quad \sigma_3 = \begin{bmatrix} 1&\phantom{-}0 \\ 0&-1 \end{bmatrix} , \] where j is the imaginary unit, so j² = −1. These matrices are named after the Austrian-born physicist Wolfgang Pauli (1900--1958). In quantum mechanics, they occur in the Pauli equation, which takes into account the interaction of the spin of a particle with an external electromagnetic field. They also represent the interaction states of two polarization filters for horizontal/vertical polarization, 45° polarization (right/left), and circular polarization (right/left).

Each Pauli matrix is self-adjoint (Hermitian, so σj* = σj), and together with the identity matrix I (sometimes considered as the zeroth Pauli matrix σ₀), the Pauli matrices form a basis for the real vector space of 2 × 2 Hermitian matrices. This means that any 2 × 2 self-adjoint matrix can be written in a unique way as a linear combination of Pauli matrices with real coefficients. Hence, the vector space of self-adjoint 2 × 2 matrices over the field of real numbers has dimension four.    ■
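A hedged Mathematica sketch of such a decomposition; the Hermitian matrix H below is an arbitrary choice, and the coefficients are extracted with the standard (not stated above) trace formula ck = Tr(H σk)/2.

s0 = IdentityMatrix[2];
s1 = {{0, 1}, {1, 0}}; s2 = {{0, -I}, {I, 0}}; s3 = {{1, 0}, {0, -1}};
H = {{2, 1 - 3 I}, {1 + 3 I, -1}};                   (* H equals its conjugate transpose *)
c = Table[Tr[H . s]/2, {s, {s0, s1, s2, s3}}]        (* {1/2, 1, 3, 3/2}, all real *)
H == c[[1]] s0 + c[[2]] s1 + c[[3]] s2 + c[[4]] s3   (* True *)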

End of Example 4

Corollary 1: If a subset of a vector space V has fewer elements than the dimension of V, then this subset does not span V.

Corollary 2: If U is a subspace of the finite-dimensional vector space V, then U is also finite-dimensional and dim(U) ≤ dim(V) with equality if and only if U = V.

Let u1, u2, … , ur be a linearly independent set in U and suppose that dim(V) = n. According to the Steinitz substitution principle, r ≤ n. Now if the span of { u1, u2, … , ur } were smaller than U, then we could find a vector ur+1 in U but not in span{ u1, u2, … , ur }. The new set { u1, u2, … , ur, ur+1 } would also be linearly independent (we leave this fact as an exercise) and r + 1 ≤ n. Since we cannot continue adding vectors indefinitely, we have to conclude that at some point we obtain a basis { u1, u2, … , us } for U. So U is finite-dimensional and, furthermore, s ≤ n, so we conclude that dim(U) ≤ dim(V). Finally, if we had equality, then a basis of U would be the same size as a basis of V. However, Steinitz substitution ensures that any linearly independent set can be expanded to a basis of V; since a basis of U already contains dim(V) vectors, no vectors need to be added, so this basis for U is also a basis for V, whence U = V.
If the cardinality m of a subset S exceeds the dimension n of the vector space, then S is linearly dependent. Further, if m ≥ n, the set may or may not span the vector space.    
Example 5: Let us consider six vectors from ℚ² that we organize as the columns of the matrix \[ {\bf A} = \begin{bmatrix} 1 & 2 & 0 & -1 & 3 & 0 \\ 5 & 2 & -1 & 3 & 4 & 1 \\ \end{bmatrix} . \] Since the number of vectors (six) exceeds the dimension of the vector space (n = 2), Theorem 1 tells us to expect a linear dependence among the column vectors of matrix A.
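A hedged check with Mathematica: the rank of A and its null space, each of whose vectors lists the coefficients of one dependence relation among the columns.

A = {{1, 2, 0, -1, 3, 0}, {5, 2, -1, 3, 4, 1}};
{MatrixRank[A], NullSpace[A]}   (* rank 2, so there are 6 - 2 = 4 independent relations *)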
   ■
End of Example 5

Theorem 4: Let V be an n-dimensional vector space, and let S = { v1, v2, … , vr }, r < n, be a linearly independent set of vectors in V. Then there exist vectors T = { vr+1, vr+2, … , vn } in V such that S ∪ T = { v1, v2, … , vn } is a basis of V.

Since r < n, S cannot form a basis of V. Thus there exists a vector vr+1 ∈ V that does not lie in the subspace generated by S. We claim that this vector vr+1 is linearly independent of S, that is, the relation \[ c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_r {\bf v}_r + c_{r+1} {\bf v}_{r+1} = {\bf 0} \] implies c1 = c2 = ⋯ = cr = cr+1 = 0. Since the elements of S are linearly independent, it suffices to show that cr+1 = 0. If we assume the opposite, cr+1 ≠ 0, we can express the vector as a linear combination: \[ {\bf v}_{r+1} = - \frac{1}{c_{r+1}} \left( c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_r {\bf v}_r \right) , \] which contradicts our assumption that this vector does not lie in the space generated by { v1, v2, … , vr }. Now assume that vr+1, vr+2, … , vs have been found so that { v1, v2, … , vr, vr+1, … , vs } is linearly independent. Theorem 1 implies that s ≤ n. Actually, s = n, because this process of choosing a linearly independent vector outside the current span can stop only when the span is all of V, so it produces the maximum number of linearly independent vectors. Theorem 3 then ensures that the set S ∪ T forms a basis.
   
Example 6: For the two given vectors a = (1, 2, 1) and b = (-1, 0, 2), find a third vector that, together with them, forms a basis of ℝ³.

To find the third linearly independent vector, we consider the matrix whose columns are a, b, and e₁ = i, e₂ = j, e₃ = k. So this matrix becomes \[ {\bf A} = \begin{bmatrix} 1 & -1 & 1 & 0 & 0 \\ 2 & 0 & 0 & 1 & 0 \\ 1 & 2 & 0 & 0 & 1 \end{bmatrix} . \] Using Gaussian elimination, we transform matrix A into the row-equivalent echelon form \[ {\bf U} = \begin{bmatrix} 1 & -1 & 1 & 0 & 0 \\ 0 & 2 & -2 & 1 & 0 \\ 0 & 0 & 2 & -\frac{3}{2} & 1 \end{bmatrix} . \] The pivots occupy the first three columns, so the columns a, b, e₁ are linearly independent. Hence we may take e₁ = (1, 0, 0) as the third vector, and {a, b, e₁} is a basis of ℝ³.    ■
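The pivot columns can also be confirmed with a quick, hedged Mathematica computation:

A = Transpose[{{1, 2, 1}, {-1, 0, 2}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}}];
RowReduce[A]   (* the leading 1's land in columns 1, 2, 3, so {a, b, e1} is a basis of R^3 *)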

End of Example 6

Theorem 5: Let V be a finite dimensional vector space over a field 𝔽 and U be a subspace of V. Then dimU ≤ dimV. Equality holds only when U = V.

Let β = {b1, b2, … , bn} be a basis of the vector space V, and let α be a basis of the subspace U. Then α is a linearly independent set in U, and hence α is also a linearly independent set in V. Using Theorem 8 from the previous section, we conclude that the number of elements in α is at most n, i.e., dimU ≤ dimV.

When dimU = dimV = n, a basis α of U is a linearly independent subset of V containing n vectors. Theorem 3 then forces us to conclude that α is also a basis of V. Hence span(α) = U = V, which implies that U = V.

   
Example 7: Let us consider the vector space ℝn×n of square n × n matrices with entries from the field of real numbers. The dimension of this space is n² because there are exactly n² linearly independent matrices Ei,j, each having zero entries except for a single 1 (or any nonzero number) in position (i, j); the (k, ℓ) entry of Ei,j is the product of Kronecker deltas δi,kδj,ℓ. These matrices constitute a basis β = {Ei,j : 1 ≤ i, j ≤ n} of the space of square matrices.

There is a subset of ℝn×n, known as the general linear group of degree n, that consists of all n×n invertible matrices. This set is denoted by GL(n, ℝ) or GLn(𝔽) or just GLn when the field is clear from the context. It is not a vector space with respect to addition of matrices because a sum of two invertible matrices may be singular. For instance, \[ \begin{bmatrix} 1&0 \\ 0 &1 \end{bmatrix} + \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix} . \] The general linear group is a group with respect to matrix multiplication, but we do not consider this operation here. Although GL(n, ℝ) is not a subspace, it spans all of ℝn×n: every invertible matrix is a linear combination of the basis matrices Ei,j, and conversely every Ei,j is a difference of two invertible matrices (for example, Ei,j = (I + Ei,j) − I). This allows us to claim that the subset GL(n, ℝ) is n²-dimensional, even though GL(n, ℝ) ⊂ ℝn×n is a proper subset. Hence, the proper subset GL(n, ℝ) has the same dimension as the whole space ℝn×n.

The set of invertible matrices GLn(ℝ) is dense in the space of all square matrices ℝn×n: every matrix A is a limit of invertible matrices A + εI, because det(A + εI) vanishes for only finitely many values of ε. However, the set of all singular matrices, which we denote by S, is not a subspace of ℝn×n, because it is not closed under addition (we consider only matrices over the field of real numbers in this example).
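A hedged Mathematica illustration of this failure of closure; the two matrices are an arbitrary choice.

m1 = {{1, 0}, {0, 0}}; m2 = {{0, 0}, {0, 1}};
{Det[m1], Det[m2], Det[m1 + m2]}   (* {0, 0, 1}: both summands are singular, yet the sum is invertible *)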

In the case of 2 × 2 matrices, ℝ2×2, the set of singular matrices consists of matrices of the form \[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} , \qquad a\,d = b\,c . \] This single condition on the four parameters 𝑎, b, c, d eliminates one of them, so the singular 2 × 2 matrices form a three-dimensional surface (a cone, not a subspace) inside the four-dimensional space ℝ2×2. Moreover, a matrix can be multiplied by any nonzero number without changing its singularity (det = 0), so one of the remaining parameters can be normalized to 1. Therefore, up to scaling, only two free parameters remain, and the singular 2 × 2 matrices form a two-parameter family.    ■

End of Example 7

Theorem 6: Let U be a subspace of a finite dimensional vector space V over a field 𝔽. Then dim(V/U) = dimV − dimU.

Let dimV = n, dimU = m, and let β = {v1, v2, … , vm} be a basis of U. Since {v1, v2, … , vm} is a linearly independent subset of V, the set β can be extended to a basis of V, say \[ \left\{ {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_m , {\bf v}_{m+1} , \ldots , {\bf v}_n \right\} . \] Now consider the n − m vectors {vm+1 + U, vm+2 + U, … , vn + U} in the quotient space V/U. Our claim is that the set α = {vm+1 + U, vm+2 + U, … , vn + U} is a basis of V/U, and then dim(V/U) = n − m = dimV − dimU. First we show that α spans V/U. Let v + U ∈ V/U. Then v ∈ V and there exist scalars c1, c2, … , cn ∈ 𝔽 such that v = c1 v1 + c2 v2 + ⋯ + cn vn . Therefore, \begin{align*} {\bf v} + U &= \left( c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_m {\bf v}_m \right) + \left( c_{m+1} {\bf v}_{m+1} + \cdots + c_n {\bf v}_n \right) + U \\ &= \left( c_{m+1} {\bf v}_{m+1} + c_{m+2} {\bf v}_{m+2} + \cdots + c_n {\bf v}_n \right) + U \\ &= c_{m+1} \left( {\bf v}_{m+1} + U \right) + c_{m+2} \left( {\bf v}_{m+2} + U \right) + \cdots + c_{n} \left( {\bf v}_{n} + U \right) , \end{align*} because c1 v1 + c2 v2 + ⋯ + cm vm ∈ U. This shows that α spans V / U. Next we show that α is linearly independent. Let km+1, km+2, … , kn ∈ 𝔽 be such that \[ k_{m+1} \left( {\bf v}_{m+1} + U \right) + k_{m+2} \left( {\bf v}_{m+2} + U \right) + \cdots + k_{n} \left( {\bf v}_{n} + U \right) = U \] or \[ \left( k_{m+1} {\bf v}_{m+1} + k_{m+2} {\bf v}_{m+2} + \cdots + k_n {\bf v}_n \right) + U = U . \] This implies that km+1 vm+1 + km+2 vm+2 + ⋯ + kn vn ∈ U. Therefore there exist δ1, δ2, … , δm ∈ 𝔽 such that \[ k_{m+1} {\bf v}_{m+1} + k_{m+2} {\bf v}_{m+2} + \cdots + k_n {\bf v}_n = \delta_1 {\bf v}_1 + \delta_2 {\bf v}_2 + \cdots + \delta_m {\bf v}_m . \] This means \[ \delta_1 {\bf v}_1 + \delta_2 {\bf v}_2 + \cdots + \delta_m {\bf v}_m + \left( - k_{m+1} \right) {\bf v}_{m+1} + \cdots + \left( - k_n \right) {\bf v}_n = {\bf 0} . \] Since {v1, v2, … , vm, vm+1, … , vn} is a basis of V, we conclude that δ1 = δ2 = ⋯ = δm = −km+1 = ⋯ = −kn = 0, and hence in particular km+1 = km+2 = ⋯ = kn = 0; so α is linearly independent. This completes the proof of the theorem.
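As a quick illustration (not part of the proof), take V = ℝ³ and U = span{e₁}; then the cosets e₂ + U and e₃ + U form a basis of the quotient space, so \[ \dim \left( \mathbb{R}^3 / U \right) = \dim \mathbb{R}^3 - \dim U = 3 - 1 = 2 . \]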
   
Example 8:    ■
End of Example 8

Corollary 1: Let V be a vector space over a field 𝔽 and W be a subspace of V. Let W⁰ = {φ ∈ V* : φ(v) = 0    for all v ∈ W} be the annihilator of the subspace W in the dual space V*. Then \[ \dim W + \dim W^0 = \dim V. \]

Let { w₁, w₂, … , wm } be a basis of W. It can be extended to a basis of V, say β = { w₁, w₂, … , wm, wm+1, … , wn }. Now let β* = { w₁*, w₂*, … , wn* } be the basis of V* dual to β. Then for m + 1 ≤ k ≤ n and 1 ≤ j ≤ m, we have wk*(wj) = 0. This shows that wk*(w) = 0 for all w ∈ W and m + 1 ≤ k ≤ n. Hence, { wm+1*, … , wn* } ⊆ W⁰. Now we show that { wm+1*, … , wn* } is a basis of W⁰.

This is a linearly independent subset with n − m elements. Suppose that w* is an arbitrary element of W⁰. Since w* ∈ V* and β* is a basis of V*, we can write w* = w*(w₁) w₁* + w*(w₂) w₂* + ⋯ + w*(wn) wn*. Since wj ∈ W for all 1 ≤ j ≤ m, the first m coefficients vanish and we arrive at \( \displaystyle \quad {\bf w}^{\ast} = \sum_{i=m+1}^n {\bf w}^{\ast} \left( {\bf w}_i \right) {\bf w}_i^{\ast} . \quad \) This yields that { wm+1*, … , wn* } spans W⁰. Hence, dimW⁰ = n − m = n − dimW.
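For instance (a hedged illustration), take V = ℝ³ with the standard dual basis {e₁*, e₂*, e₃*}, where eᵢ*(x) = xᵢ, and let W = span{e₁, e₂}. Then \[ W^0 = \mbox{span} \{ {\bf e}_3^{\ast} \} , \qquad \dim W + \dim W^0 = 2 + 1 = 3 = \dim \mathbb{R}^3 . \]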

   
Example 9: Let A be any rectangular array of coefficients for a linear system of m equations in n unknowns. Using row operations, we can find a reduced row-equivalent form of A, say A ∼ U. We have seen that their row spans are the same, span(rows of A) = span(rows of U). The nonzero rows of U generate this space, and since they are independent, they constitute a basis of the row space: \[ r = \dim\,\mbox{span}(\mbox{rows of } {\bf A}) . \] The row reduction procedure is not unique and may lead to another triangular matrix; however, any row reduction leads to the same number of pivots and so to the same number of linearly independent rows. This proves that the rank r (the number of linearly independent rows) is independent of the particular method of reduction used to find it, and the row rank of A is unambiguously defined by \[ \mbox{rank}({\bf A}) = \dim\,\mbox{span}(\mbox{rows of }{\bf A}) . \] The pivot columns of a matrix A are evident once A has been reduced to echelon form. But be careful to use the pivot columns of A itself for a basis of Col A: row operations can change the column space of a matrix, and the columns of an echelon form B of A are often not in the column space of A. For instance, if every column of an echelon form B has a zero in its last entry, those columns cannot span the column space of A whenever some column of A has a nonzero last entry.    ■
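A hedged Mathematica illustration of this remark; the matrix A is an arbitrary choice.

A = {{1, 2}, {2, 4}, {0, 1}};
B = RowReduce[A]
{MatrixRank[A], MatrixRank[B]}   (* {2, 2}: the rank is unchanged *)
(* every column of B ends in 0, while the second column of A does not,
   so the columns of B cannot span the column space of A *)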
End of Example 9

 

Complex Dimension


The dimension of a vector space V as the cardinality (i.e., the number of vectors) of a basis of V over its field 𝔽 was introduced by the German mathematician Georg Hamel (1877--1954). The dimension of the vector space V over the field 𝔽 is written as dim𝔽(V), read "dimension of V over 𝔽".

In mathematics, there are several varieties of dimension used to measure the size of different objects. Although arbitrary fields are interesting and important in applications inside and outside of mathematics, we focus on two fields, the real numbers (ℝ) and the complex numbers (ℂ), partly because of their roles in physics and engineering. Real numbers are widely used in classical mechanics and geometry, while complex numbers underlie the theories of electricity, magnetism, and quantum mechanics, all of which use linear algebra in one way or another. Therefore, these two fields have a special place in linear algebra.

It should be noted that we use real and complex vector spaces mainly for theoretical analysis. In practice, we utilize only a small part of these fields, essentially the rational numbers (ℚ), because humans and computers cannot operate with irrational numbers exactly.

Recall that a complex number is a vector in ℝ² written in the form z = x + jy, with real components x, y ∈ ℝ. These real components are traditionally called the real part and the imaginary part of z, respectively. The notation x = Rez = ℜz and y = Imz = ℑz is widely used. The main reason for this form, instead of the vector form z = ix + jy, is the multiplication operation defined by \[ {\bf i}^2 = 1, \quad {\bf i}*{\bf j} = {\bf j}*{\bf i} = {\bf j} , \quad {\bf j}^2 = -1. \] Then we can drop i because it acts as 1 for multiplication. Ordinary vector addition together with the multiplication just defined makes ℝ² into the field of complex numbers, denoted by ℂ.
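A hedged sketch of this multiplication rule on pairs (x, y) ∈ ℝ², checked against Mathematica's built-in complex arithmetic; the sample numbers are an arbitrary choice.

mult[{x1_, y1_}, {x2_, y2_}] := {x1 x2 - y1 y2, x1 y2 + y1 x2};
mult[{3, 2}, {1, -4}]         (* {11, -10} *)
ReIm[(3 + 2 I) (1 - 4 I)]     (* {11, -10}, in agreement *)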

There is one more operation in the field of complex numbers: involution, or complex conjugation. There is no universal notation for the complex conjugate. In mathematics it is denoted by an overline, \( \displaystyle \overline{a + {\bf j}\,b} = a - {\bf j}\,b . \) In physics and engineering, the complex conjugate is denoted by an asterisk, \( \displaystyle (a + {\bf j}\,b)^{\ast} = a - {\bf j}\,b . \) In this tutorial, we will mostly follow the latter notation. For instance, \( \displaystyle (3 + 2\,{\bf j})^{\ast} = 3 - 2\,{\bf j} . \)

Theorem 7: If V is a complex vector space, then \[ \dim_{\mathbb{R}}V = 2\,\dim_{\mathbb{C}} V . \]

Let V be a complex vector space with basis β = {b1, b2, … , bn}. Correspondingly, we introduce the set jβ = {j bi : bi ∈ β}; note that multiplying every element of a basis by a nonzero scalar produces another basis. We claim that S = β ∪ jβ is linearly independent over ℝ.

Suppose b1, b2, … , bn, bn+1, … , bn+m are elements in S and that \[ c_1 {\bf b}_1 + c_2 {\bf b}_2 + \cdots + c_n {\bf b}_n + {\bf j}\,c_{n+1} {\bf b}_{n+1} + \cdots + {\bf j}\,c_{n+m} {\bf b}_{n+m} = {\bf 0} , \] for real coefficients cj. Since the coefficients cj and jcj are also complex, we can re-index the basis elements bi if necessary to rewrite the previous equation in the form \[ z_1 {\bf b}_1 + z_2 {\bf b}_2 + \cdots + z_k {\bf b}_k = {\bf 0} , \] where zi = ci + jcn+i and b1, b2, … , bk (k ≤ n) are distinct elements of the basis β. Since the vectors from β are linearly independent over ℂ, the coefficients in the equation above are all zero, zi = 0. This implies that all coefficients ci are also zero, which is enough to prove our claim.

A similar argument shows that any linear combination of elements of β over ℂ can be written as a linear combination of elements of β ∪ jβ over ℝ. This proves that β ∪ jβ is a basis for V over ℝ, and Theorem 7 follows.

   
Example 10: Over the field of complex numbers, the vector space \( \mathbb{C} \) of all complex numbers has dimension 1 because its basis consists of one element \( \{ 1 \} . \)

Over the field of real numbers, the vector space \( \mathbb{C} \) of all complex numbers has dimension 2 because its basis consists of two elements \( \{ 1, {\bf j} \} . \)

In ℂ², we consider two vectors that form a basis β = {(2, j), (j, −2)} ⊆ ℂ². To show their linear independence, we consider \[ z_1 \left( 2, {\bf j} \right) + z_2 \left( {\bf j}, -2 \right) = \left( 0, 0 \right) = {\bf 0} , \] where z₁ and z₂ are some complex numbers. If this linear combination is equal to the zero vector, then each of its components must be zero. We can solve the following system of equations over ℂ to find the coefficients \[ \begin{split} 2\,z_1 + z_2 {\bf j} &= 0, \\ z_1 {\bf j} - 2\,z_2 &= 0 . \end{split} \] Multiplying the first equation by j and the second equation by −2 and adding the results, we obtain \[ 3\, z_2 = 0 \qquad \Longrightarrow \qquad z_2 = 0. \] The second equation then yields z₁ = 0. We check with Mathematica:

Solve[{2*z1 + I*z2 == 0, z1*I - 2*z2 == 0}, {z1, z2}]
{{z1 -> 0, z2 -> 0}}
This establishes that β is linearly independent. Since ℂ² is 2-dimensional over ℂ, β must be a basis for ℂ².

Now consider another basis jβ = {(2j, −1), (−1, −2j)}. Theorem 7 guarantees that β ∪ jβ is a basis for ℂ² as a vector space over ℝ. This means that we must be able to write an arbitrary vector, for instance (3, 2), as a linear combination of elements of \[ \beta \cup {\bf j}\beta = \left\{ \left( 2, {\bf j} \right) , \left( {\bf j}, -2 \right) ,\left( 2{\bf j}, -1 \right) , \left( -1, -2{\bf j} \right) \right\} . \] Using real coefficients c₁, c₂, c₃, c₄, we get the relation \[ c_1 \left( 2, {\bf j} \right) + c_2 \left( {\bf j}, -2 \right) + c_3 \left( 2{\bf j}, -1 \right) + c_4 \left( -1, -2{\bf j} \right) = (3, 2) . \] We can find the coefficients by solving the following system of equations: \[ \begin{split} 2\, c_1 + c_2 {\bf j} + c_3 2{\bf j} - c_4 &= 3, \\ c_1 {\bf j} -2\,c_2 - c_3 -c_4 2{\bf j} &= 2 . \end{split} \] Separating real and imaginary parts in both equations yields \[ \begin{split} 2\, c_1 - c_4 &= 3, \\ c_2 + 2\,c_3 &= 0, \\ -2\,c_2 - c_3 &= 2, \\ c_1 -2\, c_4 &= 0. \end{split} \] Mathematica solves this system of equations in the blink of an eye:

Solve[{2*c1 == c4 + 3, c2 + 2*c3 == 0, 2*c2 + c3 == -2, c1 - 2*c4 == 0}, {c1, c2, c3, c4}]
{{c1 -> 2, c2 -> -(4/3), c3 -> 2/3, c4 -> 1}}
\[ c_1 = 2, \quad c_2 = - \frac{4}{3} , \quad c_3 = \frac{2}{3}, \quad c_4 = 1. \]    ■
End of Example 10
Theorem 8: If a vector space V over a field 𝔽 has a basis containing m vectors, where m is a positive integer, then any set containing n vectors, with n > m, in V is linearly dependent.
Suppose {v1, v2, … , vm} is a basis of V and let {b1, b2, … , bn} be any subset of V containing n vectors (n > m). Since {v1, v2, … , vm} spans V, there exist scalars ci,j ∈ 𝔽 such that \[ {\bf b}_j = \sum_{i=1}^m c_{i,j} {\bf v}_i , \] for each j with 1 ≤ j ≤ n.

In order to show that {b1, b2, … , bn} is linearly dependent, we have to find k1, k2, … , kn ∈ 𝔽, not all zero, such that \[ \sum_{j=1}^n k_j {\bf b}_j = {\bf 0} . \] This is equivalent to \[ \sum_{j=1}^n k_j \left( \sum_{i=1}^m c_{i,j} {\bf v}_i \right) = {\bf 0} \qquad \iff \qquad \sum_{i=1}^m \left( \sum_{j=1}^n k_j c_{i,j} \right) {\bf v}_i = {\bf 0} . \] Since the vectors {v1, v2, … , vm} form a basis for V, the latter equation holds exactly when all coefficients are zero. So it suffices to solve \( \displaystyle \sum_{j=1}^n c_{i,j} k_j = 0 \) for each i such that 1 ≤ i ≤ m. This is a system of m homogeneous linear equations in n unknowns with n > m, so there exists a nontrivial solution, say p1, p2, … , pn. This ensures that there exist scalars p1, p2, … , pn, not all zero, such that \( \displaystyle \sum_{j=1}^n p_j {\bf b}_j = {\bf 0} \), and hence the set {b1, b2, … , bn} is linearly dependent.

   
Example 11: The infinite set of monomials \( \beta = \left\{ 1, x, x^2 , \ldots , x^n , \ldots \right\} \) forms a basis for the vector space of all polynomials.

Since β generates the set of all polynomials 𝔽[x], we only need to show that β is linearly independent. Suppose, on the contrary, that there exists a finite subset S ⊂ β that is linearly dependent. Let xm be the highest power of x in S and let xn be the lowest power of x in S. Then there are scalars cn, … , cm, not all zero, such that \[ c_n x^n + c_{n+1} x^{n+1} + \cdots + c_m x^m = 0 . \] The polynomial on the left-hand side vanishes for all values of x, so it has infinitely many roots. However, a nonzero polynomial of degree at most m cannot have more than m roots. Therefore, this polynomial is identically zero and all its coefficients are zero, contradicting our assumption. Hence β is linearly independent.

Another proof is more tedious: we can assign to x several distinct values, making sure that the corresponding (Vandermonde) coefficient matrix is nonsingular (invertible). Then we get a homogeneous system of equations with an invertible matrix, which has only the zero solution.

Let ℝE[x] and ℝO[x] be the sets of even and odd polynomials, respectively: \[ \mathbb{R}_{E}[x] = \left\{ p(x) \in \mathbb{R}[x]\, : \ p(-x) = p(x) \right\} \qquad \mbox{and} \qquad \mathbb{R}_{O}[x] = \left\{ p(x) \in \mathbb{R}[x]\, : \ p(-x) = -p(x) \right\} . \] We show that ℝE[x] is a subspace of ℝ[x] by checking the two closure properties:

  1. If p, q ∈ ℝE[x], then \[ \left( p + q \right) (-x) = p(-x) + q(-x) = p(x) + q(x) = \left( p + q \right) (x) . \] so p + q ∈ ℝE[x] too.
  2. If p ∈ ℝE[x] and λ ∈ ℝ, then \[ \left( \lambda\,p \right) (-x) = \lambda\,p(-x) = \lambda\, p(x) = \left( \lambda\,p \right) (x) , \] so λp ∈ ℝE[x] too.

To find a basis of ℝE[x], we first notice that βE = { 1, x2, x4, … } ⊂ ℝE[x] because every element of βE is an even function. This set is also linearly independent since it is a subset of the linearly independent set β. To see that it spans ℝE[x], we notice that if \[ p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots \in \mathbb{R}_E , \] then p(x) + p(−x) = 2 p(x), so \begin{align*} 2\,p(x) &= p(x) + p(-x) \\ &= \left( a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots \right) + \left( a_0 - a_1 x + a_2 x^2 - a_3 x^3 + \cdots \right) \\ &= 2 \left( a_0 + a_2 x^2 + a_4 x^4 + \cdots \right) \in \mbox{span}\left( 1, x^2 , x^4 , \ldots \right) . \end{align*} It follows that p(x) = 𝑎₀ + 𝑎₂x2 + 𝑎₄x4 + ⋯ ∈ span(1, x2, x4, …), so the set { 1, x2, x4, … } spans ℝE[x], and is thus a basis of it.

A similar argument shows that ℝO[x] is a subspace of ℝ[x] with basis { x, x3, x5, … }.
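A hedged Mathematica check of this even/odd splitting; the polynomial p below is an arbitrary choice.

p[x_] := 1 + 2 x + 3 x^2 - 5 x^3;
Expand[(p[x] + p[-x])/2]   (* 1 + 3 x^2, the even part *)
Expand[(p[x] - p[-x])/2]   (* 2 x - 5 x^3, the odd part *)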

You know from calculus that the cosine function has the expansion \[ \cos x = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots = \sum_{n\ge 0} (-1)^n \frac{x^{2n}}{(2n)!} . \] Does this mean that the cosine function belongs to ℝE[x]? The answer is negative because the vector space ℝE[x] contains only linear combinations of finitely many terms. The cosine expansion involves an infinite summation, which is not defined in a vector space.    ■

End of Example 11

Infinite Dimensional Spaces

There is no restriction against using infinite dimensional spaces, which are generated by infinitely many linearly independent vectors. For example, the set V = ℭ(ℝ) of all continuous real-valued functions ℝ ↦ ℝ is a vector space. The zero function is the function that vanishes identically:
\[ f = 0 \qquad \iff \qquad f(x) = 0 , \quad \forall x \in \mathbb{R} . \]
We add real-valued functions as follows
\[ \left( f + g \right) (x) = f(x) + g(x) , \quad \forall x \in \mathbb{R} . \]
Multiplication by a real number is defined similarly
\[ \left( k\,f \right) (x) = k\,f (x) , \qquad \forall x \in \mathbb{R} . \]
So a function f : ℝ ↦ ℝ may also be called a vector when considered as an element of the vector space ℭ(ℝ). Even its subspace ℭ∞(ℝ) of all infinitely differentiable functions is so huge that we do not know any basis for it. It has a subspace that you studied in a second calculus course: the space of holomorphic (analytic) functions. This is the class of functions that are represented by convergent power series
\[ f(x) \, \sim \,\sum_{n\ge 0} c_n x^n . \]
However, infinite summation of nonzero elements is not permitted in vector spaces, so this space of holomorphic functions is not suitable for our topic. The simplest, and probably also the most natural, infinite-dimensional space is the vector space of polynomials.
A polynomial in one variable x is a finite sum of multiples of the monomials xn, n ∈ ℕ = {0, 1, 2, …}. The set of all polynomials in variable x is denoted by ℝ[x].
By definition, a polynomial is a linear combination of monomials xn:
\[ p(x) = \sum_{\mbox{finite}} a_j x^j , \qquad a_j \in \mathbb{R} . \]
Hence, integral powers of x constitute a basis in ℝ[x]:
\[ \left\{ x^0 = 1, x, x^2 , x^3 , \ldots , x^n , \ldots \right\} . \]
For any polynomial p(x) there is a maximal power in \( \displaystyle p(x) = \sum_{\mbox{finite}} a_j x^j \) that occurs with a nonzero coefficient; this maximal power is the degree of p(x). Here are some polynomials:
\[ 2\,x^5 - \pi, x^3 + 10^4 x^2 - 3.1415926\,x + e^2 , \qquad \frac{x^{n+1} -1}{x-1} = 1 + x + x^2 + \cdots + x^n . \]
However, the following expressions are not polynomials
\[ \frac{1}{1-x} = 1 + x + x^2 + \cdots , \quad |x| < 1; \qquad \ln (1-x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots = \sum_{n\ge 1} (-1)^{n+1} \frac{x^n}{n} . \]
   
Example 12: Consider the vector space c of all sequences a = (𝑎0, 𝑎1, 𝑎2, … , 𝑎n, …) of scalars. Addition of sequences is defined componentwise \[ {\bf a} + {\bf b} = \left( a_n \right)_{n\geqslant 0} + \left( b_n \right)_{n\geqslant 0} = \left( a_n + b_n \right)_{n\geqslant 0} . \] Multiplication of sequences by a scalar is similarly defined componentwise \[ k\,{\bf a} = k \left( a_n \right)_{n\geqslant 0} = \left( k\,a_n \right)_{n\geqslant 0} , \qquad k \in \mathbb{R} . \] This vector space c has a subspace, denoted by c₀, consisting of the sequences that are eventually 0. The latter is isomorphic to the vector space of polynomials ℝ[x]. The particular sequences \[ {\bf e}_0 = \left( 1\ 0\ 0 \ 0 \ \cdots \right) , \quad {\bf e}_1 = \left( 0\ 1\ 0 \ 0 \ \cdots \right) , \quad {\bf e}_2 = \left( 0\ 0\ 1 \ 0 \ \cdots \right) , \quad \ldots , \] are linearly independent. Moreover, they constitute a basis of the space c₀, but not of c. The larger space c also has a basis, but it is not possible to exhibit one explicitly.    ■
End of Example 12
The next example shows that it is not always possible to index a basis of an infinite-dimensional vector space with the set ℕ ={0, 1, 2, …} of natural numbers.
   
Example 13: A rational function f = p/q is the quotient of two polynomials p and q, where q ≠ 0 is not the zero polynomial. We identify a polynomial p with the quotient p/1, and two quotients p₁/q₁ and p₂/q₂ are identified when p₁q₂ = q₁p₂.

Each rational function has a representation f = p/q in simplified form, where the polynomials p and q are relatively prime (have no common factor). In this form, q is uniquely determined up to multiplication by a nonzero constant, and the finite set of its zeroes is known as the set of poles Pf of f. The value f(x) = p(x)/q(x) is well-defined provided x is not in this set Pf, so that f defines a map \[ \mathbb{R} \setminus P_f \,\mapsto \,\mathbb{R} \] having for its domain the complement of its set of poles. Since the real (or complex) field is infinite, the common domain of two rational functions is also infinite, and with the usual definitions of sum and multiplication by scalars, the set of real (or complex) rational functions is a vector space.    ■

End of Example 13

We conclude this section with a proof that every vector space has a basis. Since it is based on Zorn's lemma, we need to review the associated terminology.

Let S be a set with a binary relation that we denote by "≤". If an ordered pair (𝑎, b) belongs to the relation, we write 𝑎 ≤ b. A relation ≤ is an ordering on S provided it is

  1. transitive, meaning that if 𝑎, b, c in S satisfy 𝑎 ≤ b and b ≤ c, then 𝑎 ≤ c; and
  2. antisymmetric, meaning that if 𝑎, b in S satisfy 𝑎 ≤ b and b ≤ 𝑎, then 𝑎 = b.
In this case, (S, ≤) is a partially ordered set or a poset.    
Example 14: Let us consider the three element set A = {𝑎, b, c}. The set of all subsets of A consists of 2³ = 8 elements: \[ S = \left\{ \varnothing , \{a\}, \{b\}, \{c\}, \{a,b\}, \{a,c\} , \{b,c\}, \{a,b,c\} \right\} . \] Then (S, ⊆) with the subset relation ⊆ is a poset. For instance, ∅ ⊆ {b} and {𝑎} ⊆ {𝑎, b}. Not every pair of elements of S is comparable; for instance, neither {𝑎} ⊆ {b} nor {b} ⊆ {𝑎} holds.    ■
End of Example 14
A partial order in which any two elements are comparable is called a total ordering on S: 𝑎 ≤ b or b ≤ 𝑎 for every 𝑎, b in S.
A chain in a poset (S, ≤) is a totally ordered subset of S. Considering again the poset (S, ⊆) in Example 14, we can identify {∅, {𝑎}, {𝑎, b}, {𝑎, b, c}} as a chain. Every poset has chains, and any subset of a chain is also a chain. If C ⊆ S is a chain and there is b in S such that 𝑎 ≤ b for all 𝑎 in C, then b is an upper bound for C in S. For example, considering the chain {∅, {𝑎}} associated to Example 14, we see that both {𝑎} and {𝑎, b} are upper bounds. An upper bound for a chain is an element of the ambient poset; it need not be in the chain itself.

Zorn’s Lemma says that if every chain in a poset (S, ≤) has an upper bound in S, then S has a maximal element, that is, m in S such that if m ≤ 𝑎 for any 𝑎 in S, then 𝑎 = m. The poset in Example 14 has maximal element {𝑎, b, c}.

Theorem 9: Any linearly independent set in a vector space is a subset of a basis for the space.
Let V be a vector space and let S be a linearly independent subset of V. Let 𝒮 be the collection of all linearly independent sets in V that contain S as a subset. Since S belongs to 𝒮, 𝒮 is nonempty, and (𝒮, ⊆) is a poset.

Let ₲ be a chain in 𝒮 and let S₁ and S₂ be elements of ₲: S₁ and S₂ are linearly independent subsets of V, each containing S as a subset. As elements of a chain, either S₁ ⊆ S₂ or S₂ ⊆ S₁.

Let G be the union of the sets in ₲: G is a set of vectors, each of which belongs to a linearly independent set in a chain of linearly independent sets, each of which contains S as a subset.

We claim that G is linearly independent.

Suppose that there are distinct v₁ , … , vk in G and scalars c₁ , … , ck such that \[ c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_k {\bf v}_k = {\bf 0} . \] As an element of G, each vi belongs to some Si in ₲. Since ₲ is a chain, we may re-index so that S₁ ⊆ S₂ ⊆ ⋯ ⊆ Sk. Then for i = 1, … , k, vi belongs to Sk. Since Sk is linearly independent, ci = 0 for i = 1, … , k. This is enough to establish that G is itself linearly independent and thus that G belongs to 𝒮, the collection of all linearly independent subsets of V containing S as a subset.

Since any element of ₲ is a subset of G, G is an upper bound for ₲. As every chain in 𝒮 thus has an upper bound, Zorn’s Lemma guarantees the existence of a maximal element, β, in 𝒮. As a maximal linearly independent subset of V, β spans V (otherwise a vector outside span(β) could be adjoined, contradicting maximality), so β is a basis for V. Since S ⊆ β, the proof of the theorem is complete.

Theorem 10: Any spanning set for a vector space has a subset that is a basis for the space.
Let S be a spanning set for a vector space V. Let 𝒮 be the collection of linearly independent subsets of S, and proceed as in the proof of the previous theorem.

Since Theorem 9 holds in every vector space, we have established that every vector space has a basis.

Knowing that an infinite-dimensional vector space has a basis is often the best we can do. There are no algorithms for constructing bases in general.

The dimension of a vector space is the cardinality of any basis for the space.

  1. Let V be a finite dimensional vector space over a field 𝔽, and U₁, U₂, U₃ be any three subspaces of V. Then dim(U₁ + U₂ + U₃) ≤ dimU₁ + dimU₂ + dimU₃ − dim(U₁ ∩ U₂) − dim(U₂ ∩ U₃) − dim(U₁ ∩ U₃) + dim(U₁ ∩ U₂ ∩ U₃).
  2. Let ℂ, ℝ, and ℚ denote the field of complex numbers, real numbers and rational numbers, respectively. Show that
    1. ℂ is an infinite dimensional vector over ℚ.
    2. ℝ is an infinite dimensional vector over ℚ.
    3. The set {α + jβ, γ + jδ}, where j is the imaginary unit, j² = −1, is a basis of ℂ over ℝ if and only if αδ ≠ βγ. Hence, ℂ is a vector space of dimension 2 over ℝ.
  3. Show that the set of all polynomials ℝ[x] is a direct sum of polynomials of even degree ℝE[x] and odd degree ℝO[x].
  4. Let U = span{(1, 3, 2), (3, 2, -1), (1, 2, 1)} and W = span{(1, -3, 2), (2, -2, 4), (1, -2, 2)} be two subspaces of ℝ³. Determine the dimension and a basis for U + W and for U ∩ W.
  5. Let U = { (x₁, x₂, x₃, x₄) ∈ ℝ4 : x₂ + 2 x₃ + x₄ = 0 } and W = { (x₁, x₂, x₃, x₄) ∈ ℝ4 : x₁ + x₄ = 0, x₂ = 3 x₃} be subspaces of ℝ4. Find bases and dimensions of U, W, U ∩ W, and U + W.
  6. Let V = U + W for some finite dimensional subspaces U and W of V. If dimV = dimU + dimW, show that V = U ⊕ W.
  7. Let u ∈ ℝ be a transcendental number. Let U be the set of real numbers which are of the type c0 + c1u + ⋯ + ckuk, ci ∈ ℚ, k ≥ 0. Prove that U is an infinite dimensional subspace of ℝ over ℚ.
  8. Assume that S = {v1, v2, … , vk} ⊆ V, where V is a vector space of dimension n. Answer True/False to the following:
    1. If S is a basis of V then k = n.
    2. If S is linearly independent then kn.
    3. If S spans V, then kn.
    4. If S is linearly independent and k = n, then S spans V.
    5. If S spans V and k = n, then S is a basis for V.
    6. If A is a 5 by 5 matrix and det(A) = 1, then the first 4 columns of A span a 4 dimensional subspace of ℝ5.
  9. Assume that V is a vector space of dimension n and S = {v1, v2, … , vk} ⊆ V. Answer True/False to the following:
    1. S is either a basis or contains redundant vectors.
    2. A linearly independent set contains no redundant vectors.
    3. If V = span{v1, v2, v3} and dim(V) = 2, then {v1, v2, v3} is a linearly dependent set.
    4. A set of vectors containing the zero vector is a linearly independent set.
    5. Every vector space is finite-dimensional.
    6. The set of vectors (j, 0), (0, j), (1, j) in ℂ² contains redundant vectors. Here j is the imaginary unit so j² = −1.
  10. Let p(x) = c₀ + c₁x + ⋯ + cm xm be a polynomial and A be an n×n matrix. Show that there exists a nonzero polynomial p(x) of degree at most n² for which p(A) = 0. Hint: use the standard basis in the space ℝn×n. Compare your answer with the Cayley–Hamilton theorem.
  11. Show that a set of vectors {v1, v2, … , vn} in the vector space V is a basis if and only if it has no redundant vectors and dim(V) ≤ n.
