The Wolfram Mathematica notebook containing the code that produces all the Mathematica output on this web page may be downloaded at this link. Caution: this notebook is meant to be evaluated cell by cell, sequentially, from top to bottom. Because variable names are reused in later evaluations, earlier code may stop rendering properly once subsequent cells have been evaluated. To fix this, return to the first Clear[ ] expression above the expression that no longer works and re-evaluate from that point onward.

$Post := If[MatrixQ[#1], MatrixForm[#1], #1] & (* outputs matrices in MatrixForm *)
Remove[ "Global`*"] // Quiet (* remove all variables *)

All mathematicians believe that
every vector space ≠ {0} has a basis, because this concept makes the theory of vector spaces rich and productive. Although infinite, a vector space may have the property that all of its vectors can be built up from a fixed finite set of its vectors using vector operations---we consider only such vector spaces in this tutorial. A minimal set of linearly independent vectors that generates the vector space is known as its basis. Reconstructing a vector space from a basis is similar to building words or strings from an alphabet in a language, whether human or a computer programming language. Even when a basis is infinite, it contains all the information necessary to rebuild the vector space. While it is true that all finite-dimensional vector spaces have bases, it is much less clear what a basis of an infinite-dimensional vector space would look like. Although we know how to construct bases in some infinite-dimensional spaces, known as Hilbert spaces, we are not successful in the general case.

It turns out that the existence of bases in a general vector space depends on the “axiom of choice” (there are other equivalent statements, such as Zorn's lemma), a mathematical axiom that is independent of the other set-theoretic underpinnings of modern mathematics. In other words, we can neither prove that every vector space has a basis, nor construct a vector space that does not have one. From a practical point of view, this means that it is simply not possible to write down a basis of many vector spaces, like ℭ(ℝ), the space of all continuous real-valued functions.

The concept of basis is crucial for Linear Algebra because it allows us to make the transition from an abstract vector space to the Cartesian products 𝔽n and 𝔽m×n that are suitable for computer programming packages. Life is indeed easier if we accept the existence of bases, and the language of vector spaces is simplified if we accept the following version of the axiom of choice in general form.

Postulate: Any independent subset A of a vector space V ≠ {0} may be completed into a basis of V. In particular, for any nonzero x ∈ V, there is a basis of V containing x.

 

Preliminaries


Since a basis of a vector space is a marriage of two other concepts---linear combinations and linear independence---we first review some basic facts regarding this topic.

Recall that a finite set of vectors {v1, v2, … , vn} in a vector space V is linearly dependent if there are scalars c1, c2, … , cn, at least one of which is not zero, such that

\[ c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_n {\bf v}_n = {\bf 0} . \]
A set of vectors that is not linearly dependent is said to be linearly independent; so for any finite set of vectors, the equality \( \displaystyle c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_n {\bf v}_n = {\bf 0} \) implies c1 = 0, c2 = 0, … , cn = 0. An arbitrary (possibly infinite) set S of vectors in a vector space is linearly dependent if it contains a finite subset of linearly dependent vectors; otherwise, S is linearly independent.

Matrix test for linear independence: A set of n vectors {x1, x2, … , xn} from the Cartesian product 𝔽n of n copies of the field 𝔽 (either ℂ, the complex numbers, or ℝ, the real numbers, or ℚ, the rational numbers) is linearly independent if and only if the square matrix built from these vectors, considered either as column vectors (∈ 𝔽n×1) or as row vectors (∈ 𝔽1×n), is invertible (so its determinant is not zero).

If we combine vectors x1, x2, … , xn into an n × n matrix A = [x1, x2, … , xn] writing them as columns (you could equally write the vectors as rows) and use the vector v = [v1, v2, … , vn] to form the homogeneous equation A v = 0, then we know that A is invertible if and only if the equation A v = 0 has only the trivial solution. Since \[ {\bf A}\,{\bf v} = \left[ {\bf x}_1 \ {\bf x}_2 \ \cdots \ {\bf x}_n \right] \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = v_1 {\bf x}_1 + v_2 {\bf x}_2 + \cdots + v_n {\bf x}_n , \] this statement is equivalent to saying that A v = 0 has only the trivial solution if and only if the vectors x1, x2, … , xn are linearly independent.
Example 01: The function RandomInteger generates uniformly distributed random numbers. We apply this command to generate four vectors of length 4 with integer entries from the interval [0, 10]:
RandomInteger[10, {4, 4}]
{{6, 3, 5, 7}, {0, 7, 10, 4}, {1, 1, 7, 7}, {5, 9, 8, 4}}
Then we build a 4 × 4 matrix using these vectors first as rows and then as columns. \[ {\bf A} = \begin{bmatrix} 6&3&5&7 \\ 0&7&10&4 \\ 1&1&7&7 \\ 5&9&8&4 \end{bmatrix} , \qquad {\bf B} = {\bf A}^{\mathrm T} = \begin{bmatrix} 6&0&1&5 \\ 3&7&1&9 \\ 5&10&7&8 \\ 7&4&7&4 \end{bmatrix} . \]
A = {{6, 3, 5, 7}, {0, 7, 10, 4}, {1, 1, 7, 7}, {5, 9, 8, 4}}
B = Transpose[A]
Now we check with Mathematica whether the matrices A and B are invertible by evaluating the determinant (note that det B = det A because B = AT):
Det[A]
0
Therefore, according to the Matrix test, these four vectors are linearly dependent.
End of Example 01
Here are some important facts regarding dependent and independent sets.
  1. The empty set ∅ is linearly independent.
  2. The set {0} is linearly dependent because λ0 = 0 even for nonzero scalars λ.
  3. Any set {v} containing a single nonzero vector is linearly independent since λv = 0 only when λ = 0.
  4. A set {u, v} containing two nonzero vectors is linearly independent if u is not a scalar multiple of v.
  5. All vectors in a nonempty linearly independent set S are nonzero.
Theorem 02: A nonempty set of vectors S = {v1, v2, … , vn} is linearly dependent if and only if one of the vectors can be written as a linear combination of the remaining vectors.
Suppose that the set S is linearly dependent. Then there exist scalars c1, c2, … , cn, not all zero, for which the corresponding linear combination vanishes. By suitably renumbering the vectors in S, we may assume that c₁ ≠ 0. Therefore, \[ c_1 {\bf v}_1 = - c_2 {\bf v}_2 - \cdots - c_n {\bf v}_n \qquad \Longrightarrow \qquad {\bf v}_1 = - \frac{c_2}{c_1}\,{\bf v}_2 - \cdots - \frac{c_n}{c_1}\,{\bf v}_n . \] Hence, v₁ is represented as a linear combination of the remaining vectors from S.

Conversely, suppose that v₁ can be written as a linear combination of the remaining vectors, v₁ = b2v2 + ⋯ + bnvn. Then \[ {\bf 0} = 1 \cdot {\bf v}_1 - b_2 {\bf v}_2 - \cdots - b_n {\bf v}_n , \] which exhibits a nontrivial vanishing linear combination of vectors from S.

Example 02: Using Mathematica, we randomly (uniformly) generate five vectors of length four with integer entries from the interval [0, 10]:
A = RandomInteger[10, {5, 4}]
{{7, 10, 0, 8}, {2, 0, 1, 10}, {1, 4, 3, 5}, {2, 0, 0, 1}, {3, 9, 1, 10}}
\[ {\bf v}_1 = \begin{pmatrix} 7 \\ 10 \\ 0 \\ 8 \end{pmatrix} , \quad {\bf v}_2 = \begin{pmatrix} 2 \\ 0 \\ 1 \\ 10 \end{pmatrix} , \quad {\bf v}_3 = \begin{pmatrix} 1 \\ 4 \\ 3 \\ 5 \end{pmatrix} , \quad {\bf v}_4 = \begin{pmatrix} 2 \\ 0 \\ 0 \\ 1 \end{pmatrix} , \quad {\bf v}_5 = \begin{pmatrix} 3 \\ 9 \\ 1 \\ 10 \end{pmatrix} . \] We organize these vectors in matrix form (as rows and as columns): \[ {\bf A} = \begin{bmatrix} 7&10&0&8 \\ 2&0&1&10 \\ 1&4&3&5 \\ 2&0&0&1 \\ 3&9&1&10 \end{bmatrix} , \qquad {\bf B} = \begin{bmatrix} 7&2&1&2&3 \\ 10 &0&4&0&9 \\ 0&1&3&0&1 \\ 8&10&5&1&10 \end{bmatrix}. \] So matrix B = [v₁ v₂ v₃ v₄ v₅] is built from column vectors while A uses row vectors. Now we extract a 4 × 4 submatrix from B by eliminating its first column: \[ {\bf BB} = \begin{bmatrix} 2&1&2&3 \\ 0&4&0&9 \\ 1&3&0&1 \\ 10&5&1&10 \end{bmatrix}. \]
BB = B[[1 ;; 4, 2 ;; 5]]
{{2, 1, 2, 3}, {0, 4, 0, 9}, {1, 3, 0, 1}, {10, 5, 1, 10}}
Since the determinant of matrix BB is not zero, its column vectors are linearly independent.
Det[BB]
-401
We show that vector v₁ can be written as a linear combination of the remaining vectors: \[ {\bf v}_1 = c_2 {\bf v}_2 + c_3 {\bf v}_3 + c_4 {\bf v}_4 + c_5 {\bf v}_5 , \] with some scalars c₂, c₃, c₄, c₅. We rewrite this system of linear equations in matrix form: \[ {\bf v}_1 = {\bf BB}\,{\bf c} , \qquad {\bf c} = \begin{pmatrix} c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix} . \] Using Mathematica, we find the values of the scalars: \[ {\bf v}_1 = -\frac{213}{401}\,{\bf v}_2 - \frac{91}{401}\,{\bf v}_3 + \frac{933}{401}\,{\bf v}_4 + \frac{486}{401}\,{\bf v}_5 . \]
Inverse[BB] . {7, 10, 0, 8}
{-(213/401), -(91/401), 933/401, 486/401}
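Equivalently, and somewhat more idiomatically, one can avoid inverting BB and let LinearSolve find the coefficient vector directly from the equation BB.c == v₁:
LinearSolve[BB, {7, 10, 0, 8}]
{-(213/401), -(91/401), 933/401, 486/401}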
End of Example 02

A linear combination of any number of vectors is an expression constructed by adding finitely many vectors multiplied by scalars. The collection of all finite linear combinations of vectors from a set S is a vector space called the span of S, denoted by span(S). A set S spans a vector space V (i.e., V is spanned by S, or is generated by S) if every vector in V is a (finite) linear combination of vectors in S. We also say that S is a generator of the vector space V = span(S). Therefore, any set S of vectors (finite or infinite---it does not matter) generates a vector space, called span(S).

Theorem 03: Let set S = { v1, v2, … , vn } generate a vector space V, so span(S) = V ≠ { 0 }. Then there exists a subset of { v1, v2, … , vn }, consisting of linearly independent vectors that generates V.
We proceed algorithmically by steps.

Step one. By assumption, span(S) = V. If v1, v2, … , vn are linearly independent, then we have proved the statement. Otherwise, one of the vectors, say vn, is a linear combination of the others; removing it does not change the span, so we have \[ V = \mbox{span}({\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_n ) = \mbox{span}({\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_{n-1} ) . \]

Step two. In step one, we have eliminated the vector vn from the set of generators of V, thus V = span(v1, v2, … , vn-1). If v1, v2, … , vn-1 are linearly independent, then we have finished our proof. Otherwise, we go back to step one, that is, one of the vectors v1, v2, … , vn-1 is a linear combination of the others. Again assuming that this is vector vn-1, we get \[ V = \mbox{span}({\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_n ) = \mbox{span}({\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_{n-2} ) . \] It is clear that, after a finite number of steps, n − 1 at most, we get a set in which no vector is a linear combination of the others. Therefore, the remaining set will be linearly independent.

Example 03: Most likely you are familiar with three vectors \[ {\bf i} = (1, 0, 0), \qquad {\bf j} = (0, 1, 0), \qquad {\bf k} = (0, 0, 1) \] that generate ℝ³ or ℚ³ or ℂ³, depending on what field of scalars is used. We consider another set of four vectors \[ S = \left\{ {\bf i}, \ {\bf j}, \ {\bf k}, \ {\bf v} \right\} , \] where v = (1, 1, 1) ∈ ℝ³. This set S is linearly dependent because v = i + j + k. You can eliminate any vector from S to obtain a linearly independent subset.
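A quick machine check of this dependence: the matrix whose rows are the four vectors of S has rank 3, which is smaller than the number of vectors, so S is linearly dependent.
MatrixRank[{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 1}}]
3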
End of Example 03

A vector space is called finite-dimensional if it is generated by a finite set of vectors. A vector space that is not finite-dimensional is called infinite-dimensional.

Bases

  Recall that a set of vectors β is said to generate or span a vector space V if every element from V can be represented as a (finite) linear combination of vectors from β. Now we define a basis of a vector space as a minimal spanning set, by which we mean a spanning set such that no proper subset is a spanning set. Usually we think of a basis as a set of vectors; the order in which we list them is convenient but not important at this moment. In mathematics, curly brackets ({ ⋯ }) are usually used for roster or enumeration notation of a set, where order does not matter. However, Mathematica uses this notation for ordered collections of entries, known as lists.

A subset β ⊂ V of a vector space V is said to be a basis for V if
  • β generates V, so span(β) = V;
  • β is linearly independent.     ▣

If β is a basis (finite or not) for a vector space V, we also say that the elements of β form or constitute a basis for V. This means that for every vector v from V, there exists a finite set of linearly independent vectors {b1, b2, … , bn} ⊂ β such that

\begin{equation} \label{EqBase.1} {\bf v} = c_1 {\bf b}_1 + c_2 {\bf b}_2 + \cdots + c_n {\bf b}_n , \end{equation}
for some scalars c1, c2, … , cn ∈ 𝔽. What makes a basis different from a spanning set is that representation (1) is unique, as the following theorem shows.

Theorem 1 (Unique combination theorem): A subset β ⊂ V of a vector space V is a basis for V if and only if every vector from V can be written as a unique (finite) linear combination of vectors from β.
This theorem is sometimes used as the definition of a basis.  
Suppose that β is a basis of a vector space V. According to the first property of a basis, every vector v ∈ V can be written as a finite linear combination of elements from β: \[ {\bf v} = c_1 {\bf b}_1 + c_2 {\bf b}_2 + \cdots + c_n {\bf b}_n , \] for some scalars c1, c2, … , cn ∈ 𝔽. If there exists another linear combination v = d1b1 + d2b2 + ⋯ + dnbn, then their difference yields 0 = (c1 − d1)b1 + (c2 − d2)b2 + ⋯ + (cn − dn)bn. Since the basis vectors {b1, b2, … , bn} are linearly independent, all coefficients in the equation above must vanish, so c1 = d1, c2 = d2, … , cn = dn, proving uniqueness.

Conversely, suppose that every xV can be written as a unique linear combination of vectors from the basis: x = c1b1 + c2b2 + ⋯ + cnbn. Then β obviously spans V.

In order to prove that the elements of basis β are linearly independent, we consider a linear combination of its elements and equate it to the zero vector: 0 = d1b1 + d2b2 + ⋯ + dmbm, for an arbitrary finite number m ∈ ℕ of them. Then we have \begin{align*} {\bf 0} &= 0 \cdot {\bf b}_1 + 0 \cdot {\bf b}_2 + \cdots + 0\cdot {\bf b}_m \\ &= d_1 {\bf b}_1 + d_2 {\bf b}_2 + \cdots + d_m {\bf b}_m . \end{align*} Since the representation of 0 as a linear combination of vectors in β is unique, it follows that d1 = d2 = ⋯ = dm = 0.

Example 1: The set β = {1, x, x², x³, … , xn, …} is the standard basis of the polynomial space ℝ[x]. This space is infinite-dimensional because it contains polynomials of every degree.
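In Mathematica, the coordinates of a polynomial with respect to this monomial basis are produced by the built-in command CoefficientList (constant term first); for instance,
CoefficientList[5 x^3 - 2 x + 3, x]
{3, -2, 0, 5}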
End of Example 1

Note that a basis of a vector space need not be unique.

  We also say that V is finitely generated if there exists a finite set of generators of V, i.e., V = span(v1, v2, … , vn), n ∈ ℕ. If V admits a basis consisting of finitely many vectors, then it is finitely generated.  

Example 2: The span of the empty set \( \varnothing \) consists of a unique element, 0. Therefore, \( \varnothing \) is linearly independent and it is a basis for the trivial vector space consisting of the unique element---zero. Its dimension is zero.

For future consideration, it is convenient to adopt the following conventions:

  • The zero space is considered to be the span of the empty set: {0} = span(∅).
  • The empty set is considered to be linearly independent.
  • The set containing just the zero vector {0} is considered to be linearly dependent.
End of Example 2
Example 3: In the direct product 𝔽n = 𝔽 × 𝔽 × ⋯ × 𝔽 of n copies of the field (which is either ℚ or ℝ or ℂ), the vectors \( {\bf e}_1 = (1,0,0,\ldots , 0), \quad {\bf e}_2 = (0,1,0,\ldots , 0), \quad \ldots , {\bf e}_n = (0,0,\ldots , 0,1) \) are linearly independent. Therefore, they form a basis for the n-dimensional vector space 𝔽n, called the standard basis. Its dimension is n because its basis consists of n elements. Note that this set of vectors {e1, e2, … , en} is a basis for each of the three vector spaces ℚn, ℝn, and ℂn.

Using the Kronecker delta \[ \delta_{i,j} = \begin{cases} 1, & \quad\mbox{if} \quad i=j , \\ 0, & \quad \mbox{otherwise}, \end{cases} \] we can define the elements of the standard basis in a uniform way: \[ {\bf e}_j = \left( \delta_{1,j}, \ \delta_{2,j}, \ \ldots , \ \delta_{n,j} \right) , \qquad j=1, 2, \ldots , n . \]

This set of vectors spans 𝔽n because every vector u = ( u1, u2, … , un) in 𝔽n can be uniquely expressed as

\[ {\bf u} = u_1 {\bf e}_1 + u_2 {\bf e}_2 + \cdots + u_n {\bf e}_n , \]
which is a linear combination of e1, e2, … , en.

Using the standard basis, we can form another basis β = {b1, b2, … , bn}, where, for instance, \[ {\bf b}_1 = {\bf e}_1, \quad {\bf b}_2 = {\bf e}_1 + {\bf e}_2 = (1, 1, 0 , \ldots , 0), \quad \ldots , {\bf b}_n = {\bf e}_1 + {\bf e}_2 + \cdots + {\bf e}_n = (1, 1, \ldots , 1 ). \] An arbitrary vector from 𝔽n can be written as \begin{align*} {\bf u} &= \left( u_1 , u_2 , \ldots , u_n \right) \\ &= u_1 {\bf e}_1 + u_2 {\bf e}_2 + \cdots + u_n {\bf e}_n \\ &= u_1 {\bf b}_1 + \left( u_2 - u_1 \right) {\bf b}_2 + \cdots + \left( u_n - u_{n-1} \right) {\bf b}_n . \end{align*} We check this with Mathematica for n = 4.

Solve[{b1 == e1, b2 == e1 + e2, b3 == e1 + e2 + e3, b4 == e1 + e2 + e3 + e4}, {e1, e2, e3, e4}]
{{e1 -> b1, e2 -> -b1 + b2, e3 -> -b2 + b3, e4 -> -b3 + b4}}
Since 𝔽n is isomorphic to the space of column vectors 𝔽n×1, the standard basis can be written in column form as well: \[ {\bf e}_1 = \left[ \begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array} \right] , \quad {\bf e}_2 = \left[ \begin{array}{c} 0 \\ 1 \\ \vdots \\ 0 \end{array} \right] , \quad \cdots \quad {\bf e}_n = \left[ \begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \end{array} \right] . \]
End of Example 3
  Let T = { v1, v2, … , vm } ⊂ V. We say that \( \displaystyle S = \left\{ {\bf v}_{k_1} , {\bf v}_{k_2} , \ldots , {\bf v}_{k_n} \right\} \subseteq T \) is a maximal linearly independent subset of T if S is linearly independent and, whenever vi does not belong to S, the set S ∪ { vi } is linearly dependent. So any subset of T properly containing S is linearly dependent.

Theorem 2: Let S = { v1, v2, … , vn } be a finite set of vectors in a vector space V.
  1. Set S = { v1, v2, … , vn } is a basis of V if and only if it is a minimal set of generators of V.
  2. Set S = { v1, v2, … , vn } is a basis of V if and only if it is a maximal set of linearly independent vectors.
Part (i). If S = { v1, v2, … , vn } is a basis of V, then by definition it is a set of generators. We now show that it is also a minimal set with this property. In fact, if we remove any vector from S, then the vector space generated by the remaining vectors changes; otherwise, a vector among { v1, v2, … , vn } would be a linear combination of the others, while these vectors are linearly independent by hypothesis. Conversely, a minimal set of generators is a basis because it consists of linearly independent vectors. Indeed, by minimality, removing any of the generators leaves a set that no longer generates the given vector space, and therefore none of them is a linear combination of the other vectors.

Part (ii). If S is a basis of a vector space V, by definition it is a set of linearly independent vectors, and it is also maximal with respect to this property. Indeed, as v1, v2, … , vn generate V, if u ∈ V, then \[ \mbox{span}({\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_n ) = \mbox{span}({\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_n , {\bf u}) = V , \] and hence u is necessarily a linear combination of S = { v1, v2, … , vn }. Therefore the vectors v1, v2, … , vn , u are linearly dependent.

Conversely, if S = { v1, v2, … , vn } is a maximal set of linearly independent vectors, then adding any other vector v produces a linearly dependent set; that is, there are scalars λ, λ1, λ2, … , λn, not all equal to zero, such that \[ \lambda\,{\bf v} + \lambda_1 {\bf v}_1 + \lambda_2 {\bf v}_2 + \cdots + \lambda_n {\bf v}_n = {\bf 0} . \] We note that λ ≠ 0 must hold, otherwise the vectors { v1, v2, … , vn } would be linearly dependent. Then we express v as a linear combination of these vectors: \[ {\bf v} = - \frac{\lambda_1}{\lambda} \, {\bf v}_1 - \frac{\lambda_2}{\lambda} \, {\bf v}_2 - \cdots - \frac{\lambda_n}{\lambda} \, {\bf v}_n . \] Hence, v ∈ span(v1, v2, … , vn). As we chose v arbitrarily, the set S generates V.

 
Example 4: Let us consider the set of all m × n matrices over a field 𝔽, denoted by 𝔽m×n. Recall that 𝔽 is either ℚ, the set of rational numbers, or ℝ, the set of real numbers, or ℂ, the field of complex numbers. Independently of the field 𝔽, the vector space 𝔽m×n has a basis consisting of the m·n matrices ei,j (1 ≤ i ≤ m,   1 ≤ j ≤ n), where ei,j denotes the matrix whose only nonzero entry is a 1, in the i-th row and j-th column. Its dimension is mn. For example, the vector space 𝔽2×3 has six matrices that constitute a basis for it:

\[ {\bf e}_{1,1} = \begin{bmatrix} 1 & 0&0 \\ 0&0&0 \end{bmatrix} , \quad {\bf e}_{1,2} = \begin{bmatrix} 0&1&0 \\ 0&0&0 \end{bmatrix} , \quad {\bf e}_{1,3} = \begin{bmatrix} 0&0&1 \\ 0&0&0 \end{bmatrix} , \] \[ {\bf e}_{2,1} = \begin{bmatrix} 0&0&0 \\ 1&0&0 \end{bmatrix} , \quad {\bf e}_{2,2} = \begin{bmatrix} 0&0&0 \\ 0&1&0 \end{bmatrix} , \quad {\bf e}_{2,3} = \begin{bmatrix} 0&0&0 \\ 0&0&1 \end{bmatrix} , \] Then any matrix A ∈ 𝔽2×3 can be uniquely decomposed into linear combination of standard matrices: \[ {\bf A} = \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \end{bmatrix} = a_{1,1} {\bf e}_{1,1} + a_{1,2} {\bf e}_{1,2} + a_{1,3} {\bf e}_{1,3} + a_{2,1} {\bf e}_{2,1} + a_{2,2} {\bf e}_{2,2} + a_{2,3} {\bf e}_{2,3} . \]

In particular, for square n by n matrices, we need n² basis matrices, one for every entry. However, for symmetric matrices (A = AT), roughly half of them suffice, because the entries below the main diagonal repeat those above it. Therefore, a basis for the symmetric n×n matrices consists of n(n+1)/2 matrices: the diagonal matrix units ei,i together with the sums ei,j + ej,i for i < j.
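A small sketch verifying this decomposition for 𝔽2×3 with symbolic entries (the helper e[i, j] and the symbols a11, … , a23 are ours, introduced just for this check):
e[i_, j_] := Normal[SparseArray[{{i, j} -> 1}, {2, 3}]] (* matrix unit: single 1 at position (i,j) *)
A = {{a11, a12, a13}, {a21, a22, a23}};
A == Sum[A[[i, j]] e[i, j], {i, 2}, {j, 3}]
True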

End of Example 4

Example 5: The set of monomials \( S = \left\{ 1, x, x^2 , \ldots , x^n \right\} \) forms a basis of the space of all polynomials of degree at most n, denoted by ℝ≤n[x] (for simplicity, we consider only real-valued polynomials). It has dimension n+1. We call S the standard basis for the vector space of polynomials ℝ≤n[x].

However, this standard basis of monomials S is not always convenient in applications---there are many other bases available. For example, the set of Chebyshev polynomials of the second kind { Uk }, k = 0, 1, 2, … , n, constitutes a basis of ℝ≤n[x]. We present a few of them: \begin{align*} U_0 (x) &= 1 , \\ U_1 (x) &= 2\,x , \\ U_2 (x) &= 4\, x^2 -1 , \\ U_3 (x) &= 8\, x^3 - 4\,x , \end{align*} and so on. These polynomials can be defined by the determinant: \[ U_n (x) = \det \begin{bmatrix} 2\,x & 1 & 0 & 0 & \cdots &0&0 \\ 1 & 2\,x & 1 & 0 & \cdots & 0&0 \\ 0&1& 2\, x & 1 & \cdots & 0&0 \\ 0&0&1& 2\,x & \cdots & 0&0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0&0&0&0& \cdots & 1&2\,x \end{bmatrix} , \qquad n=1,2,3,\ldots . \] For simplicity, we consider only the case n = 3, and let β = { U₀, U₁, U₂, U₃ } be the set of the first four Chebyshev polynomials of the second kind. We start by checking whether or not span(U₀, U₁, U₂, U₃) = ℝ≤3[x]. That is, we determine whether or not an arbitrary polynomial a₀ + a₁x + a₂x² + a₃x³ can be written as a linear combination of the Chebyshev polynomials \[ a_0 + a_1 x + a_2 x^2 + a_3 x^3 = c_0 + c_1 2\,x + c_2 \left( 4\, x^2 -1 \right) + c_3 \left( 8\, x^3 - 4\,x \right) , \] for some real scalars c₀, c₁, c₂, c₃. By setting the coefficients of each power of x equal to each other, we arrive at the system of linear equations: \begin{align*} c_0 - c_2 &= a_0 , \\ 2\, c_1 - 4\,c_3 &= a_1 , \\ 4\, c_2 &= a_2 , \\ 8\, c_3 &= a_3 . \end{align*} Mathematica helps us to solve the system of equations above:

Solve[{c0 - c2 == a0, 2 c1 - 4 c3 == a1, 4 c2 == a2, 8 c3 == a3}, {c0, c1, c2, c3}]
{{c0 -> 1/4 (4 a0 + a2), c1 -> 1/4 (2 a1 + a3), c2 -> a2/4, c3 -> a3/8}}
Therefore, the coefficients c₀, c₁, c₂, c₃ are uniquely expressed via the coefficients of the polynomial: \[ c_0 = a_0 + \frac{1}{4}\,a_2 , \quad c_1 = \frac{1}{2}\, a_1 + \frac{1}{4}\, a_3, \quad c_2 = \frac{1}{4}\, a_2 , \quad c_3 = \frac{1}{8}\, a_3 . \] It follows that an arbitrary polynomial of degree at most three is a linear combination of Chebyshev polynomials. In particular, the elements of the standard basis of monomials are uniquely expressed through Chebyshev polynomials. Hence, we can claim that β is a basis of ℝ≤3[x].
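The Chebyshev polynomials of the second kind are built into Mathematica as ChebyshevU, so the members of β can be generated directly and compared with the list above:
Table[ChebyshevU[k, x], {k, 0, 3}]
{1, 2 x, -1 + 4 x^2, -4 x + 8 x^3}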
End of Example 5


Steinitz exchange lemma: Let u1, u2, … , ur be a linearly independent set in the space V and let v1, v2, … , vn be a basis of V. Then r ≤ n, and we may substitute the ui's for r of the vj's in such a way that the resulting set of vectors is still a basis of V.

This lemma was proved by the German mathematician (of Jewish descent) Ernst Steinitz (1871--1928).
Let’s do the substituting one step at a time. Start at k = 0. Now suppose that k < r and that we have relabeled the remaining vi’s so that \[ V = \mbox{span}\left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_k , {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_s \right\} , \] where k + s = n (the dimension of V) and { u1, u2, … , uk, v1, … , vs } is a basis of V.

We show how to substitute the next vector uk+1 into the basis and remove exactly one vj. We know that uk+1 is expressible uniquely as a linear combination of elements of the basis { u1, u2, … , uk, v1, … , vs }. Also, there have to be some vi's left in such a combination if k < r; otherwise, the set of vectors { u1, u2, … , uk+1 } would not be linearly independent. Relabel the vj again so that bs ≠ 0 in the unique expression \[ {\bf u}_{k+1} = a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_k {\bf u}_k + b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_s {\bf v}_s \] for uk+1. Thus, we can solve this equation to express vs uniquely in terms of { u1, u2, … , uk, uk+1, v1, … , vs-1 }. It follows that { u1, u2, … , uk+1, v1, … , vs-1 } is also linearly independent, else the expression for vs would not be unique. From these expressions, we see that \[ \mbox{span}\left\{ {\bf u}_1 , \ldots , {\bf u}_k , {\bf u}_{k+1} , {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_{s-1} \right\} = \mbox{span}\left\{ {\bf u}_1 , \ldots , {\bf u}_k , {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_s \right\} . \] Hence, we have accomplished the substitution of uk+1 into the basis by removal of a single vj and preserved the equality n = k + s = (k + 1)+(s − 1). Continue this process until k = r and we obtain the desired basis of V.

Example 6: Let us consider a vector v = (1 + j, 1 − j, 1) ∈ ℂ³, where j is the imaginary unit, so j² = −1. We know that the vector space ℂ³ has a standard basis consisting of three vectors \[ {\bf e}_1 = \left( 1, \ 0, \ 0 \right) , \qquad {\bf e}_2 = \left( 0, \ 1, \ 0 \right) , \qquad {\bf e}_3 = \left( 0, \ 0,\ 1 \right) . \] We check that the set β = {v, e₁, e₂} is a basis for ℂ³. So we need to show that any vector u = (ξ₁, ξ₂, ξ₃) can be represented uniquely as a linear combination of elements of β: \[ {\bf u} = \left( \xi_1 , \xi_2 , \xi_3 \right) = c_1 {\bf e}_1 + c_2 {\bf e}_2 + c_3 {\bf v} . \] This leads to the system of equations \[ \begin{split} \xi_1 &= c_1 + c_3 \left( 1 + {\bf j} \right) , \\ \xi_2 &= c_2 + c_3 \left( 1 - {\bf j} \right) , \\ \xi_3 &= c_3 . \end{split} \] Solving for c₁, c₂, c₃, we obtain
Solve[{c1 + c3*(1 + I) == a1, c2 + c3*(1 - I) == a2, c3 == a3}, {c1, c2, c3}]
{{c1 -> a1 - (1 + I) a3, c2 -> a2 - (1 - I) a3, c3 -> a3}}
\[ c_1 = \xi_1 - \left( 1 + {\bf j} \right) \xi_3 , \qquad c_2 = \xi_2 - \left( 1 - {\bf j} \right) \xi_3 , \qquad c_3 = \xi_3 . \tag{6.1} \] Since the matrix of this system of equations \[ {\bf A} = \begin{bmatrix} 1&0&1+{\bf j} \\ 0&1&1 - {\bf j} \\ 0&0&1 \end{bmatrix} \qquad \Longrightarrow \qquad \det{\bf A} = 1, \]
A = {{1, 0, 1 + I}, {0, 1, 1 - I}, {0, 0, 1}};
Det[A]
1
is not singular (its determinant is not zero), the system of equations has a unique solution, given by formula (6.1).
End of Example 6

 

Extending Linearly Independent Sets to a Basis


Bases are optimal subsets of vector spaces in the sense that they are large enough to span a space and yet small enough to be linearly independent.
Theorem 4: Let V = span(v1, v2, … , vn) be a finitely generated vector space, and let S be a linearly independent subset of V. Then there is a basis of V consisting of S together with some of the generators vj.
We start with an independent subset S ⊆ V. If it does not generate V, then at least one among the vj's is not in the subspace span(S), and we consider the independent set S1 = S ∪ {vj}. After at most n such adjunctions, we obtain a maximal independent set, hence a basis of V of the required form.
Example 7: Let us consider the following basis of ℝ³: \[ {\bf v}_1 = \left( 1, 0, 2 \right) , \quad {\bf v}_2 = \left( 0, 1, 0 \right) , \quad {\bf v}_3 = \left( 1, 1, 1 \right) . \] The easiest way to check their linear independence is to evaluate the determinant built from these three vectors:
Det[{{1, 0, 2}, {0, 1, 0}, {1, 1, 1}}]
-1
Since the determinant of the matrix, \[ \det \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 1&1&1 \end{bmatrix} = -1 \ne 0, \] is not zero, these vectors are linearly independent. Suppose we are given the set of two standard vectors: \[ S = \left\{ {\bf e}_1 = \left( 1, 0, 0 \right) , \quad {\bf e}_2 = \left( 0, 1, 0 \right) \right\} . \] According to Theorem 4, we can complete S to a basis by adding one of the vectors from the basis {v₁, v₂, v₃}. It turns out that we can use either v₁ or v₃ (but not v₂, which already lies in span(S)). We check our conclusion with Mathematica, appending v₁ and then v₃:
Det[{{1, 0, 0}, {0, 1, 0}, {1, 0, 2}}]
2
and
Det[{{1, 0, 0}, {0, 1, 0}, {1, 1, 1}}]
1
End of Example 7
Corollary 1: Every finite-dimensional vector space has a basis.
Suppose that V is a finite-dimensional vector space with \[ V = \mbox{span}\left( {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_n \right) . \] Now if the set {v1, v2, … ,vn} has a redundant vector in it, discard it and obtain a smaller spanning set of V. Continue discarding vectors until you reach a spanning set for V that has no redundant vectors in it. (Since you start with a finite set, this can’t go on indefinitely.) By the redundancy test, this spanning set must be linearly independent. Hence, it is a basis of V.

We give a constructive proof. If V = {0}, then V is the span of the empty set, and we are done.

  1. Suppose that V contains some non-zero vector. Pick a non-zero vector v1 in V. If V = span{v1}, we are done.
  2. Otherwise, pick a vector v2 in V that is not in span{v1}. If V = span{v1, v2}, we are done.
  3. Otherwise, pick a vector v3 in V that is not in span{v1, v2}. If V = span{v1, v2, v3}, we are done.
  4. Otherwise, pick a vector v4 in V that is not in span{v1, v2, v3}, and so on.
Continue in this way. Note that after the j-th step of this process, the vectors v1, v2, … , vj are linearly independent. This is because, by construction, no vector is in the span of the previous vectors, and therefore, no vector is redundant.
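A minimal sketch of this greedy procedure (the helper name extendBasis is ours): keep a vector exactly when appending it increases the rank, that is, when it does not lie in the span of the vectors already kept. Applied to the seven vectors of the next example, it reproduces the basis found there by hand.
extendBasis[vecs_List] := Module[{basis = {}},
  Do[(* the rank grows exactly when v enlarges the span *)
   If[MatrixRank[Append[basis, v]] > Length[basis], AppendTo[basis, v]], {v, vecs}];
  basis]
extendBasis[{{1, 0, 0, 0}, {1, 1, 0, 0}, {1, I, 0, 0}, {1, 1, 1, 0}, {1, 1, I, 0}, {1, I, 1, 1}, {1, I, -1, I}}]
{{1, 0, 0, 0}, {1, 1, 0, 0}, {1, 1, 1, 0}, {1, I, 1, 1}}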
Example 8: We consider ℂ4 generated by seven vectors \[ {\bf v}_1 = \left( 1, \ 0, \ 0, \ 0 \right) , \quad {\bf v}_2 = \left( 1, \ 1, \ 0, \ 0 \right) , \quad {\bf v}_3 = \left( 1, \ {\bf j}, \ 0, \ 0 \right) , \quad {\bf v}_4 = \left( 1, \ 1, \ 1, \ 0 \right) \] \[ {\bf v}_5 = \left( 1, \ 1, \ {\bf j}, \ 0 \right) , \quad {\bf v}_6 = \left( 1, \ {\bf j}, \ 1, \ 1 \right) , \quad {\bf v}_7 = \left( 1, \ {\bf j}, \ -1, \ {\bf j} \right) . \] Since vectors v₁ and v₂ are linearly independent, we start with the space S₂ = span{v₁, v₂}. We need to add another linearly independent vector. If we choose v₃, we need to check whether these three vectors are linearly independent. So we consider the equation \[ c_1 \left( 1, \ 0, \ 0, \ 0 \right) + c_2 \left( 1, \ 1, \ 0, \ 0 \right) + c_3 \left( 1, \ {\bf j}, \ 0, \ 0 \right) = \left( 0, \ 0, \ 0, \ 0 \right) , \] with some complex coefficients c₁, c₂, c₃. Upon equating the coordinates to zero, we get the system of equations \[ \begin{split} c_1 + c_2 + c_3 &= 0 , \\ c_2 + c_3 {\bf j} &= 0 . \end{split} \] With the aid of Mathematica, we solve this system.
Solve[{c1 + c2 + c3 == 0, c2 + c3*I == 0}, {c1, c2}]
{{c1 -> (-1 + I) c3, c2 -> -I c3}}
Since this system has a nontrivial solution, we discard vector v₃ and consider v₄ instead. Since the three vectors v₁, v₂, and v₄ are linearly independent, we form the space S₃ = span{v₁, v₂, v₄}. It turns out that vector v₅ depends linearly on the vectors in S₃. Indeed, \[ c_1 \left( 1, \ 0, \ 0, \ 0 \right) + c_2 \left( 1, \ 1, \ 0, \ 0 \right) + c_3 \left( 1, \ 1, \ 1, \ 0 \right) + c_4 \left( 1, \ 1, \ {\bf j}, \ 0 \right) = \left( 0, \ 0, \ 0, \ 0 \right) . \] To find the coefficients, we need to solve the system of equations (obtained by equating each coordinate to zero): \[ \begin{split} c_1 + c_2 + c_3 + c_4 &= 0 , \\ c_2 + c_3 + c_4 &= 0 , \\ c_3 + c_4 {\bf j} &= 0 . \end{split} \] We delegate this job to Mathematica.
Solve[{c1 + c2 + c3 + c4 == 0, c2 + c3 + c4 == 0, c3 + c4*I == 0}, {c1, c2, c3}]
{{c1 -> 0, c2 -> (-1 + I) c4, c3 -> -I c4}}
Since this system has a nontrivial solution, the four vectors {v₁, v₂, v₄, v₅} are linearly dependent. Hence, we disregard vector v₅ and choose v₆ instead. The collection of vectors S₄ = {v₁, v₂, v₄, v₆} is linearly independent; therefore, the set S₄ is a basis for ℂ4. Note that we can replace v₆ with v₇ and still get a basis. This example also shows that a basis for a vector space is not unique.
End of Example 8
Theorem 5: Let U be a subspace of a finite-dimensional vector space V. Then there exists a subspace W of V such that V = U ⊕ W.
Let dimV = n and dimU = m. If U = {0}, then W = V and, on the other hand, if U = V, then W = {0}. Hence assume that neither U = {0} nor U = V.

When U is a proper subspace of V, its dimension satisfies 1 ≤ m < n. Let β₁ = {b1, b2, … , bm} be a basis of U. Since β₁ is linearly independent, it can be extended, according to Theorem 4, to a basis of V, say β = {b1, b2, … , bm, bm+1, … , bn}. Now β = β₁ ∪ β₂, with β₂ = {bm+1, … , bn} and β₁ ∩ β₂ = ∅. Let W be the subspace spanned by β₂. Since V and W are spanned by β and β₂, respectively, V = U + W and U ∩ W = {0}, i.e., V = U ⊕ W.

Example 9: We consider the vector space of square matrices V = ℝn×n, whose dimension is n². It is natural to split this vector space into a sum of three subspaces: the strictly upper triangular matrices, denoted by U, the strictly lower triangular matrices, denoted by L, and the diagonal matrices, Λ. These three subspaces are mutually exclusive: U ∩ L = U ∩ Λ = Λ ∩ L = {0}, so the vector space V is the direct sum of the three subspaces: \[ \mathbb{R}^{n \times n} = L \oplus \Lambda \oplus U . \] Then any square matrix A can be uniquely split into a sum of three matrices, A = L + Λ + U, where \[ {\bf L} = \begin{bmatrix} 0&0&0& \cdots &0 \\ \bullet & 0 & 0 & \cdots & 0 \\ \bullet & \bullet & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \bullet & \bullet & \bullet & \cdots & 0\end{bmatrix} , \quad \Lambda = \begin{bmatrix} \bullet & 0 & 0 & \cdots & 0 \\ 0 & \bullet & 0 & \cdots & 0 \\ 0&0&\bullet & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \cdots & \bullet \end{bmatrix} , \quad {\bf U} = \begin{bmatrix} 0 & \bullet & \bullet & \cdots & \bullet \\ 0&0& \bullet & \cdots & \bullet \\ 0&0&0& \cdots & \bullet \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \cdots & 0\end{bmatrix} . \] A basis for the vector space of strictly lower triangular matrices can be chosen from the matrices having a single nonzero entry, at position (i, j) with i > j. There are n(n−1)/2 such matrices in total. For example, when n = 3, we have three such matrices that constitute a basis for L: \[ \begin{bmatrix} 0&0&0 \\ 1&0&0 \\ 0&0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&0&0 \\ 0&0&0 \\ 2&0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&0&0 \\ 0&0&0 \\ 0&3&0 \end{bmatrix} . \] Of course, in place of these three nonzero numbers one could use any nonzero numbers.
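A brief sketch of this splitting for a concrete matrix, using the built-in commands LowerTriangularize, UpperTriangularize, and DiagonalMatrix (the variable names low, diag, and up are ours):
A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
low = LowerTriangularize[A, -1]; (* strictly lower triangular part *)
diag = DiagonalMatrix[Diagonal[A]]; (* diagonal part *)
up = UpperTriangularize[A, 1]; (* strictly upper triangular part *)
A == low + diag + up
True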
End of Example 9

 

Subspaces of 𝔽n


Although Corollary 1 provides a constructive proof of how to determine a basis from a set of generators, we need a more efficient way to build bases, because it is tedious to solve linear equations every time we need to find a linear dependency among vectors or to prove their generating property. Therefore, we present two techniques for basis determination in the special case when the vector space V is a subspace of the Cartesian product 𝔽n = 𝔽 × 𝔽 × ⋯ × 𝔽 of n copies of a field 𝔽 (which is either ℚ, or ℝ, or ℂ). Elements of 𝔽n are n-tuples x = (x1, x2, … , xn) that can easily be put into matrix form, either as row vectors from 𝔽1×n or as column vectors from 𝔽n×1, because all three spaces are isomorphic: 𝔽n ≌ 𝔽1×n ≌ 𝔽n×1. Be aware that while all three vector representations (as n-tuples, row vectors, or column vectors) may look the same to you, computers disagree with humans and treat all three kinds of vectors differently.

There are a couple of reasons to consider subspaces of 𝔽n. First of all, you will learn in a special section that every finite-dimensional vector space over a field 𝔽 is essentially the same as some 𝔽n. Therefore, the majority of examples in this course involve subspaces of 𝔽n. Another great advantage of these subspaces is that the corresponding vectors and matrices are naturally entered into computers, and any software package has no problem operating with them. Strictly speaking, this is not true of every element of ℝn or ℂn, because computers understand only the finitely many symbols on your keyboard and cannot operate with irrational numbers, only with their finite approximations.

We have already seen that the Gaussian algorithm is an efficient tool for solving linear systems of equations. Now we are going to extend it to determine basis vectors directly, without having to set up a linear system.

We observe that, if we have a matrix A ∈ 𝔽m×n (with m rows and n columns), we can treat its rows as n-vectors of 𝔽n; such vectors will be called the row vectors of A. For example, if

\[ {\bf A} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \end{bmatrix} , \]
its row vectors are r₁ = (1, 2, 3) and r₂ = (4, 5, 6). We formulate the following theorems only for a particular case of 𝔽 = ℝ in order to make our presentation friendly.
Theorem 6: For a given matrix A ∈ ℝm×n, the elementary row operations do not change the subspace of ℝn generated by the row vectors of A.
Recall that the elementary row operations are:
  1. exchange of two rows;
  2. multiplying a row by a real number other than 0;
  3. replacing the i-th row with the sum of the i-th row and the j-th row multiplied by any real number.
It is immediate to verify that the statement is true for operations of types 1 and 2. For operations of type 3, it is sufficient to show that if ri and rj are two row vectors of A and α ∈ ℝ, then span(ri , rj + αri) = span(ri , rj). We obviously have that ri, rj + αri ∈ span(ri , rj), so span(ri , rj) is a subspace of ℝn containing ri and rj + αri. Then, because the span of any set S is the smallest subspace containing S, we have span(ri , rj + αri) ⊆ span(ri , rj).

The inclusion span(ri , rj) ⊆ span(ri , rj + αri) is shown in a similar way, taking into account the fact that rj = (rj + αri) − αri. Thus, rj ∈ span{ri , rj + αri}.

Observation: The elementary row operations do not change the subspace of ℝn generated by the row vectors of matrix A, but they do change the subspace of ℝm generated by the column vectors of A.
Theorem 7: If a matrix A is transformed into a row echelon form U, then the nonzero row vectors of U are linearly independent.
Let r1, r2, … , rk be the nonzero rows of U, and let \( a_{1, j_1}, a_{2, j_2}, \ldots , a_{k, j_k} \) be the corresponding pivots, located in columns j1 < j2 < ⋯ < jk. Now we consider a linear combination of these rows, λ1r1 + λ2r2 + ⋯ + λkrk = 0, and we want to prove that λ1 = λ2 = ⋯ = λk = 0. In the vector λ1r1 + λ2r2 + ⋯ + λkrk, the element in position j1 is \( \displaystyle \lambda_1 a_{1, j_1} , \) the element in position j2 is \( \displaystyle \lambda_1 a_{1, j_2} + \lambda_2 a_{2, j_2} , \) and so on, until we reach the element in position jk, which is \( \displaystyle \lambda_1 a_{1, j_k} + \lambda_2 a_{2, j_k} + \cdots + \lambda_k a_{k, j_k} . \) So, from the fact that λ1r1 + λ2r2 + ⋯ + λkrk = 0, it follows that: \[ \begin{cases} \lambda_1 a_{1, j_1} &= 0 , \\ \lambda_1 a_{1, j_2} + \lambda_2 a_{2, j_2} &= 0, \\ \qquad \vdots & \quad \vdots \\ \lambda_1 a_{1, j_k} + \lambda_2 a_{2, j_k} + \cdots + \lambda_k a_{k, j_k} &= 0 . \end{cases} \] Since \( \displaystyle a_{1, j_1} \ne 0 , \) from the first equation we get λ1 = 0. Substituting λ1 = 0 into the second equation, and since \( \displaystyle a_{2, j_2} \ne 0 , \) we get that λ2 = 0, and so on. After k steps we obtain λk = 0. Hence, λ1 = λ2 = ⋯ = λk = 0, and this shows that the rows r1, r2, … , rk are linearly independent.

Application of the ubiquitous row reduction procedure provides a way to construct a basis from a finite set of generators.

Algorithm for finding a basis of V = span{v1, v2, … , vn}:
  1. Build the matrix A = [v1, v2, … , vn] whose j-th column vector is vj.
  2. Row reduce A to a row echelon form U.
  3. The set of all vj such that the j-th column of U contains a pivot is a basis for V.
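A short sketch implementing this algorithm (the helper basisFromSpan is ours, not a built-in). It relies on the fact that RowReduce returns the reduced row echelon form, in which every entry to the left of a pivot is zero and every pivot equals 1, so the first 1 in each nonzero row marks a pivot column:
basisFromSpan[vecs_List] := Module[{rr = RowReduce[Transpose[vecs]], piv},
  piv = Cases[FirstPosition[#, 1] & /@ rr, {j_Integer} :> j]; (* pivot columns; zero rows give Missing and are dropped *)
  vecs[[piv]]]
basisFromSpan[{{1, 0}, {2, 0}, {0, 1}}]
{{1, 0}, {0, 1}}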
Example 10: Let β be a subset of ℝ4 given by the four vectors that are randomly generated by Mathematica:
A = RandomInteger[{-3, 3}, {4, 4}]
{{1, -2, -2, -2}, {0, 1, 2, -3}, {3, 3, 3, 2}, {3, 2, 2, 1}}
\[ {\bf w} = \left( 1, -2, -2, -2 \right) , \quad {\bf x} = \left( 0, 1, 2, -3 \right) , \quad {\bf y} = \left( 3, 3, 3, 2 \right) , \quad {\bf z} = \left( 3, 2, 2, 1 \right) . \] Since there are four vectors in β, if β is an independent set, then it will be a basis of ℝ4. By defining A to be the matrix whose rows are the four vectors from β, we can compute its reduced form UA. If the result contains four pivots, then β really is a basis. Here A is written out explicitly: \[ {\bf A} = \begin{bmatrix} 1 & -2 & -2 & -2 \\ 0&1&2&-3 \\ 3&3&3&2 \\ 3&2&2&1 \end{bmatrix} . \]
RowReduce[A] // MatrixForm
\( \displaystyle \begin{pmatrix} 1&0&0&0 \\ 0& 1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{pmatrix} \)
Since the output is the identity matrix, the collection of vectors β is a basis. Then we can find unique coefficients c₁, c₂, c₃, and c₄ so that s = c₁w + c₂x + c₃y + c₄z for any vector s. We choose s = (1, 2, 3, 4) and find the required scalars from the equation \[ \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \left( {\bf A}^{\mathrm{T}} \right)^{-1} {\bf s} , \] where s is written as a column. We build the augmented matrix Au:
s = {1, 2, 3, 4}
Au = Join[Transpose[A], Transpose[{s}], 2]
\( \displaystyle \begin{pmatrix} 1&0&3&3&1 \\ -2&1&3&2&2 \\ -2&2&3&2&3 \\ -2&-3&2&1&4 \end{pmatrix} \)
RowReduce[Au] // MatrixForm
\( \displaystyle \begin{pmatrix} 1&0&0&0& 19 \\ 0& 1&0&0&1 \\ 0&0&1&0& 51 \\ 0&0&0&1& -57 \end{pmatrix} \)
We check with Mathematica
(coeffVals = Inverse[Transpose[A]] . s) // MatrixForm
\( \displaystyle \begin{pmatrix} 19 \\ 1 \\ 51 \\ -57 \end{pmatrix} \)
Now we have \[ c_1 = 19 , \quad c_2 = 1 , \quad c_3 = 51 , \quad c_4 = -57 . \] Let us verify our solution with Mathematica
coeffVals[[1]]*w + coeffVals[[2]]*x + coeffVals[[3]]*y + coeffVals[[4]]*z
{1, 2, 3, 4}
Note that if we write the unknown scalars c₁, c₂, c₃, and c₄ as a row vector (a matrix of size 1 × 4) instead, we end up with the equation s = (c₁, c₂, c₃, c₄) A. We can find the unknown coefficients without a problem: \[ \left( c_1 , c_2 , c_3 , c_4 \right) = {\bf s}\,{\bf A}^{-1} . \] Let Mathematica work for us
A = {{1, -2, -2, -2}, {0, 1, 2, -3}, {3, 3, 3, 2}, {3, 2, 2, 1}};
s = {1, 2, 3, 4};
s.Inverse[A] // TraditionalForm
{19, 1, 51, -57}

 

We consider another matrix \[ {\bf B} = \begin{bmatrix} 2 & 3 & 2 & 1 \\ 0&1&2&-3 \\ 3&3&3& 2 \\ -1&1&1&-4 \end{bmatrix} . \] We apply row operations to reduce matrix B to an upper triangular matrix. To achieve this, we use the following subroutine written by Nasser M. Abbasi (https://12000.org/my_notes/rref/index.htm).

displayREF[Ain_?(MatrixQ[#] &), displayMat_ : True, normalizePivot_ : True] := Module[{multiplier, j, i, pivotRow, pivotCol, nRows, nCols, p, tmp, startAgain, A = Ain, n, m, pivotsFound = {}, keepEliminating, nIter, entry}, Print[MatrixForm[A]]; {nRows, nCols} = Dimensions[A]; keepEliminating = True; n = 1; m = 1; nIter = 0; While[keepEliminating, nIter++; If[nIter > 100,(*safe guard*) Return["Internal error. Something went wrong. Or very large \ system?", Module]]; If[m == nCols, keepEliminating = False, Print["Pivot is A(", n, ",", m, ")"]; If[displayMat, Print@makeNiceMatrix[A, {n, m}]]; If[A[[n, m]] =!= 0, If[normalizePivot, If[A[[n, m]] =!= 1, A[[n, All]] = A[[n, All]]/A[[n, m]]; A = Simplify[A]; Print["Making the pivot 1 using using row(", n, ")= row(", n, ")/A(", n, ",", m, ")"]]]; If[n < nRows, Do[If[A[[j, m]] =!= 0, multiplier = A[[j, m]]/A[[n, m]]; Print["Zeroing out element A(", j, ",", m, ") using row(", j, ")=", multiplier, "*row(", n, ")-row(", j, ")"]; A[[j, m ;;]] = A[[j, m ;;]] - multiplier*A[[n, m ;;]]; A = Simplify[A]; If[displayMat, Print@makeNiceMatrix[A, {n, m}]]], {j, n + 1, nRows}];]; pivotsFound = AppendTo[pivotsFound, {n, m}]; If[n == nRows, keepEliminating = False, n++; If[m < nCols, m++]], Print["Pivot is zero"]; If[n == nRows && m == nCols, keepEliminating = False,(*pivot is zero.If we can find non- zero pivot row below,then exchange rows*) If[n < nRows, p = FirstPosition[A[[n + 1 ;;, m]], _?(# =!= 0) &]; If[p === Missing["NotFound"] || Length[p] == 0, If[m < nCols, m++, keepEliminating = False],(*found non zero pivot below.Exchange rows*) tmp = A[[n, All]]; A[[n, All]] = A[[First[p] + n, All]]; A[[First[p] + n, All]] = tmp; A = Simplify[A]; Print["Exchanging row(", n, ") and row(", First[p] + n, ")"]; If[displayMat, Print@makeNiceMatrix[A, {n, m}]]], If[m < nCols, m++, keepEliminating = False]]]]]]; (*pivotsFound=DeleteDuplicates[pivotsFound];*) If[displayMat, Print@makeNiceMatrix[A, {n, m}]]; Print[">>>>>>Starting backward elimination phase. The pivots are ", pivotsFound]; Do[pivotRow = First@entry; pivotCol = Last@entry; If[pivotRow > 1, Do[If[A[[i, pivotCol]] =!= 0, Print["Zeroing out element A(", i, ",", pivotCol, ") using row(", i, ")=row(", i, ")-A(", i, ",", pivotCol, ")*row(", pivotRow, ")"]; A[[i, ;;]] = A[[i, ;;]] - A[[i, pivotCol]]*A[[pivotRow, ;;]]; A = Simplify[A]; If[displayMat, Print@makeNiceMatrix[A, {pivotRow, pivotCol}]]], {i, pivotRow - 1, 1, -1}]], {entry, pivotsFound}]; {A, pivotsFound[[All, 2]]}] makeSolutionSpecialCase[A_?(MatrixQ[#] &), b_?(VectorQ[#] &), pivotCols_List] := Module[{nRows, nCols, nLeadingVariables, nFreeVariables, n, m, k, variables = {}, eq, freeVariables, sol = {}}, Print["Pivot columns are ", MatrixForm[pivotCols]]; ClearAll[x, t];(*did not make them local, to prevent $ from showing in print*){nRows, nCols} = Dimensions[A]; nLeadingVariables = Length[pivotCols]; nFreeVariables = nCols - nLeadingVariables; Print["There are ", nLeadingVariables, " leading variables and ", nFreeVariables, " free variables. These are "]; Array[t, nFreeVariables]; Array[x, nCols]; m = 0; k = 0; Do[If[Not[MemberQ[pivotCols, n]], m++; Print[x[n], " is a free variable. Let ", x[n], "=", t[m]]; AppendTo[variables, t[m]]; AppendTo[sol, 0], Print[x[n], " is a leading variable"]; AppendTo[variables, x[n]]; AppendTo[sol, b[[++k]]]], {n, 1, nCols}]; freeVariables = (t[#] & /@ Range[nFreeVariables]); Print["Hence the system after RREF is the following>>>>>>"]; Print[MatrixForm[A . 
variables], "=", MatrixForm[b]]; Print["There is different solution for different value of the free \ variables."]; Print["Setting free variable ", freeVariables, " to zero gives"]; variables = variables /. ((t[#] -> 0) & /@ Range[nFreeVariables]); Print[MatrixForm[A . variables], "=", MatrixForm[b]]; Print["Therefore the final solution is "]; Print[MatrixForm[x[#] & /@ Range[nCols]], "=", MatrixForm[sol]]] makeNiceMatrix[mat_?MatrixQ, pivot_List] := Module[{g, nRow, nCol}, {nRow, nCol} = Dimensions[mat]; g = Grid[mat, Frame -> {None, None, {pivot -> True}}]; MatrixForm[{{g}}]] (*thanks to \ http://mathematica.stackexchange.com/questions/60613/how-to-add-a-\ vertical-line-to-a-matrix*) (*makes a dash line inside Matrix*) Format[matWithDiv[n_, opts : OptionsPattern[Grid]][m_?MatrixQ]] := MatrixForm[{{Grid[m, opts, Dividers -> {n -> {Red, Dashed}}]}}];
We apply displayREF to our matrix B:
B = {{2, 3, 2, 1}, {0,1,2,-3}, {3,3,3, 2}, {-1,1,1,-4}};
displaymat = True;
normalizePivot = False;
{result, pivots} = displayREF[B, displaymat, normalizePivot]
\( \displaystyle \begin{pmatrix} 2 & 3 & 2 & 1 \\ 0&1&2&-3 \\ 3&3&3& 2 \\ -1&1&1&-4 \end{pmatrix} \)
Pivot is A(1,1)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&1&2&-3 \\ 3&3&3& 2 \\ -1&1&1&-4 \end{pmatrix} \)
Zeroing out element A(3,1) using row(3)=3/2*row(1)-row(3)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&1&2&-3 \\ 0&-\frac{3}{2}&0& \frac{1}{2} \\ -1&1&1&-4 \end{pmatrix} \)
Zeroing out element A(4,1) using row(4)=-(1/2)*row(1)-row(4)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&1&2&-3 \\ 0&-\frac{3}{2}&0& \frac{1}{2} \\ 0&\frac{5}{2}&2&-\frac{7}{2} \end{pmatrix} \)
Pivot is A(2,2)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&\fbox{1}&2&-3 \\ 0&-\frac{3}{2}&0& \frac{1}{2} \\ 0&\frac{5}{2}&2&-\frac{7}{2} \end{pmatrix} \)
Zeroing out element A(3,2) using row(3)=-(3/2)*row(2)-row(3)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&\fbox{1}&2&-3 \\ 0&0&3& -4 \\ 0&\frac{5}{2}&2&-\frac{7}{2} \end{pmatrix} \)
Zeroing out element A(4,2) using row(4)=5/2*row(2)-row(4)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&\fbox{1}&2&-3 \\ 0&0&3& -4 \\ 0&0&-3&4 \end{pmatrix} \)
Pivot is A(3,3)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&\fbox{1}&2&-3 \\ 0&0&\fbox{3}& -4 \\ 0&0&-3&4 \end{pmatrix} \)
Zeroing out element A(4,3) using row(4)=-1*row(3)-row(4)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&\fbox{1}&2&-3 \\ 0&0&\fbox{3}& -4 \\ 0&0&0&0 \end{pmatrix} \)
\( \displaystyle \begin{pmatrix} \fbox{2} & 3 & 2 & 1 \\ 0&\fbox{1}&2&-3 \\ 0&0&\fbox{3}& -4 \\ 0&0&0&\fbox{0} \end{pmatrix} \)
>>>>>>Starting backward elimination phase. The pivots are {{1,1},{2,2},{3,3}}
Zeroing out element A(1,2) using row(1)=row(1)-A(1,2)*row(2)
Zeroing out element A(2,3) using row(2)=row(2)-A(2,3)*row(3)
Zeroing out element A(1,3) using row(1)=row(1)-A(1,3)*row(3)
{{{2, 0, 8, -6}, {0, 1, -4, 5}, {0, 0, 3, -4}, {0, 0, 0, 0}}, {1, 2, 3}}
This leads to three linearly independent vectors that constitute a basis of their span: \[ {\bf v}_1 = \left( 2, \ 0, \ 8,\ -6 \right) , \quad {\bf v}_2 = \left( 0,\ 1, \ -4, \ 5 \right) , \quad {\bf v}_3 = \left( 0,\ 0, \ 3, \ -4 \right) . \] We check with the built-in Mathematica command
RowReduce[B]
{{1, 0, 0, 7/3}, {0, 1, 0, -(1/3)}, {0, 0, 1, -(4/3)}, {0, 0, 0, 0}}
So Mathematica provides another list of linearly independent vectors that also form a basis for the same space.
End of Example 10
 
Example 11: We are going to find a basis for the subspace V5 spanned by
v₁ = (1, -1, 0, 2, 3) v₂ = (2, 1, -2, 3, 1)
v₃ = (0, -3, 2, 1, 1) v₄ = (-1, 3, 2, -3, 4)
v₅ = (1, -9, 2, 5, 2) v₆ = (3, -8, 0, 8, -1)

Using Mathematica, we define these vectors and then build a matrix from these vectors.

v1 = {1, -1, 0, 2, 3}; v2 = {2, 1, -2, 3, 1}; v3 = {0, -3, 2, 1, 1}; v4 = {-1, 3, 2, -3, 4}; v5 = {1, -9, 2, 5, 2}; v6 = {3, -8, 0, 8, -1}
A = {v1, v2, v3, v4, v5, v6}
{{1, -1, 0, 2, 3}, {2, 1, -2, 3, 1}, {0, -3, 2, 1, 1}, {-1, 3, 2, -3, 4}, {1, -9, 2, 5, 2}, {3, -8, 0, 8, -1}}
\[ {\bf A} = \begin{bmatrix} 1& -1& 0& 2& 3 \\ 2& 1& -2& 3& 1 \\ 0& -3& 2& 1& 1 \\ -1& 3& 2& -3& 4 \\ 1& -9& 2& 5& 2 \\ 3& -8& 0& 8& -1 \end{bmatrix} . \]
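The notebook stops here; a minimal way to finish the example, following the algorithm above, is to row reduce A (the reduced form below was computed by hand and should match the notebook's output). The rank is 4, so the four nonzero rows of the reduced matrix form a basis, and the span of v₁, … , v₆ is a four-dimensional subspace of ℝ5.
MatrixRank[A]
4
RowReduce[A]
{{1, 0, 0, 8/5, 0}, {0, 1, 0, -(2/5), 0}, {0, 0, 1, -(1/10), 0}, {0, 0, 0, 0, 1}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}}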
End of Example 11
Invertible Matrix Theorem 8: Every invertible n × n matrix determines a basis for either 𝔽1×n or 𝔽n×1 by extracting either its rows or columns, respectively.
If we combine the vectors x1, x2, … , xn into an n × n matrix A = [x1 x2 ⋯ xn] and use the vector x = [c1, c2, … , cn] to form the homogeneous equation A x = 0, then we know that A is invertible if and only if the equation A x = 0 has only the trivial solution. Since \[ {\bf A}\,{\bf x} = \left[ {\bf x}_1 \ {\bf x}_2 \ \cdots \ {\bf x}_n \right] \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = c_1 {\bf x}_1 + \cdots + c_n {\bf x}_n , \] this statement is equivalent to saying that A x = 0 has only the trivial solution if and only if the vectors x1, x2, … , xn are linearly independent. For an invertible matrix this is the case, because its row echelon form contains n pivots; so its column vectors are linearly independent and form a basis for 𝔽n×1.

Example 12: We use Mathematica to randomly (uniformly) generate a 4 × 4 matrix
A = RandomInteger[9, {4, 4}]
{{3, 2, 7, 5}, {9, 3, 9, 0}, {1, 5, 7, 4}, {0, 0, 4, 4}}
\[ {\bf A} = \begin{bmatrix} 3&2&7&5 \\ 9&3&9&0 \\ 1&5&7&4 \\ 0&0&4&4 \end{bmatrix} . \] Next, we check that matrix A is invertible
Det[A]
-240
Then the rows of matrix A form a basis for 𝔽1×n (where 𝔽 is either ℚ or ℝ or ℂ): \[ \left( 3, 2, 7, 5 \right) , \quad \left( 9, 3, 9, 0 \right) , \quad \left( 1, 5, 7, 4 \right) , \quad \left( 0, 0, 4, 4 \right) . \] Correspondingly, the columns of matrix A constitute a basis for 𝔽n×1: \[ \begin{pmatrix} 3 \\ 9 \\ 1 \\ 0 \end{pmatrix} , \quad \begin{pmatrix} 2 \\ 3 \\ 5 \\ 0 \end{pmatrix} , \quad \begin{pmatrix} 7 \\ 9 \\ 7 \\ 4 \end{pmatrix} , \quad \begin{pmatrix} 5 \\ 0 \\ 4 \\ 4 \end{pmatrix} . \]
End of Example 12

The columns of every invertible (nonsingular) matrix give a basis for 𝔽n×1. The converse statement is also true: vectors b1, b2, … , bn constitute a basis for 𝔽n exactly when the matrix formed from these vectors is invertible.

The theorem above is applicable only to subspaces of 𝔽n. In this case, we use indexed sets of vectors, which are actually lists, while the definition of a basis uses sets, which do not require any ordering of their elements. All software packages use lists, arrays, vectors, or similar ordered objects, but not sets, because every datum in a computer has an address or label, which means indexing.

When the vectors are not n-tuples, the homogeneous equation c1x1 + c2x2 + ⋯ + cnxn = 0 usually cannot be written directly in the succinct form A x = 0 involving column vectors. In this case, you need either to use the definition of linear dependence and Theorem 02, or to wait until the coordinatization section to transfer your problem into the vector equation A x = 0.

If we consider the n × n identity matrix, then extracting its rows or columns provides the standard basis

\[ {\bf e}_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} , \quad {\bf e}_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} , \quad \cdots \quad , {\bf e}_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} \]

Mathematica has a dedicated command that allows a user to define standard basis vectors.

Table[KroneckerDelta[x, y], {x, 3}, {y, 3}] // MatrixForm
\( \displaystyle \begin{pmatrix} 1&0&0 \\ 0& 1&0 \\ 0&0&1 \end{pmatrix} \)
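Equivalently, the built-in commands IdentityMatrix and UnitVector produce the same standard basis vectors directly:
Table[UnitVector[3, k], {k, 3}] (* same as the rows of IdentityMatrix[3] *)
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}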

 

  The next example demonstrates how Mathematica can determine a basis, or a set of linearly independent vectors, from a given set. Note that a basis is not unique; even changing the order of the vectors may lead the software to produce another set of linearly independent vectors.

Example 13: Suppose we are given four linearly dependent vectors: \[ (1, 2, 0, -3, 1, 0), \quad (1, 2, 2, -3, 1, 2), \quad (1, 2, 1, -3, 1, 1), \quad (3, 6, 1, -9, 4, 3) . \]
MatrixRank[m = {{1, 2, 0, -3, 1, 0}, {1, 2, 2, -3, 1, 2}, {1, 2, 1, -3, 1, 1}, {3, 6, 1, -9, 4, 3}}]
Out[1]= 3

Then each of the following scripts determine a subset of linearly independent vectors:

m[[ Flatten[ Position[#, Except[0, _?NumericQ], 1, 1]& /@
Last @ QRDecomposition @ Transpose @ m ] ]]
Out[2]= {{1, 2, 0, -3, 1, 0}, {1, 2, 2, -3, 1, 2}, {3, 6, 1, -9, 4, 3}}

or, using subroutine

MinimalSublist[x_List] :=
Module[{tm, ntm, ytm, mm = x}, {tm = RowReduce[mm] // Transpose,
ntm = MapIndexed[{#1, #2, Total[#1]} &, tm, {1}],
ytm = Cases[ntm, {___, ___, d_ /; d == 1}]};
Cases[ytm, {b_, {a_}, c_} :> mm[[All, a]]] // Transpose]

we apply it to a slightly modified set of dependent vectors:

m1 = {{1, 2, 0, -3, 1, 0}, {1, 2, 1, -3, 1, 2}, {1, 2, 0, -3, 2, 1}, {3, 6, 1, -9, 4, 3}};
MinimalSublist[m1]
Out[3]= {{1, 0, 1}, {1, 1, 1}, {1, 0, 2}, {3, 1, 4}}

The output is a 4 × 3 matrix whose columns are the selected (linearly independent) columns of m1; you can transpose it to see each selected column as a row:
{{1, 1, 1, 3}, {0, 1, 0, 1}, {1, 1, 2, 4}}
One can also use the standard Mathematica command MatrixRank, as above, to test linear independence.
End of Example 13


 

Contracting Spanning Sets


Theorem 9: If a vector space V is generated by a finite set S, then some subset of S is a basis for V.
If \( S = \varnothing \ \mbox{or } \ S = \{ 0 \} , \) then \( V = \{ 0 \} \) and \( \varnothing \) is a subset of S that is a basis for V; by convention, the empty set ∅ is linearly independent and is the basis of the trivial vector space {0}. Otherwise, S contains a nonzero element.

By assumption, V ≠ {0} is generated by a finite set of vectors, so V = span(v1, v2, … , vn) ≠ {0}. Then, by Theorem 03, there is a subset of {v1, v2, … , vn} consisting of linearly independent vectors that generates V, and this subset is a basis of V.

Example 14: Let us consider a nonsingular matrix \[ {\bf A} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 1&3&2 \end{bmatrix} , \qquad\mbox{with} \quad \det{\bf A} = 9 \ne 0. \] We check with Mathematica:
A = {{1, 2, 3}, {4, 5, 6}, {1, 3, 2}}; Det[A]
9
Extracting its columns, we get three vectors \[ {\bf a}_1 = \left[ \begin{array}{c} 1 \\ 4 \\ 1 \end{array} \right] , \quad {\bf a}_2 = \left[ \begin{array}{c} 2 \\ 5 \\ 3 \end{array} \right] , \quad {\bf a}_3 = \left[ \begin{array}{c} 3 \\ 6 \\ 2 \end{array} \right] . \] These vectors are linearly independent because the determinant of the matrix built from these vectors is not zero. Therefore, they generate the vector space 𝔽3×1 ≌ 𝔽³.

On the other hand, the following matrix \[ {\bf B} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 4&3&2 \end{bmatrix} \] is singular.

B = {{1, 2, 3}, {4, 5, 6}, {4, 3, 2}}; Det[B]
0
Since matrix B has only two linearly independent columns, \[ {\bf b}_1 = \left[ \begin{array}{c} 1 \\ 4 \\ 4 \end{array} \right] , \quad {\bf b}_2 = \left[ \begin{array}{c} 2 \\ 5 \\ 3 \end{array} \right] , \] these two column vectors b₁ and b₂ form a basis for the two-dimensional vector space they span: \[ S = \left\{ {\bf w} = c_1 \left[ \begin{array}{c} 1 \\ 4 \\ 4 \end{array} \right] + c_2 \left[ \begin{array}{c} 2 \\ 5 \\ 3 \end{array} \right] \, : \ c_1 , c_2 \in \mathbb{F} \right\} . \]
End of Example 14
This example motivates the following exercises.

  1. Determine which of the following statements are true and which are false.
    1. The empty set is a vector space.
    2. If A and B are subsets of a vector space V with AB, then span(A) ⊆ span(B),
    3. If A and B are subsets of a vector space V with span(A) ⊆ span(B), then AB,
    4. Every vector space V ≠ {0} contains a subspace U such that UV.
    5. Linear combinations must contain only finite many terms in the sum.
    6. Bases must contain finitely many vectors.
    7. A set containing a single vector must be linearly independent.
  2. Find all possible subsets of the following sets of vectors that form a basis of ℝ2×2.
    1. \( \displaystyle \begin{bmatrix} 1&0 \\ 2& -3 \end{bmatrix} , \quad \begin{bmatrix} 1&2 \\ 0&-3 \end{bmatrix} , \quad \begin{bmatrix} -1&0 \\ 0&2 \end{bmatrix} , \quad \begin{bmatrix} 0&-1 \\ 3&0 \end{bmatrix} ; \)
    2. \( \displaystyle \begin{bmatrix} 1&2 \\ 2&4 \end{bmatrix} , \quad \begin{bmatrix} 1&2 \\ 3&6 \end{bmatrix} , \quad \begin{bmatrix} 1&2 \\ 4&8 \end{bmatrix} ; \)
    3. \( \displaystyle \begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix} , \quad \begin{bmatrix} 1&-2 \\ 3&-4 \end{bmatrix} , \quad \begin{bmatrix} 2&1 \\ 4&3 \end{bmatrix} \)
    4. \( \displaystyle \begin{bmatrix} 1&1 \\ 0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&0 \\ 1&1 \end{bmatrix} , \quad \begin{bmatrix} 3&1 \\ 0&0 \end{bmatrix} , \quad \begin{bmatrix} 3&1 \\ 0&0 \end{bmatrix} . \)
  3. Find all possible subsets of the following sets of vectors that form a basis of ℝ³.
    1. (1, 0, 2),    (1, −2, 3),    (2, 1, 3);
    2. (−2, 3, −1),    (4, −3, −2),    (0, 3, 2),    (1, 2, −1);
    3. (1, 2, 1),    (2, 1, 1),    (3, 1, 1),    (2, 0, 1) .
  4. Let u₁ = (0, 0, 1),    u₂ = (1, 2, 3),    v₁ = (1, 4, 1),    v₂ = (2, −1, 1),    v₃ = (1, 1, 0). The set {v₁, v₂, v₃} is a basis of ℝ³. Determine which vj's could be replaced by u₁, and which vj's could be replaced by both u₁ and u₂, while retaining the basis property.
  5. In the space of polynomials of degree up to 2, V = ℝ≤2[x], let u₁ = x,    u₂ = x²,    v₁ = 2 − x,    v₂ = 3 + x,    v₃ = 1 − x²; the set {v₁, v₂, v₃} is a basis of V. Determine which vj's could be replaced by u₁, and which vj's could be replaced by both u₁ and u₂, while retaining the basis property.
  6. Let u₁ = (1, 1, 1). Expand {u₁} to a basis of ℝ³.
  7. To determine a basis in ℝ³, find the redundant vectors, if any, in the following lists
    1. (2, 0, −1), (−1, 2, 1), (1, 1, 1), (3, 2, 1);
    2. (1, 2, 3), (2, 1, −1), (3, 3, 1), (5, 0, 1);
    3. (−1, 0), (1, 2, 0), (−1, 1, −2), (3, 3, 2);
    4. (4, 5, −2), (3, −2, 1), (0, 1, 1), (1, −3, 2).

 
