When solving systems of linear equations, it is convenient to rewrite them in vector form A x = b for some m × n matrix A. Then matrix A can be considered as a linear transformation (or operator) mapping column n-vectors into column m-vectors, so A : 𝔽n×1 ⇾ 𝔽m×1. From this point of view, matrix A acts on vectors from left to right, which defines a linear transformation
\[ T_A \,: \ \mathbb{F}^n \to \mathbb{F}^m , \]
where 𝔽 is a field of numbers, either complex ℂ, real ℝ, or rational ℚ. In order to make our presentation friendly, we mostly use the field of real numbers. The direct product 𝔽n is isomorphic both to the column vector space and to the row vector space: 𝔽n ≌ 𝔽n×1 ≌ 𝔽1×n. Therefore, we can also consider an m × n matrix A acting from right to left on row m-vectors as y A, generating the linear map:
\[ \mathbb{F}^{1\times n} \leftarrow \,\mathbb{F}^{1\times m} \ : \,{\bf A} . \]
However, we mostly apply matrices from the left to column vectors, and only occasionally consider the dual situation when matrices act from the right on row vectors.
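To make the two conventions concrete, here is a minimal Mathematica sketch (the 2×3 matrix and the vectors are assumed samples; note that Mathematica represents both row and column vectors as plain lists):
A = {{1, 2, 3}, {4, 5, 6}};  (* a 2x3 matrix, so m = 2, n = 3 *)
x = {1, 0, -1};              (* treated as a column 3-vector *)
y = {2, -1};                 (* treated as a row 2-vector *)
A.x   (* left action: yields the column 2-vector {-2, -2} *)
y.A   (* right action: yields the row 3-vector {-2, -1, 0} *)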

Range or Column Space

When we solve a linear system A x = b for an m×n matrix A, the dimensions m and n alone do not fully describe the solution set. The first step toward understanding this set is to show that the solution set of any homogeneous linear system is actually a vector space. Recall that ℝn consists of all n-tuples, which we represent as column vectors:

\[ {\bf x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \in \mathbb{R}^{n\times 1}, \]
with some real numbers x1, x2, … , xn. If one prefers to work over the field of complex numbers, then we use ℂn, which is similar to ℝn but with the entries x1, x2, … , xn taken as complex numbers. When an m×n matrix
\[ {\bf A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots& \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = \left[ a_{ij} \right] \]
acts on an n-column vector x ∈ 𝔽n×1, we obtain another m-column vector
\[ {\bf A}\,{\bf x} = x_1 \begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2 \begin{bmatrix} a_{12} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n \begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix} . \]
Matrix A can be partitioned according to its columns: \( {\bf A} = \left[ {\bf c}_1 \ {\bf c}_2 \ \cdots \ {\bf c}_n \right] , \) where each column is an m-vector
\[ {\bf c}_1 = \begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} , \quad {\bf c}_2 = \begin{bmatrix} a_{12} \\ \vdots \\ a_{m2} \end{bmatrix} , \quad \cdots \quad , \quad {\bf c}_n = \begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix} . \]
Therefore, the result of the action of matrix A on an n-vector x is a linear combination of the matrix columns:
\begin{equation} \label{EqRange.1} {\bf A} \,{\bf x} = x_1 {\bf c}_1 + x_2 {\bf c}_2 + \cdots + x_n {\bf c}_n . \end{equation}
The coefficients are the entries of x. So applying A to all possible n-column vectors x, we obtain all possible linear combinations of the columns of matrix A. This set is the span of the columns of A, and it is a vector space embedded into ℝm or ℂm, depending on which scalars are used. Recall that a set of vectors β is said to generate or span a vector space V if every element of V can be represented as a linear combination of vectors from β.
The range (also called the column space or image) of an m × n matrix A is the span (the set of all possible linear combinations) of its column vectors. The column space of a matrix is the image, or range, of the corresponding matrix transformation. We denote it by Range(A). It is a subspace of ℝm when A has real entries, or of ℂm when A has complex entries.
Note that we have several alternative names for the same object (the range, in our case). This usually happens when two topics, matrix theory and vector spaces with their transformations, overlap.
The dimension of the column space is called the column rank of matrix A. This corresponds to the maximal number of linearly independent columns of A. A matrix is said to have full column rank if its columns are linearly independent.
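In Mathematica, the column rank can be computed directly with the built-in MatrixRank command; a minimal sketch with an assumed sample matrix:
A = {{0, 1}, {2, 3}, {4, 5}};
MatrixRank[A]   (* returns 2: both columns are linearly independent, so A has full column rank *)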

Since \( {\bf x}^{\mathrm T} {\bf A}^{\mathrm T} = \left( {\bf A}\, {\bf x} \right)^{\mathrm T} , \) the column space of a matrix A equals the row space of its transpose AT (or, in general, its adjoint \( {\bf A}^{\ast} = \overline{\bf A}^{\mathrm T} \) ), so a basis can be computed by reducing AT to row echelon form. However, this is not the most efficient way.
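For illustration, here is a sketch of that transpose-based recipe in Mathematica (the matrix is an assumed sample); the nonzero rows of the reduced transpose form a basis of the column space:
A = {{0, 1}, {2, 3}, {4, 5}};
DeleteCases[RowReduce[Transpose[A]], {0 ..}]
(* returns {{1, 0, -1}, {0, 1, 2}}: a basis of the column space of A, written as rows *)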

Example 1: Consider the matrix
\[ {\bf A} = \begin{bmatrix} 0&1 \\ 2&3 \\ 4&5 \end{bmatrix} = \left[ {\bf c}_1 , {\bf c}_2 \right] , \quad\mbox{where} \quad {\bf c}_1 = \begin{bmatrix} 0 \\ 2 \\ 4 \end{bmatrix} , \quad {\bf c}_2 = \begin{bmatrix} 1 \\ 3 \\ 5 \end{bmatrix} . \]
Its range consists of all combinations of the two columns---any x1 times the first column plus any x2 times the second column:
\[ {\bf A} \, {\bf x} = \begin{bmatrix} 0&1 \\ 2&3 \\ 4&5 \end{bmatrix} \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_1 \begin{bmatrix} 0 \\ 2 \\ 4 \end{bmatrix} + x_2 \begin{bmatrix} 1 \\ 3 \\ 5 \end{bmatrix} = x_1 {\bf c}_1 + x_2 {\bf c}_2 . \]
Those combinations fill up a plane in ℝ3. If the right side b lies on that plane, then it is one of the combinations and x = (x1, x2) is a solution to Ax = b. If b is not in the column space, then there is no solution to our 3 equations in 2 unknowns.
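A quick way to test membership in the column space with Mathematica is to attempt to solve the system; a small sketch, with the right sides chosen as assumed samples:
A = {{0, 1}, {2, 3}, {4, 5}};
LinearSolve[A, {1, 5, 9}]   (* b = c1 + c2 lies on the plane: returns the solution {1, 1} *)
LinearSolve[A, {1, 0, 0}]   (* this b is off the plane: LinearSolve reports that no solution exists *)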
End of Example 1
Example 2: Let
\[ {\bf A} = \begin{bmatrix} 5&2&1 \\ 0&0&3 \end{bmatrix} = \left[ {\bf c}_1 , {\bf c}_2 , {\bf c}_3 \right] , \quad\mbox{where} \quad {\bf c}_1 = \begin{bmatrix} 5 \\ 0 \end{bmatrix} , \quad {\bf c}_2 = \begin{bmatrix} 2 \\ 0 \end{bmatrix} , \quad {\bf c}_3 = \begin{bmatrix} 1 \\ 3 \end{bmatrix} . \]
Any 2-vector b = (b1 , b2)T can be expressed as a linear combination of columns c1 and c3:
\[ {\bf A}\, {\bf x} = x_1 \begin{bmatrix} 5 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 2 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \]
if we choose
\[ x_1 = \frac{b_1}{5} - \frac{b_2}{15} , \quad x_2 =0 , \quad x_3 = \frac{b_2}{3} . \]
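This choice can be verified symbolically with Mathematica; a minimal sketch (b1 and b2 are kept as symbols):
A = {{5, 2, 1}, {0, 0, 3}};
x = {b1/5 - b2/15, 0, b2/3};
Simplify[A.x]   (* returns {b1, b2}, confirming that A.x = b for every right side *)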
End of Example 2

Observation: If A is an m×n matrix, with columns c1, c2, ... , cn, and b is in ℝm, the vector equation

\[ {\bf A}\, {\bf x} = {\bf b} \]
has the same solution set as the vector equation
\[ x_1 {\bf c}_1 + x_2 {\bf c}_2 + \cdots + x_n {\bf c}_n = {\bf b} \]
which, in turn, has the same solution set as the system of linear equations whose augmented matrix is
\[ \left[ {\bf A} \,\big\vert \, {\bf b} \right] = \left[ {\bf c}_1 \ {\bf c}_2 \ \cdots \ {\bf c}_n \ {\bf b} \right] . \]
Equation (1) shows that the column space of the matrix A can be thought of as the set of all possible products A x, i.e.,
\[ \mbox{Column space}({\bf A}) = \left\{ {\bf A}\,{\bf x} \,\vert\ {\bf x} \in \mathbb{R}^{n\times 1} \right\} . \]
An insight that follows from the formula above is that the linear combination of the columns of A ∈ 𝔽m×n with coefficients from the vector x ∈ 𝔽n×1 is zero exactly when x belongs to the null space (or kernel) of A. Thus, we can use null space calculations to identify redundant vectors in a set of column vectors, as in the sketch below.
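A minimal sketch of this idea in Mathematica, reusing the matrix of Example 2:
A = {{5, 2, 1}, {0, 0, 3}};
NullSpace[A]   (* returns {{-2, 5, 0}}: the relation -2 c1 + 5 c2 = 0 shows that c2 is a multiple of c1, hence redundant *)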

Theorem 1: The system A x = b is consistent if and only if b is in the range (column space) of A.

When b is in the range of A, it is a combination of the columns, and the coefficients in that combination form a solution x to the system Ax = b. Conversely, any solution x of a consistent system supplies the coefficients expressing b as a linear combination of the columns of A.
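Theorem 1 also gives a practical consistency test: compare the rank of A with the rank of the augmented matrix. A sketch in Mathematica, with the matrix of Example 1 as an assumed sample:
A = {{0, 1}, {2, 3}, {4, 5}};
b = {1, 0, 0};
MatrixRank[A] == MatrixRank[Join[A, Transpose[{b}], 2]]
(* False: appending b raises the rank, so b is outside the range and A.x == b is inconsistent *)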
Example 3: Let us pick a 4×4 matrix with random integer entries:
mat = RandomInteger[10, {4, 4}]
{{9, 6, 1, 0}, {1, 8, 5, 8}, {1, 4, 8, 8}, {0, 5, 4, 6}}
Note that when you execute the above Mathematica command, you will most likely get a different random matrix. As usual, we choose the first pivot as the entry "9" in the upper left corner. Your software will also pick the same pivot because it is the largest entry in the first column. To eliminate the entries to the right of the pivot, we multiply the random matrix on the right by an elementary matrix:
e1 = {{1, -2/3, -1/9, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}} ;
A1 = mat.e1
{{9, 0, 0, 0}, {1, 22/3, 44/9, 8}, {1, 10/3, 71/9, 8}, {0, 5, 4, 6}}
This gives
\[ {\bf A}_1 = \begin{bmatrix} 9&0&0&0 \\ 1& \frac{22}{3} & \frac{44}{9}& 8 \\ 1&\frac{10}{3}& \frac{71}{9} & 8 \\ 0&5&4&6 \end{bmatrix} . \]
Our next pivot is "22/3"; to eliminate the entries to its right, we multiply the latter matrix on the right by the elementary matrix:
e2 = {{1, 0, 0, 0}, {0, 1, -2/3, -12/11}, {0, 0, 1, 0}, {0, 0, 0, 1}};
A2 = A1.e2
{{9, 0, 0, 0}, {1, 22/3, 0, 0}, {1, 10/3, 17/3, 48/11}, {0, 5, 2/3, 6/11}}
Finally, we choose the third diagonal entry as the next pivot and multiply the latter matrix on the right by the elementary matrix:
e3 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, -144/187}, {0, 0, 0, 1}};
A3 = A2.e3
{{9, 0, 0, 0}, {1, 22/3, 0, 0}, {1, 10/3, 17/3, 0}, {0, 5, 2/3, 6/187}}
This gives the lower triangular matrix, which we denote by A3:
\[ {\bf A}_3 = \begin{bmatrix} 9&0&0&0 \\ 1&\frac{22}{3}&0&0 \\ 1&\frac{10}{3} & \frac{17}{3} & 0 \\ 0&5&\frac{2}{3} & \frac{6}{187} \end{bmatrix} . \]
The product of elementary matrices yields the upper triangular matrix:
\[ {\bf E} = \begin{bmatrix} 1&-\frac{2}{3}&\frac{1}{3}&\frac{8}{17} \\ 0&1&-\frac{2}{3} & -\frac{108}{187} \\ 0&0&1&-\frac{144}{187} \\ 0&0&0&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf E}^{-1} = \begin{bmatrix} 1&\frac{2}{3}&\frac{1}{9}&0 \\ 0&1&\frac{2}{3}&\frac{12}{11} \\ 0&0&1&\frac{144}{187} \\ 0&0&0&1 \end{bmatrix} . \]
This is actually an LU decomposition, which we check with Mathematica:
EE = e1.e2.e3;
A3.Inverse[EE]
We compare it with the output provided by Mathematica:
{lu, p, c} = LUDecomposition[mat]
l = lu SparseArray[{i_, j_} /; j < i -> 1, {4, 4}] + IdentityMatrix[4]
u = lu SparseArray[{i_, j_} /; j >= i -> 1, {4, 4}]
Then the product l.u restores the original matrix up to a row permutation:
l.u
{{1, 8, 5, 8}, {1, 4, 8, 8}, {0, 5, 4, 6}, {9, 6, 1, 0}}
Since the chosen matrix is invertible, its columns provide a basis for its range. Therefore, the range is all of ℝ4. If we consider another matrix
\[ {\bf B} = \begin{bmatrix} 9&6&1&0 \\ 1&8 &5& 8 \\ 7&5&3&2 \\ 0&5&4&6 \end{bmatrix} , \]
we use exactly the same approach:
B = {{9, 6, 1, 0}, {1, 8, 5, 8}, {7, 5, 3, 2}, {0, 5, 4, 6}};
e1 = {{1, -2/3, -1/9, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}} ;
B1 = B.e1
{{9, 0, 0, 0}, {1, 22/3, 44/9, 8}, {7, 1/3, 20/9, 2}, {0, 5, 4, 6}}
The next step gives
e2 = {{1, 0, 0, 0}, {0, 1, -2/3, -12/11}, {0, 0, 1, 0}, {0, 0, 0, 1}};
B2 = B1.e2
{{9, 0, 0, 0}, {1, 22/3, 0, 0}, {7, 1/3, 2, 18/11}, {0, 5, 2/3, 6/11}}
Finally,
e3 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, -9/11}, {0, 0, 0, 1}};
B3 = B2.e3
{{9, 0, 0, 0}, {1, 22/3, 0, 0}, {7, 1/3, 2, 0}, {0, 5, 2/3, 0}}
This gives the lower triangular matrix, which we denote by B3:
\[ {\bf B}_3 = \begin{bmatrix} 9&0&0&0 \\ 1&\frac{22}{3}&0&0 \\ 7&\frac{1}{3} &2 & 0 \\ 0&5&\frac{2}{3} &0 \end{bmatrix} . \]
The product of elementary matrices yields the upper triangular matrix:
\[ {\bf E} = \begin{bmatrix} 1&-\frac{2}{3}&\frac{1}{3}&\frac{5}{11} \\ 0&1&-\frac{2}{3} & -\frac{6}{11} \\ 0&0&1&-\frac{9}{11} \\ 0&0&0&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf E}^{-1} = \begin{bmatrix} 1&\frac{2}{3}&\frac{1}{9}&0 \\ 0&1&\frac{2}{3}&\frac{12}{11} \\ 0&0&1&\frac{9}{11} \\ 0&0&0&1 \end{bmatrix} . \]
This is actually an LU decomposition, which we check with Mathematica:
EE = e1.e2.e3;
B3.Inverse[EE]
Hence we see that the first three columns
\[ \left\{ \begin{bmatrix} 9 \\ 1 \\ 7 \\ 0 \end{bmatrix} , \quad \begin{bmatrix} 0 \\ \frac{22}{3} \\ \frac{1}{3} \\ 5 \end{bmatrix} , \quad \begin{bmatrix} 0 \\ 0 \\ 2 \\ \frac{2}{3} \end{bmatrix} \right\} \]
form a basis for the column space. Actually, the first three columns of the original matrix B can also be chosen as basis vectors because the pivots belong to these columns.
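As a quick independent check, Mathematica confirms the dimension of the column space of B (a sketch, assuming B as above):
B = {{9, 6, 1, 0}, {1, 8, 5, 8}, {7, 5, 3, 2}, {0, 5, 4, 6}};
MatrixRank[B]   (* returns 3, so the range of B is a three-dimensional subspace of R^4 *)
RowReduce[B]    (* the pivots land in the first three columns, matching the basis found above *)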
End of Example 3

Our next very important question is how to determine the dimension of the range. It equals the number of linearly independent column vectors of matrix A. In other words, we need to determine the column rank of a rectangular matrix. Recall that the dimension of a vector space V is the cardinality (i.e., the number of vectors) of a basis of V over its field of scalars.

A common approach to finding a basis of the column space of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence they do not change the row rank). Mathematica has no built-in command to determine a row echelon form, but it has the RowReduce command to determine the (unique for each matrix) reduced row echelon form, or Gauss-Jordan form. From a computational point of view, rref is more expensive than a plain row echelon form (ref for short). However, for theoretical purposes, we will use the Gauss-Jordan form. Generally speaking, we cannot use rref or ref directly for the column space because elementary row operations may change the column space. Hence, we need to use elementary column operations to preserve the column space. These operations are equivalent to multiplication on the right by elementary matrices. So far, they have not been in use because we focused on solving linear systems of equations, for which elementary column operations are not suitable.

Example 4: Consider two matrices
\[ {\bf A} = \begin{bmatrix} 2& -1 \\ -4&2 \end{bmatrix} \qquad {\bf B} = \begin{bmatrix} 1& -1/2 \\ 0&0 \end{bmatrix} , \]
then B results from A by the elementary row operations R2 → R2 + 2 R1 followed by R1 → ½ R1. However, these two matrices have different column spaces because
\[ \mbox{Column}\left({\bf A}\right) = \left\{ \begin{bmatrix} k \\ -2\,k \end{bmatrix} \, : \ k \in \mathbb{R} \right\} , \qquad \mbox{Column}\left({\bf B}\right) = \left\{ \begin{bmatrix} k \\ 0 \end{bmatrix} \, : \ k \in \mathbb{R} \right\} . \qquad \blacksquare \]
End of Example 4

There is nothing special about multiplication by elementary matrices (from the left or right; they could be of different dimensions). What is important about these matrices is that they are invertible. Actually, every nonsingular matrix is a product of elementary matrices. It turns out that multiplication by an arbitrary invertible matrix changes neither the row rank nor the column rank, because to each such operation there corresponds an isomorphism, or linear bijection. Recall that an isomorphism between two vector spaces V and U is a one-to-one and onto correspondence; this means that there exists a linear map T : V → U such that its inverse T-1 also exists. When V and U are finite dimensional and ordered bases are fixed in each space, any such linear transformation is equivalent to multiplication by an invertible square matrix, and vice versa. An isomorphism cannot change the dimension of a space, so it exists only between vector spaces of the same dimension. Therefore, we come to the conclusion that the row rank and the column rank are the same; their common value, called the rank, equals the number of pivots (or basic columns) and also the number of nonzero rows in the echelon form.

Theorem 2: Let A be an m×n matrix, so it can be considered as a linear transformation from ℝn×1 into ℝm×1. If β = { v1, v2, ..., vn } is a basis for ℝn, then the range (the column space) of A is spanned by the vectors Aβ = { Av1, Av2, ..., Avn }.

Clearly, Avi belongs to the column space for each i. Since the range is a subspace of ℝm, it contains
\[ \mbox{span}{\bf A}(\beta ) = \mbox{span}\left\{ {\bf A}{\bf v}_1 , {\bf A}{\bf v}_2 , \ldots , {\bf A}{\bf v}_n \right\} . \]
Now suppose that w belongs to the column space. Then w = Av for some v∈ℝn. Because β is a basis for ℝn, we have
\[ {\bf v} = a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_n {\bf v}_n \]
for some scalars a1, a2, ..., an. Since matrix multiplication is a linear operation, we have
\[ {\bf w} = {\bf A}\, {\bf v} = a_1 {\bf A}\,{\bf v}_1 + a_2 {\bf A}\,{\bf v}_2 + \cdots + a_n {\bf A}\,{\bf v}_n \in \mbox{span} \left( {\bf A}\,\beta \right) . \qquad \blacksquare \]

Theorem (Column space basic theorem): When the Gaussian elimination procedure is applied to an m×n real matrix A, its pivot columns form a basis for the column space (range) of matrix A.    ▣

Although row reduction is enough to identify the pivots, we will use the Gauss-Jordan form for simplicity. Let R be the reduced row echelon form of an m×n real matrix A. Suppose that R has r pivots. We extract the corresponding pivot columns from matrix A to form the m×r matrix Ar; similarly, we build the m×r matrix from the pivot columns of R and denote it by Rr. Obviously, the latter is a full column rank matrix because every column has exactly one nonzero entry, a "1," and these entries are situated in different rows because they identify the locations of the pivots. We need to show that matrix Ar has linearly independent columns. Using its partition, we represent Ar as a vector of columns:
\[ {\bf A}_r = \left[ {\bf c}_1 \ {\bf c}_2 \ \cdots \ {\bf c}_r \right] . \]
If their linear combination is zero, we have
\[ x_1 {\bf c}_1 + x_2 {\bf c}_2 + \cdots + x_r {\bf c}_r = {\bf A}_r {\bf x} = {\bf 0}_m , \]
where x is a column vector of length r. Since the linear vector equation \( {\bf A}_r {\bf x} = {\bf 0}_m \) has exactly the same solution set as \( {\bf R}_r {\bf x} = {\bf 0}_m , \) we conclude that all entries of x are zeroes. Hence the column vectors of matrix Ar are linearly independent, and the column rank is at least r.

To finish the proof, we need to show that if we append any other column of A to Ar, the resulting matrix will contain linearly dependent column vectors. So we form a linear combination of these vectors and equate it to zero:

\[ y_1 {\bf c}_1 + y_2 {\bf c}_2 + \cdots + y_r {\bf c}_r + y_{r+1} {\bf v} = \left[ {\bf A}_r \big\vert {\bf v} \right] {\bf y} = {\bf 0}_m , \]
where \( \left[ {\bf A}_r \big\vert {\bf v} \right] \) is the augmented matrix formed from Ar by appending an extra column vector v of A that does not contain a pivot. Applying elementary row operations (which, as we know, do not change the solution set) to this augmented matrix, we reduce it to Gauss-Jordan form:
\[ \left[ {\bf A}_r \big\vert {\bf v} \right] \, \sim \, \left[ {\bf R}_r \big\vert {\bf u} \right] , \]
where u is a column vector of the reduced row echelon form that does not contain a pivot. This vector u appears in matrix R somewhere to the right of a pivot column p that has the same number of leading zeroes. Therefore, vector u is a linear combination of the pivot columns of matrix R situated to its left. Since the solution sets of the two systems of equations
\[ \left[ {\bf A}_r \big\vert {\bf v} \right] {\bf y} = {\bf 0}_m \qquad\mbox{and} \qquad \left[ {\bf R}_r \big\vert {\bf u} \right] {\bf z} = {\bf 0}_m \]
are the same, we conclude that the augmented matrix \( \left[ {\bf A}_r \big\vert {\bf v} \right] \) has linearly dependent columns.

Corollary: The column rank of a matrix is the number of pivots in its row echelon form.    ▣
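The corollary translates into a short Mathematica recipe: locate the pivot columns of the reduced row echelon form, then take the corresponding columns of the original matrix. A sketch, using the matrix A of Example 4 above as an assumed sample:
A = {{2, -1}, {-4, 2}};   (* the matrix of Example 4 *)
rref = RowReduce[A];
pivotCols = Flatten[FirstPosition[#, 1] & /@ DeleteCases[rref, {0 ..}]];
Transpose[A][[pivotCols]]   (* returns {{2, -4}}: the pivot column of the ORIGINAL matrix spans Column(A) *)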

Example: Consider the linear map \( T \,: \,\mathbb{R}^4 \,\to \, \mathbb{R}^6 \) given by
\[ \begin{split} T(x,y,z,w) &= \left[ 2\,x+6\,y +z-w , \ 3\, x - 17\,y +3\,z -w , \ x +7\,y +5\,z + 3\,w , \right. \\ & \quad \left. 2\,x + 26\,y +3\,z +w , \ x -3\,y +4\,z + 2\,w, \ x +29\,y -z -w \right] . \end{split} \]
We first find the 6×4 matrix A that represents T as matrix multiplication. The image of T is the subspace of ℝ6 spanned by the columns of matrix A. We will find a basis for the range of T by applying RowReduce to the transpose of A and taking its nonzero rows.
T[x_, y_, z_, w_] := {2 x + 6 y + z - w, 3 x - 17 y + 3 z - w, x + 7 y + 5 z + 3 w, 2 x + 26 y + 3 z + w, x - 3 y + 4 z + 2 w, x + 29 y - z - w}
(A = Transpose[{T[1, 0, 0, 0], T[0, 1, 0, 0], T[0, 0, 1, 0], T[0, 0, 0, 1]}]) // MatrixForm
\( \begin{bmatrix} 2&6&1&-1 \\ 3&-17&3&-1 \\ 1&7&5&3 \\ 2&26&3&1 \\ 1&-3&4&2 \\ 1&29&-1&-1 \end{bmatrix} \)
Now we apply RowReduce command (which is Mathematica's version of rref) to the transposed matrix to obtain the basis of the range, written as row vectors:
(ImT = RowReduce[Transpose[A]]) // MatrixForm
\( \begin{bmatrix} 1&0&0&\frac{5}{3} & -\frac{1}{3}& 2 \\ 0&1&0&-\frac{2}{3}& \frac{1}{3}& -1 \\ 0&0&1&\frac{2}{3}&\frac{2}{3}&0 \\ 0&0&0&0&0&0 \end{bmatrix} \)
From the matrix ImT above, we see that the subspace Im(T) has dimension three, with the first three rows of the matrix ImT as a basis. Thus, any vector in Im(T) can be expressed as a linear combination of the three vectors:
\[ \left\{ \left\langle 1, 0,0, \frac{5}{3} , -\frac{1}{3} , 2 \right\rangle , \ \left\langle 0, 1, 0, -\frac{2}{3} , \frac{1}{3} , -1 \right\rangle , \ \left\langle 0, 0, 1, \frac{2}{3} , \frac{2}{3} , 0 \right\rangle \right\} . \]
Multiplying by 3, we get an integer-valued basis for the column space:
\[ \left\{ \left\langle 3, 0,0,5,-1,6 \right\rangle , \ \left\langle 0, 3, 0, -2, 1, -3 \right\rangle , \ \left\langle 0, 0, 3, 2, 2, 0 \right\rangle \right\} . \]
So the column space for 6×4 matrix A is a three dimensional subspace of ℝ6.     ■

Linear Independence of Matrix Columns


Suppose that we are given a matrix equation A x = 0, where A ∈ 𝔽m×n is specified by its column vectors, A = [a1, a2, … , an], and x ∈ 𝔽n×1. The matrix equation can then be written as the vector equation
\[ x_1 {\bf a}_1 + x_2 {\bf a}_2 + \cdots + x_n {\bf a}_n = {\bf 0} . \]
Hence the columns of A are linearly independent if and only if this equation admits only the trivial solution x = 0, that is, when the null space of A is trivial.
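This is easy to check in Mathematica; a minimal sketch with an assumed sample matrix:
A = {{1, 0}, {0, 1}, {1, 1}};
NullSpace[A] === {}   (* True: only the trivial solution, so the columns of A are linearly independent *)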
Theorem 4: A nonempty set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) of r nonzero vectors in a vector space V is linearly independent if and only if the matrix of the column-vectors from S has rank r.
   
Example 12: Consider the set
\[ S = \left\{ (1,3,-4,2),\ (2,2,-3,5), \ (1,-3,2,-4),\ (-1,3,1,1) \right\} \]
in ℝ4. To determine whether S is linearly independent, we must show that the only linear combination of vectors in S that equals the zero vector is the one in which all the coefficients are zero. Suppose that a1, a2, a3, and a4 are scalars such that
\[ a_1 (1,3,-4,2) + a_2 (2,2,-3,5) + a_3 (1,-3,2,-4) + a_4 (-1,3,1,1) = (0,0,0,0) . \]
Equating the corresponding coordinates of the vectors on the left and the right sides of this system of equations, we obtain the following system of linear equations
\begin{align*} a_1 + 2\,a_2 + a_3 - a_4 &= 0 , \\ 3\,a_1 + 2\,a_2 -3\,a_3 + 3\, a_4 &= 0, \\ -4\,a_1 -3\, a_2 +2\,a_3 + a_4 &= 0, \\ 2\,a_1 + 5\, a_2 -4\, a_3 + a_4 &= 0 . \end{align*}
We build a corresponding matrix
\[ {\bf A} = \begin{bmatrix} \phantom{-}1 & \phantom{-}2 & \phantom{-}1 & -1 \\ \phantom{-}3 & \phantom{-}2& -3& \phantom{-}3 \\ -4& -3& \phantom{-}2& \phantom{-}1 \\ \phantom{-}2 &\phantom{-}5 &-4& \phantom{-}1 \end{bmatrix} \]
Now we ask Mathematica for help:
MatrixRank[{{1, 2, 1, -1}, {3, 2, -3, 3}, {-4, -3, 2, 1}, {2, 5, -4, 1}}]
Since its rank is 4, the only solution to the above system is a1=a2=a3=a4=0, and so S is linearly independent.    ■
End of Example 12
Theorem 5: An n × n real matrix A is invertible if and only if its columns are linearly independent, in which case they form a basis of 𝔽n×1.
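A quick numerical illustration of Theorem 5 in Mathematica (the 2×2 matrix is an assumed sample):
A = {{2, 1}, {1, 1}};
Det[A] =!= 0         (* True: A is invertible *)
MatrixRank[A] == 2   (* True: the two columns are linearly independent and form a basis of R^(2x1) *)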
   
Example 13:    ■
End of Example 13
Theorem 6: Let A ∈ 𝔽m×n and B ∈ 𝔽m×p. The column space of matrix B is a subspace of the column space of matrix A precisely when B can be factored through A: \[ \mbox{ColSpace}({\bf B}) \subseteq \mbox{ColSpace}({\bf A}) \qquad \iff \qquad \mathbf{A}\,{\bf X} = {\bf B} \] for some matrix X ∈ 𝔽n×p. Moreover, if the columns of A are linearly independent, then X is unique.
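A sketch of Theorem 6 in Mathematica (both matrices are assumed samples; LinearSolve accepts a matrix right-hand side and solves A.X == B column by column):
A = {{1, 0}, {0, 1}, {1, 1}};   (* 3x2 matrix with linearly independent columns *)
B = {{1, 2}, {1, 0}, {2, 2}};   (* each column of B lies in the column space of A *)
X = LinearSolve[A, B]           (* the unique solution {{1, 2}, {1, 0}} *)
A.X == B                        (* True *)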
   
Example 14:    ■
End of Example 14