Row Space

With every m × n matrix A, we can associate a vector space generated by its rows.
The row space of an m × n matrix A is the span (the set of all possible linear combinations) of its row vectors. It is therefore a subspace of \( \mathbb{R}^n \) when A has real entries, or of \( \mathbb{C}^n \) when A has complex entries.

  Recall that a set of vectors β is said to generate, or span, a vector space V if every element of V can be represented as a linear combination of vectors from β.
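
For concreteness, here is a minimal computational sketch, assuming the Python library SymPy is available; its Matrix.rowspace method returns a list of row vectors spanning the row space of a given matrix.

```python
# Minimal sketch (assuming SymPy): compute a basis for the row space of a
# small integer matrix.
from sympy import Matrix

A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 1]])

basis = A.rowspace()        # list of row vectors spanning the row space
for v in basis:
    print(list(v))          # two independent rows, so the row space is 2-dimensional
```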

Lemma: The nonzero rows of a matrix in reduced row echelon form (Gauss-Jordan form) are linearly independent.

Let A be an m×n matrix in Gauss-Jordan form, and let r1, r2, ... , rk be its nonzero rows. Suppose that there exist scalars a1, a2, ... , ak such that
\[ a_1 {\bf r}_1 + a_2 {\bf r}_2 + \cdots + a_k {\bf r}_k = \left[ 0, 0, \ldots , 0 \right] = {\bf 0}. \]
Since A is in reduced row echelon form, the column containing the leading 1 of r1 has zeroes in all the other rows. Hence the entry of the linear combination in that column equals a1, which forces a1 = 0. The same argument applied to the leading 1 of r2 shows that a2 = 0, and so on. Hence all of the coefficients are zero, and the vectors r1, ... , rk are linearly independent.
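
The lemma can be checked numerically; the following sketch (assuming SymPy) reduces a matrix to its Gauss-Jordan form and verifies that the nonzero rows are linearly independent, since their number equals the rank of the matrix they form.

```python
# Sketch (assuming SymPy): the nonzero rows of the reduced row echelon form
# of a matrix are linearly independent.
from sympy import Matrix

A = Matrix([[1, 2, 1, 3],
            [2, 4, 0, 2],
            [3, 6, 1, 5]])

R, _ = A.rref()             # reduced row echelon (Gauss-Jordan) form
nonzero = [R.row(i) for i in range(R.rows) if any(R.row(i))]
S = Matrix.vstack(*nonzero)

# If the nonzero rows were dependent, the rank of S would be smaller
# than the number of rows.
print(len(nonzero), S.rank())       # both equal 2 here
```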

Theorem: The nonzero rows of the reduced row echelon form R of an m×n matrix A are a basis for the row space of matrix A.

Suppose that A is an m×n matrix and that C is obtained from A by an elementary row operation. We are going to show that both matrices have the same row space.

Let r1, r2, ... , rm be the rows of A, and suppose that C is obtained from A by the elementary row operation Ri ⇾ s Ri with s ≠ 0. Then each vector x from the row space of C is of the form

\begin{align*} {\bf x} &= a_1 {\bf r}_1 + \cdots + a_i \left( s\,{\bf r}_i \right) + \cdots + a_m {\bf r}_m \\ &= a_1 {\bf r}_1 + \cdots + \left( a_i s \right) {\bf r}_i + \cdots + a_m {\bf r}_m \end{align*}
and is therefore also in the row space of A. Conversely, since s ≠ 0, each vector y from the row space of A is of the form
\[ {\bf y} = a_1 {\bf r}_1 + \cdots + \left( \frac{a_i}{s} \right) \left( s\,{\bf r}_i \right) + \cdots + a_m {\bf r}_m \]
and is also in the row space of C. The calculations for the other two types of elementary row operations are analogous.

Hence the matrices A and C have the same row space. Since the reduced row echelon form R is obtained from A by a finite number of elementary row operations, the row space of R coincides with the row space of A. By the lemma, the nonzero rows of R are linearly independent, so they form a basis for the row space of A, which concludes the proof.
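
The theorem can also be illustrated computationally; in the sketch below (assuming SymPy) the nonzero rows of rref(A) are stacked together with the original rows, and the rank does not increase, confirming that the two sets of rows span the same subspace.

```python
# Sketch (assuming SymPy): the nonzero rows of the reduced row echelon form
# span exactly the same space as the rows of the original matrix.
from sympy import Matrix

A = Matrix([[1, 2, 1, 3],
            [2, 4, 0, 2],
            [3, 6, 1, 5]])

R, _ = A.rref()
basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]

stacked = Matrix.vstack(A, *basis)
# Equal ranks: every row of A lies in the span of the rref rows and vice versa.
print(A.rank(), len(basis), stacked.rank())    # 2, 2, 2
```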

Corollary: Row-equivalent matrices have the same row spaces.

Two matrices are row-equivalent if one can be obtained from the other by a sequence of (possibly many) elementary row operations. It suffices to prove the corollary for two matrices A and B that differ by a single row operation; the general statement then follows by applying this case repeatedly. The row spaces of A and B are the spans of the columns of their transposes. For each row operation performed on a matrix, we can define an analogous operation on the columns. One might call these column operations; instead, we will keep calling them row operations, but apply them to the columns of the transposes.

Refer to the columns of \( {\bf A}^{\mathrm T} \) and \( {\bf B}^{\mathrm T} \) as Ai and Bi, 1 ≤ i ≤ m. The row operation that swaps two rows merely permutes the columns of the transposed matrix and has no effect on the set of linear combinations that can be formed from them. The other two operations replace a column by a nonzero multiple of itself, or add a multiple of one column to another; in either case every linear combination of the new columns is also a linear combination of the old ones and conversely, so the span, and hence the row space, is unchanged.
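
As a quick illustration, the sketch below (assuming SymPy and its elementary_row_op helper) applies one row operation of each type to a matrix and checks that the reduced row echelon form, and hence the row space, is unchanged.

```python
# Sketch (assuming SymPy): row-equivalent matrices share the same rref and
# therefore the same row space.
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [0, 1, 1],
            [2, 5, 3]])

B = A.elementary_row_op('n<->m', row1=0, row2=2)          # swap rows 1 and 3
B = B.elementary_row_op('n->kn', row=1, k=5)              # scale row 2 by 5
B = B.elementary_row_op('n->n+km', row=2, k=-3, row2=0)   # add -3*(row 1) to row 3

print(A.rref()[0] == B.rref()[0])     # True: identical rref, identical row space
```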

Theorem: For any m×n matrix A, the column rank and the row rank are equal.

Let A be a matrix of size m × n (with m rows and n columns). Let the column rank of A be r, and let c1, c2, ... , cr be any basis for the column space of A. Place these vectors as the columns of an m × r matrix C. Every column of A can be expressed as a linear combination of the r columns of C, so there is an r × n matrix R such that A = CR; the i-th column of R consists of the coefficients that express the i-th column of A as a linear combination of the columns of C.

Now each row of A is a linear combination of the r rows of R, with coefficients taken from the corresponding row of C. Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma, the row rank of A cannot exceed r. This proves that the row rank of A is less than or equal to the column rank of A.

This result holds for any matrix, so apply it to the transpose of A. Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A, this establishes the reverse inequality, and we obtain the equality of the row rank and the column rank of A. (Also see rank factorization.)
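
A small numerical sketch of this rank factorization, assuming NumPy is available; the column indices chosen for C below are specific to the example matrix.

```python
# Sketch (assuming NumPy) of the factorization A = C R used in the proof:
# C holds a basis of the column space and R holds the coefficients expressing
# each column of A in terms of that basis, so the rows of R span the row space.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

# The first two columns of A are independent and the third is their sum,
# so they serve as a basis C of the column space (column rank r = 2).
C = A[:, [0, 1]]                              # m x r
R = np.linalg.lstsq(C, A, rcond=None)[0]      # r x n, chosen so that C @ R reproduces A

print(np.allclose(A, C @ R))                  # True
print(np.linalg.matrix_rank(A),               # 2  (column rank of A)
      np.linalg.matrix_rank(R))               # 2  (rows of R span the row space)
```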

Another proof: Let A be a matrix of size m × n (with m rows and n columns) with entries from ℝ whose row rank is r. Therefore, the dimension of the row space of A is r. Let x1, x2, ... , xr be a basis of the row space of A. We claim that the vectors Ax1, Ax2, ... , Axr are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, ... , cr:

\[ {\bf 0} = c_1 {\bf A}\,{\bf x}_1 + \cdots + c_r {\bf A}\,{\bf x}_r = {\bf A} \left( c_1 {\bf x}_1 + c_2 {\bf x}_2 + \cdots + c_r {\bf x}_r \right) = {\bf A}\,{\bf v} , \]
where v = c1x1 + c2x2 + ... + crxr. We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since A v = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v,
\[ {\bf 0} = c_1 {\bf x}_1 + c_2 {\bf x}_2 + \cdots + c_r {\bf x}_r . \]
But recall that the xi were chosen as a basis of the row space of A and so are linearly independent. This implies that \( c_1 = c_2 = \cdots = c_r =0 . \) It follows that Ax1, Ax2, ... , Axr are linearly independent.
Now, each Axi is obviously a vector in the column space of A. So Ax1, Ax2, ... , Axr is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A) must be at least r. This proves that the row rank of A is no larger than the column rank of A. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof.
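
Both proofs can be summarized by a one-line numerical check, assuming NumPy: the rank of a matrix equals the rank of its transpose.

```python
# Sketch (assuming NumPy): row rank equals column rank, i.e. rank(A) = rank(A^T).
import numpy as np

rng = np.random.default_rng(0)
# Build a random 5 x 7 matrix of rank 3 as a product of 5 x 3 and 3 x 7 factors.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

print(np.linalg.matrix_rank(A))       # 3  (column rank)
print(np.linalg.matrix_rank(A.T))     # 3  (row rank)
```
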
Example: The span of the empty set \( \varnothing \) consists of a single element, the zero vector 0. Therefore, \( \varnothing \) is linearly independent, and it is a basis for the trivial vector space whose only element is the zero vector. Its dimension is zero. ■

 

Example: In \( \mathbb{R}^n , \) the vectors \( e_1 = [1,0,0,\ldots , 0] , \quad e_2 =[0,1,0,\ldots , 0], \quad \ldots , \quad e_n =[0,0,\ldots , 0,1] \) form a basis for n-dimensional real space, called the standard basis. The dimension of \( \mathbb{R}^n \) is n. ■

 

Example: Let us consider the set of all real \( m \times n \) matrices, and let \( {\bf M}_{i,j} \) denote the matrix whose only nonzero entry is a 1 in the i-th row and j-th column. Then the set \( \left\{ {\bf M}_{i,j} \ : \ 1 \le i \le m , \ 1 \le j \le n \right\} \) is a basis for this vector space of matrices. Its dimension is mn. ■

 

Example: The monomials \( \left\{ 1, x, x^2 , \ldots , x^n \right\} \) form a basis for the vector space of all polynomials of degree at most n. Its dimension is n+1. ■

 

Example: The infinite set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n , \ldots \right\} \) forms a basis for the vector space of all polynomials. ■

 

Theorem: Let V be a vector space and \( \beta = \left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n \right\} \) be a subset of V. Then β is a basis for V if and only if each vector v in V can be expressed uniquely as a linear combination of the vectors in β, that is, can be written in the form

\[ {\bf v} = \alpha_1 {\bf u}_1 + \alpha_2 {\bf u}_2 + \cdots + \alpha_n {\bf u}_n \]
for unique scalars \( \alpha_1 , \alpha_2 , \ldots , \alpha_n . \)
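
A brief numerical sketch of this unique-decomposition property, assuming NumPy; the basis below is a hypothetical choice used only for illustration.

```python
# Sketch (assuming NumPy): if the columns of B form a basis of R^3, then every
# vector v has exactly one coordinate vector alpha with B @ alpha = v.
import numpy as np

B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])          # three linearly independent basis vectors (columns)
v = np.array([2., 3., 4.])

alpha = np.linalg.solve(B, v)         # unique because B is invertible
print(alpha)                          # coordinates of v with respect to this basis
print(np.allclose(B @ alpha, v))      # True
```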