The rank of a matrix was first defined in 1878 by Georg Frobenius (1849--1917), although he defined it using determinants rather than in the way we introduce it. Frobenius was a German mathematician who received his doctorate from, and later taught at, the University of Berlin. He was best known for his contributions to group theory and differential equations. Frobenius used matrices in his work on group representations.

Rank

In a previous section, we discussed the row space, which is spanned by the rows of an m×n matrix A; its dimension is called the row rank. Similarly, working with columns, we showed that, for any m×n matrix A, the dimension of its range (which is also called the column space) is called the column rank. We now show that these two ranks are actually equal to each other in the following theorem.

  Recall that a set of vectors β is said to generate or span a vector space V if every element of V can be represented as a linear combination of vectors from β.

Theorem 1: For any m×n matrix A, the column rank and the row rank are equal.

Let A be a matrix of size m × n (with m rows and n columns). Let the column rank of A be r and let c1, c2, ... , cr be any basis for the column space of A. Place these as the columns of an m × r matrix C. Every column of A can be expressed as a linear combination of the r columns in C. This means that there is an r × n matrix R such that A = CR. R is the matrix whose i-th column is formed from the coefficients giving the i-th column of A as a linear combination of the r columns of C. Now, each row of A is given by a linear combination of the r rows of R. Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma, the row rank of A cannot exceed r. This proves that the row rank of A is less than or equal to the column rank of A. This result can be applied to any matrix, so apply the result to the transpose of A. Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A, this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of A. (Also see rank factorization.)
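The factorization A = CR used in this proof can be made concrete in Mathematica. The following minimal sketch (the 3×3 matrix is chosen arbitrarily for illustration) builds C from the pivot columns of A and R from the nonzero rows of RowReduce[A], then verifies A = CR and the equality of the two ranks:
A = {{1, 2, 3}, {2, 4, 6}, {1, 0, 1}};
r = MatrixRank[A];
R = Take[RowReduce[A], r];                    (* the r nonzero rows of the RREF *)
pivots = Flatten[FirstPosition[#, 1] & /@ R]; (* positions of the leading 1's *)
C1 = A[[All, pivots]];                        (* pivot columns of A form C *)
C1.R == A                                     (* True: A = C.R *)
MatrixRank[A] == MatrixRank[Transpose[A]]     (* True: row rank = column rank *)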

Another proof: Let A be a matrix of size m × n (with m rows and n columns) with entries from ℝ whose row rank is r. Therefore, the dimension of the row space of A is r. Let x1, x2, ... , xr be a basis of the row space of A. We claim that the vectors Ax1, Ax2, ... , Axr are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, ... , cr:

\[ {\bf 0} = c_1 {\bf A}\,{\bf x}_1 + \cdots + c_r {\bf A}\,{\bf x}_r = {\bf A} \left( c_1 {\bf x}_1 + c_2 {\bf x}_2 + \cdots + c_r {\bf x}_r \right) = {\bf A}\,{\bf v} , \]
where v = c1x1 + c2x2 + ... + crxr. We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since A v = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v,
\[ {\bf 0} = c_1 {\bf x}_1 + c_2 {\bf x}_2 + \cdots + c_r {\bf x}_r . \]
But recall that the xi were chosen as a basis of the row space of A and so are linearly independent. This implies that \( c_1 = c_2 = \cdots = c_r =0 . \) It follows that Ax1, Ax2, ... , Axr are linearly independent.
Now, each Axi is obviously a vector in the column space of A. So Ax1, Ax2, ... , Axr is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A) must be at least r. This proves that the row rank of A is no larger than the column rank of A. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof.
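This argument, too, is easy to test numerically. In the hedged Mathematica sketch below (the matrix is chosen arbitrarily), the nonzero rows of the reduced row echelon form serve as a basis x1, x2 of the row space, and the images A.xi come out linearly independent:
A = {{1, 2, 3}, {2, 4, 6}, {1, 0, 1}};
xs = Take[RowReduce[A], MatrixRank[A]];   (* a basis of the row space of A *)
MatrixRank[Map[A.# &, xs]]                (* 2: the vectors A.xi are independent *)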

Another proof: If A = 0, then both the row space and the column space of A are zero-dimensional. Suppose A ≠ 0 and its rank is r. Then its column space is spanned by r column vectors of A:

\[ {\bf c}_1 , {\bf c}_2 , \ldots , {\bf c}_r \]
that comprise a basis for the range of A. We build an m×r matrix B from these basis column vectors.

Since each column vector of A is a linear combination of the columns of B, there exists an r×n matrix X such that A = BX. Next, we partition the n×r matrix XT into its column vectors:

\[ {\bf X}^{\mathrm T} = \left[ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_r \right] . \]
This allows us to write
\[ {\bf A}^{\mathrm T} = {\bf X}^{\mathrm T} {\bf B}^{\mathrm T} = {\bf u}_1 {\bf c}_1^{\mathrm T} + {\bf u}_2 {\bf c}_2^{\mathrm T} + \cdots + {\bf u}_r {\bf c}_r^{\mathrm T} . \]
For any m-vector x, we have
\[ {\bf A}^{\mathrm T} {\bf x} = {\bf X}^{\mathrm T} {\bf B}^{\mathrm T} {\bf x} = \left( {\bf c}_1^{\mathrm T} {\bf x} \right) {\bf u}_1 + \left( {\bf c}_2^{\mathrm T} {\bf x} \right) {\bf u}_2 + \cdots + \left( {\bf c}_r^{\mathrm T} {\bf x} \right) {\bf u}_r \in \mbox{span} \left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_r \right\} . \]
Hence, the dimension of the row space of A is less than or equal to the dimension of its column space. Applying the same argument to AT yields the reverse inequality, and we obtain the required equality.
Example 1: The infinite set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n , \ldots \right\} \) forms a basis for the space of all polynomials.
End of Example 1
If A is an m×n matrix, then its rank is the maximum number of linearly independent row vectors or column vectors. This integer is denoted by rank(A) or rank A.
In other words, the rank of a matrix is the common dimension of its row space and column space.

Observation 1: Elementary row operations and elementary column operations on a matrix are rank preserving.

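As a quick illustration of Observation 1 (the matrix below is chosen arbitrarily), an elementary row operation and a column interchange leave MatrixRank unchanged:
A = {{1, 3, 4, -1}, {2, -2, 0, 6}, {1, 5, 6, -3}};
MatrixRank[A]                      (* 2 *)
B = A; B[[2]] = B[[2]] - 2 B[[1]]; (* row operation R2 -> R2 - 2 R1 *)
MatrixRank[B]                      (* still 2 *)
MatrixRank[A[[All, {2, 1, 3, 4}]]] (* column interchange: still 2 *)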

Observation 2: The rank of any matrix equals the maximum number of its linearly independent columns or rows; that is, the rank of a matrix is the dimension of the subspace generated either by its columns or by its rows.


Observation 3: The rank of any matrix equals the number of pivots; that is, the rank of a matrix is the number of leading 1's in its Gauss-Jordan form.

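A short Mathematica check of Observation 3 (the matrix is arbitrary): counting the nonzero rows of the Gauss-Jordan form gives the same number as MatrixRank:
A = {{1, 2}, {2, 4}, {3, 6}};
RowReduce[A]                         (* {{1, 2}, {0, 0}, {0, 0}} *)
Count[RowReduce[A], Except[{0 ..}]]  (* 1 nonzero row, i.e., one pivot *)
MatrixRank[A]                        (* 1 *)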

Theorem 2: Let A and B be two matrices of dimensions m×n and n×k, so their product AB exists. Then

\[ \mbox{rank} {\bf AB} \le \min \left\{ \mbox{rank} {\bf A} , \mbox{rank} {\bf B} \right\} . \]
Every column of the product AB is a linear combination of the columns of A, so the column space of AB is contained in the column space of A; hence rank(AB) ≤ rank(A). Similarly, every row of AB is a linear combination of the rows of B, so the row space of AB is contained in the row space of B; hence rank(AB) ≤ rank(B). Combining the two inequalities gives the claim.
Example 2: Consider two matrices
\[ {\bf A} = \begin{bmatrix} 1 & -1 \\ -1&1 \end{bmatrix} \qquad \mbox{and} \qquad {\bf B} = \begin{bmatrix} 1 & 1 \\ 1&1 \end{bmatrix} \]
of rank 1. Their product
\[ {\bf A}\, {\bf B}= \begin{bmatrix} 1 & -1 \\ -1&1 \end{bmatrix} \, \begin{bmatrix} 1 & 1 \\ 1&1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0&0 \end{bmatrix} \]
has rank zero, even though each of the factors has rank one. This shows that the inequality in Theorem 2 can be strict.
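A one-line Mathematica confirmation:
MatrixRank[{{1, -1}, {-1, 1}}.{{1, 1}, {1, 1}}]
0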
End of Example 2


Theorem 3: Let A be an m×n matrix. If the p×m matrix S has full column rank and the n×q matrix T has full row rank, then

  1. rank(AT) = rank(A);
  2. rank(SA) = rank(A);
  3. rank(SAT) = rank(A).
All these statements follow from one observation: multiplication by a matrix of full column rank is an injective linear map, and an injective linear map carries linearly independent sets into linearly independent sets, so it preserves the dimension of any subspace it acts on. Since the column space of SA is the image under S of the column space of A, we get rank(SA) = rank(A). Likewise, the map y ↦ yT on row vectors is injective because the rows of T are linearly independent, which gives rank(AT) = rank(A); combining the two yields rank(SAT) = rank(A).
Example 3: Consider the matrices
\[ {\bf A} = \begin{bmatrix} 2&-3&8&9 \\ 6&7&-8&11 \\ -1&5&-11&-8 \end{bmatrix} , \qquad {\bf S} = \begin{bmatrix} 2&3&5 \\ -4&2&1 \\ -2&5&8 \\ 8&4&7 \\ 14&5&9 \end{bmatrix} , \qquad {\bf T} = \begin{bmatrix} 2&6&1&-1 \\ 3&-1&3&-1 \\ 1&7&5&3 \\ 2&26&3&1 \end{bmatrix} . \]
A = {{2,-3,8,9},{6,7,-8,11},{-1,5,-11,-8}}
MatrixRank[A]
S = {{2, 3, 5}, {-4, 2, 1}, {-2, 5, 8}, {8, 4, 7}, {14, 5, 9}}
MatrixRank[S]
T = {{2, 6, 1, -1}, {3, -1, 3, -1}, {1, 7, 5, 3}, {2, 26, 3, 1}}
MatrixRank[T]
As Mathematica confirms, the 3×4 matrix A has rank 2, the 5×3 matrix S has rank 3, and the 4×4 matrix T has rank 4. We slightly modify matrices S and T:
\[ {\bf S}_1 = \begin{bmatrix} 2&3&5 \\ -4&2&3 \\ -2&5&8 \\ 8&4&7 \\ 14&5&9 \end{bmatrix} , \qquad {\bf T}_1 = \begin{bmatrix} 2&6&1&-1 \\ 3&-17&3&-1 \\ 1&7&5&3 \\ 2&26&3&1 \end{bmatrix} . \]
S1 = {{2, 3, 5}, {-4, 2, 3}, {-2, 5, 8}, {8, 4, 7}, {14, 5, 9}} MatrixRank[S1]
2
T1 = {{2, 6, 1, -1}, {3, -17, 3, -1}, {1, 7, 5, 3}, {2, 26, 3, 1}} MatrixRank[T1]
3
so the new matrices do not have full column rank and full row rank, respectively. We calculate the products and check the corresponding ranks:
\[ {\bf S} \,{\bf A} = \begin{bmatrix} 17&40&-63&11 \\ 3&31&-59&-22 \\ 18&81&-144&-27 \\ 33&39&-45&60 \\ 49&38&-27&109 \end{bmatrix} , \qquad {\bf A} \,{\bf T} = \begin{bmatrix} 21&305&60&34 \\ 47&259&20&-26 \\ -14&-296 &-65&-45 \end{bmatrix} . \]
S.A
{{17, 40, -63, 11}, {3, 31, -59, -22}, {18, 81, -144, -27}, {33, 39, -45, 60}, {49, 38, -27, 109}}
MatrixRank[%]
2
A.T
{{21, 305, 60, 34}, {47, 259, 20, -26}, {-14, -296, -65, -45}}
MatrixRank[%]
2
On the other hand, the products of matrix A with S1 and T1 give
MatrixRank[S1.A]
2
MatrixRank[A.T1]
2
In this case the products happen to retain rank 2; without the full-rank hypotheses, however, Theorem 3 no longer guarantees equality, and Theorem 2 provides only an upper bound on the rank of a product.
End of Example 3
We present some theoretical results that facilitate proving important observations about the ranks of matrices.

Theorem 4: Let A be an m×n matrix of rank r. By means of a finite number of elementary row and column operations A can be transformed into the "diagonal" matrix

\[ {\bf \Lambda} = \begin{bmatrix} {\bf I}_{r\times r} & {\bf 0}_{r\times (n-r)} \\ {\bf 0}_{(m-r)\times r} & {\bf 0}_{(m-r)\times (n-r)} \end{bmatrix} , \]
where I is the r×r identity matrix and the three matrices 0 are rectangular blocks with all zero entries.
We suppose that A ≠ 0 and r = rank A > 0. The proof is by mathematical induction on m, the number of rows of A.

Suppose that m = 1. Since A ≠ 0, by interchanging two columns and multiplying a column by a nonzero number, A can be transformed into a matrix with a 1 in the (1,1) position. By means of at most n−1 further elementary operations (adding suitable multiples of the first column to the remaining ones), this matrix can in turn be transformed into the matrix

\[ \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} . \]
Note that Λ has exactly one linearly independent column; since elementary operations preserve rank, rank(Λ) = rank(A) = 1.

Next assume that the theorem holds for any matrix with at most m−1 rows (for some m>1). We must prove that the theorem holds for any matrix with m rows. Since A ≠ 0, some entry of A is nonzero; interchanging rows and columns brings it to the (1,1) position, and scaling makes it 1. Adding suitable multiples of the first row to the other rows, and of the first column to the other columns, clears the remaining entries of the first row and column. The lower-right (m−1)×(n−1) block of the resulting matrix has rank r−1, so by the induction hypothesis it can be brought to the required diagonal form by elementary operations that do not disturb the first row and column. The result is the matrix Λ.


Corollary 1: Let A be an m×n matrix of rank r. Then there exist invertible matrices X and Y of sizes m×m and n×n, respectively, such that Λ = XAY, where the "diagonal" matrix Λ is described in the previous theorem.

Example 4: Let us consider the matrix
\[ {\bf A} = \begin{bmatrix} 1&3&4&-1 \\ 2&-2&0&6 \\ 1&5&6&-3 \end{bmatrix} . \]
Mathematica finds its rank without a problem.
A = {{1, 3, 4, -1}, {2, -2, 0, 6}, {1, 5, 6, -3}}
MatrixRank[A]
2
First, we multiply A from left by the elementary matrix
\[ {\bf E}_1 {\bf A} = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ -1&0&1 \end{bmatrix} \begin{bmatrix} 1&3&4&-1 \\ 2&-2&0&6 \\ 1&5&6&-3 \end{bmatrix} = \begin{bmatrix} 1&3&4&-1 \\ 0&-8&-8&8 \\ 0&2&2&-2 \end{bmatrix} , \]
E1 = {{1, 0, 0}, {-2, 1, 0}, {-1, 0, 1}}
(E1.A ) // MatrixForm
and then from right
\[ {\bf E}_1 {\bf A}\, {\bf G}_1 = \begin{bmatrix} 1&3&4&-1 \\ 0&-8&-8&8 \\ 0&2&2&-2 \end{bmatrix} \begin{bmatrix} 1&-3&-4&1 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0&0 \\ 0&-8&-8&8 \\ 0&2&2&-2 \end{bmatrix} . \]
G1 = {{1, -3, -4, 1}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}
(E1.A .G1) // MatrixForm
Now we multiply from left by two elementary matrices:
\[ {\bf E}_2 = \begin{bmatrix} 1&0&0 \\ 0&-1/8&0 \\ 0&0&1 \end{bmatrix} \qquad \mbox{and then} \qquad {\bf E}_3 = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&-2&1 \end{bmatrix} \]
E2 = {{1, 0, 0}, {0, -1/8, 0}, {0, 0, 1}}
E3 = {{1, 0, 0}, {0, 1, 0}, {0, -2, 1}}
\[ {\bf E}_3 {\bf E}_2 {\bf E}_1 {\bf A}\, {\bf G}_1 = \begin{bmatrix} 1&0&0&0 \\ 0&1&1&-1 \\ 0&0&0&0 \end{bmatrix} . \]
Finally, we multiply from right by
\[ {\bf G}_2 = \begin{bmatrix} 1&0&0&0 \\ 0&1&-1&1 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} \]
to obtain
\[ {\bf E}_3 {\bf E}_2 {\bf E}_1 {\bf A}\, {\bf G}_1 {\bf G}_2 = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&0&0 \end{bmatrix} . \]
G2 = {{1, 0, 0, 0}, {0, 1, -1, 1}, {0, 0, 1, 0}, {0, 0, 0, 1}}
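One can ask Mathematica to confirm the whole reduction at once, using the matrices E1, E2, E3, G1, and G2 defined above:
(E3.E2.E1.A.G1.G2) // MatrixForm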
With X = E3E2E1 and Y = G1G2, we have Λ = XAY, as in Corollary 1 below; in particular, the rank of matrix A is 2.
End of Example 4

Corollary 2: Let A be an m×n matrix. Then rank(A) = rank(AT).

We have \begin{align*} \mbox{rank}\left({\bf A}^{\mathrm T} \right) &= \dim \left( \mbox{col} \left( {\bf A}^{\mathrm T} \right)\right) \\ &= \dim \left( \mbox{row} \left( {\bf A} \right)\right) \\ &= \mbox{rank}\left({\bf A} \right) , \end{align*} because the columns of AT are precisely the rows of A and, by Theorem 1, the dimension of the row space of A equals its rank.
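As a quick check of Corollary 2 in Mathematica, using the 3×4 matrix A from Example 3:
A = {{2, -3, 8, 9}, {6, 7, -8, 11}, {-1, 5, -11, -8}};
MatrixRank[A] == MatrixRank[Transpose[A]]
True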

Corollary 3: Every invertible matrix is a product of elementary matrices.

If A is an invertible n×n matrix, then rank(A) = n. Hence, by Corollary 1, there exist invertible matrices X and Y such that Λ = XAY, where Λ is the identity matrix of size n; that is, Λ = I.

As in the previous proof, note that X = Ep Ep-1 ... E1 and Y = G1 G2 ... Gq, where the Ei's and Gj's are elementary matrices. Hence,

\[ {\bf A} = {\bf E}_1^{-1} {\bf E}_2^{-1} \cdots {\bf E}_p^{-1} {\bf G}_q^{-1} {\bf G}_{q-1}^{-1} \cdots {\bf G}_1^{-1} . \]
But the inverses of elementary matrices are elementary matrices, and so A is the product of elementary matrices.
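Here is a minimal Mathematica sketch of Corollary 3 (the 2×2 matrix is chosen arbitrarily): the elementary matrices E1, E2, E3 row reduce A to the identity, and inverting them exhibits A as a product of elementary matrices:
A = {{2, 1}, {4, 3}};
E1 = {{1, 0}, {-2, 1}};  (* R2 -> R2 - 2 R1 *)
E2 = {{1, -1}, {0, 1}};  (* R1 -> R1 - R2 *)
E3 = {{1/2, 0}, {0, 1}}; (* R1 -> R1 / 2 *)
E3.E2.E1.A == IdentityMatrix[2]
True
Inverse[E1].Inverse[E2].Inverse[E3] == A
True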


Example 5: The set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n \right\} \) forms a basis for the space of all polynomials of degree at most n, which therefore has dimension n+1.
End of Example 5
Recall that the dimension of the kernel of a matrix is called its nullity.

Theorem 5 (The Rank Theorem): If A is an m × n matrix, then \[ \mbox{rank}({\bf A}) + \mbox{nullity}({\bf A}) = n . \]

Let R be the reduced row echelon form of A, and suppose that rank(A) = r. Then R has r leading 1's, so there are r leading variables and n − r free variables in the solution of A x = 0. Since dim(ker(A)) = n − r, we have \[ \mbox{rank}({\bf A}) + \mbox{nullity}({\bf A}) = r + (n-r) = n . \]
Often we need to know only the dimension of the kernel of a matrix, not the actual solutions of A x = 0. The Rank Theorem is extremely useful in such situations, as the following example illustrates.
Example 6: Let us consider two matrices \[ {\bf A} = \begin{bmatrix} 1 & 2 & -3 \\ 3 & 2 & -1 \\ 4 & 5 & \phantom{1}6 \\ 6 & 5 & \phantom{-}4 \end{bmatrix} , \qquad {\bf B} = \begin{bmatrix} -1 & -2 & 1 & 2 \\ 1 & -3 & 4 & 4 \\ 8 & 1 & 7 & 2 \end{bmatrix} . \] Mathematica helps to determine the rank of matrix A:
A = {{1, 2, -3}, {3, 2, -1}, {4, 5, 6}, {6, 5, 4}};
MatrixRank[A]
3
Thus, by the Rank Theorem, nullity(A) = 3 − rank(A) = 3 − 3 = 0.

For matrix B, Mathematica tells us that its rank is 2.
B = {{-1, -2, 1, 2}, {1, -3, 4, 4}, {8, 1, 7, 2}}
MatrixRank[B]
2
Hence, by the Rank Theorem, nullity(B) = 4 − rank(B) = 4 − 2 = 2, so the kernel of B is two-dimensional. We can verify this answer with the row reduction procedure:
RowReduce[B]
{{1, 0, 1, 2/5}, {0, 1, -1, -(6/5)}, {0, 0, 0, 0}}
\[ {\bf B} = \begin{bmatrix} -1 & -2 & 1 & 2 \\ 1 & -3 & 4 & 4 \\ 8 & 1 & 7 & 2 \end{bmatrix} \, \sim \, \begin{bmatrix} 1 & 0 & 1 & \frac{2}{5} \\ 0 & 1 & -1 & - \frac{6}{5} \\ 0&0&0&0 \end{bmatrix} . \]
End of Example 6

Theorem 6: Let A be an m × n matrix over a field 𝔽, and let TA : 𝔽n ⇾ 𝔽m be the corresponding linear transformation defined by TA(x) = Ax.

  1. TA : 𝔽n ⇾ 𝔽m is injective if and only if rank(A) = n.
  2. TA : 𝔽n ⇾ 𝔽m is surjective if and only if rank(A) = m.
If A ∈ 𝔽m×n, then the image of TA is the column space of A, and the rank-nullity theorem gives \[ \dim\left( \ker \left( T_A \right) \right) + \mbox{rank} \left( {\bf A} \right) = \dim \left( \mathbb{F}^n \right) = n . \] Hence TA is injective exactly when ker(TA) = {0}, that is, when rank(A) = n; and TA is surjective exactly when its image, the column space of A, is all of 𝔽m, that is, when rank(A) = m.
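A brief Mathematica illustration (the 3×2 matrix below is arbitrary): its rank equals the number of columns, so the corresponding map is injective, as the trivial null space confirms; since the rank is smaller than the number of rows, the map is not surjective:
A = {{1, 0}, {0, 1}, {1, 1}};  (* T_A maps F^2 into F^3 *)
MatrixRank[A]                  (* 2 = n, so T_A is injective but not surjective *)
NullSpace[A]                   (* {} -- the kernel is trivial *)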