Kernels or Null Spaces

Throughout this section, we consider an m-by-n matrix as a transformation from the n-dimensional Euclidean vector space \( \mathbb{R}^n \) into another space \( \mathbb{R}^m . \) Any linear transformation \( T:\,U \to V \) between two finite dimensional vector spaces can be represented by a matrix once appropriate ordered bases in U and V are chosen. Because of this dual role of matrices, the following definition introduces two terms for the same object: kernel is usually used in the theory of transformations and functional analysis, and nullspace in matrix theory.

Let A be an \( m \times n \) matrix. The set of all (column) vectors x of length n that satisfy the linear equation \( {\bf A}\,{\bf x} = {\bf 0} , \) where 0 is the m-dimensional column vector of zeroes, forms a subset of \( \mathbb{R}^n . \) This subset is nonempty because it clearly contains the zero vector: x = 0 always satisfies \( {\bf A}\,{\bf x} = {\bf 0} . \) This subset actually forms a subspace of \( \mathbb{R}^n , \) called the kernel (or nullspace) of the matrix A and denoted ker(A).
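For instance, taking a small matrix of our own for illustration, membership of a vector in the kernel is verified in Mathematica by a single matrix-vector multiplication:

A = {{1, 1, 1}, {1, 2, 3}};
A.{1, -2, 1}
Out[1]= {0, 0}

Since the product is the zero vector, the vector [1, -2, 1] belongs to ker(A).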


Let's suppose that the matrix A represents a physical system. As an example, let's assume our system is a rocket, and A is a matrix representing the directions we can go based on our thrusters. Let's suppose that we have three thrusters equally spaced around our rocket. If they're all perfectly functional, then we can move in any direction. But what happens when a thruster breaks? Now we've only got two thrusters. The null space is the set of thruster instructions that completely waste fuel: the set of instructions where our thrusters will fire, but our direction will not change at all.

Another example: perhaps A represents a rate of return on investments. The range is the set of all rates of return that are achievable. The null space is the set of all investments that can be made without changing the rate of return at all.

Another example: room illumination. The range of A represents the part of the room that can be illuminated. The null space of A represents the power settings we can apply to the lamps that do not change the illumination in the room at all.

Let U and V be vector spaces over the same field and T : U → V be a linear transformation. Then the set of all vectors that are mapped into the zero vector is called the kernel of the transformation T.

Theorem 1: The kernel of a linear transformation T : U → V is a subspace of U.

Let u and v be arbitrary elements from the kernel of T; then T(u) = 0 and T(v) = 0, where 0 is the zero vector of V. Since T is a linear transformation, we get
\[ T \left( {\bf u} + {\bf v} \right) = T \left( {\bf u} \right) + T \left( {\bf v} \right) = {\bf 0} + {\bf 0} = {\bf 0} , \]
so their sum is also in the kernel. Now let k be a scalar, then
\[ T \left( k\,{\bf u} \right) = k\, T \left( {\bf u} \right) = k\, {\bf 0} = {\bf 0} . \]
Hence all conditions for being a subspace are fulfilled, so the kernel is a subspace of U.

Observation: Elementary row operations do not change the null space of a matrix.
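We can illustrate this observation with Mathematica on a small matrix of our own choosing, comparing the null space of the matrix with that of its reduced row echelon form:

NullSpace[{{1, 2}, {2, 4}}] == NullSpace[RowReduce[{{1, 2}, {2, 4}}]]
Out[1]= True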


The dimension of the kernel (null space) of a matrix A is called the nullity of A and is denoted by nullity(A) = n - r, where r is the rank of the matrix A.

The nullity of a matrix was defined in 1884 by James Joseph Sylvester (1814--1897), who was interested in invariants: properties of matrices that do not change under certain types of transformations. Born into a Jewish family, Joseph became the second president of the London Mathematical Society (England). In 1878, while teaching at Johns Hopkins University in Baltimore (USA), he founded the American Journal of Mathematics, the first mathematical journal in the United States.

Theorem 2: The nullity of a matrix A is the number of free variables in its reduced row echelon (Gauss--Jordan) form.
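For instance, for the matrix of Example 4 below, the nullity can be computed either by counting the basis vectors of the null space or from the formula n - r:

A = {{1, 2, 3}, {4, 5, 6}};
{Length[NullSpace[A]], Last[Dimensions[A]] - MatrixRank[A]}
Out[1]= {1, 1}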

Example 4: The set of solutions of the homogeneous system
\[ {\bf A} \, {\bf x} = {\bf 0} \qquad \mbox{or} \qquad \begin{bmatrix} 1&2&3 \\ 4&5&6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \]
forms a subspace of \( \mathbb{R}^3 . \) To determine this subspace, we first use row reduction (the elimination part of the Gaussian procedure):
\[ \begin{bmatrix} 1&2&3 \\ 4&5&6 \end{bmatrix} \,\sim \, \begin{bmatrix} 1&2&3 \\ 0&-3&-6 \end{bmatrix} . \]
Therefore, the system is equivalent to
\[ \begin{bmatrix} 1&2&3 \\ 0&-3&-6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad \Longleftrightarrow \qquad \begin{split} x_1 + 2\, x_2 + 3\,x_3 &=0 , \\ -3\,x_2 -6\,x_3 &=0 . \end{split} \]
If we let x3 be the free variable, the second equation directly implies
\[ x_2 = -2\,x_3 . \]
Substituting this result into the other equation determines x1:
\[ x_1 = -2\,x_2 -3\,x_3 = 4\,x_3 -3\, x_3 = x_3 . \]
So the set of solutions of the given homogeneous system can be written as
\[ \begin{bmatrix} x_3 \\ -2\,x_3 \\ x_3 \end{bmatrix} = x_3 \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} , \qquad x_3 \in \mathbb{R} , \]
which is a subspace of \( \mathbb{R}^3 , \) spanned by the vector \( [ 1, -2, 1 ]^{\mathrm T} . \) We check with Mathematica

NullSpace[{{1,2,3},{4,5,6}}]
Out[1]= {{1, -2, 1}}

Theorem 3: Let T : U → V be a linear transformation from an n-dimensional vector space U into an m-dimensional vector space V. If { v1, v2, ... , vk } is a basis for ker(T), then there exist vectors vk+1, vk+2, ... , vn such that { v1, v2, ... , vn } is a basis for U and { T(vk+1), T(vk+2), ... , T(vn) } is a basis for the range of T.

First, we recall that a basis for a subspace can be extended to a basis for the entire space. Here the subspace of U is ker(T), so there exist vectors vk+1, vk+2, ... , vn such that { v1, v2, ... , vn } is a basis for U. Next, if v is in U, then it can be expressed in terms of the basis { v1, v2, ... , vn }; we have
\[ {\bf v} = \sum_{j=1}^n c_j {\bf v}_j . \]
It follows that
\[ T({\bf v}) = T \left( \sum_{j=1}^n c_j {\bf v}_j \right) = \sum_{j=1}^n c_j T({\bf v}_j ) = \sum_{j=k+1}^n c_j T({\bf v}_j ) \]
since T is a linear transformation and T(v1) = T(v2) = ... = T(vk) = 0V. Thus the image of any vector v from U is in span{ T(vk+1), T(vk+2) , ... , T(vn) }, and hence the vectors T(vk+1), T(vk+2) , ... , T(vn) span the image of T. It remains to show that { T(vk+1), T(vk+2) , ... , T(vn) } is linearly independent. We proceed as follows.
Suppose that
\[ \sum_{j=k+1}^n b_j T \left( {\bf v}_j \right) = {\bf 0}_{V} . \]
Since T is a linear transformation, we have
\[ {\bf 0}_{V} = \sum_{j=k+1}^n b_j T \left( {\bf v}_j \right) = T \left( \sum_{j=k+1}^n b_j {\bf v}_j \right) , \]
which says that \( \sum_{j=k+1}^n b_j {\bf v}_j \) is in ker(T); hence this linear combination of vectors must be the zero vector. Since { vk+1, vk+2 , ... , vn } is a linearly independent set, the only way
\[ \sum_{j=k+1}^n b_j {\bf v}_j = {\bf 0}_{U} \]
is when all the coefficients are zero. That is, bk+1 = bk+2 = ... = bn = 0. Hence { T(vk+1), T(vk+2) , ... , T(vn) } is a linearly independent set. Since these vectors both span the range of T and are linearly independent, they form a basis.
Example 5: Define \( T:\,\mathbb{R}^3 \to \mathbb{R}^2 \) by
\[ T(a_1 , a_2 , a_3 ) = (2\,a_1 -a_2 , 3\, a_3) . \]
To this linear transformation corresponds the 2-by-3 matrix \( {\bf A} = \begin{bmatrix} 2&-1&0 \\ 0&0&3 \end{bmatrix} . \) Its kernel consists of vectors of the form [a, 2a, 0].
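A quick check with the standard Mathematica command confirms that the kernel is spanned by the vector [1, 2, 0]T:

NullSpace[{{2, -1, 0}, {0, 0, 3}}]
Out[1]= {{1, 2, 0}}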
End of Example 5

 

 

Example: Consider two square matrices
\[ {\bf A} = \begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix} \qquad\mbox{and} \qquad {\bf B} = \begin{bmatrix} 1&2 \\ -2&-4 \end{bmatrix} . \]
By definition, the nullspace of A consists of all vectors x such that \( {\bf A} \, {\bf x} = {\bf 0} . \) We perform the following elementary row operations on A and B:
\[ {\bf A} \,\sim \, {\bf R}_A = \begin{bmatrix} 1&2 \\ 0&-2 \end{bmatrix} \qquad\mbox{and} \qquad {\bf B} \,\sim \, {\bf R}_B = \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix} \]
to conclude that \( {\bf A} \, {\bf x} = {\bf 0} \) and \( {\bf B} \, {\bf x} = {\bf 0} \) are equivalent to the simpler systems
\[ \begin{bmatrix} 1&2 \\ 0&-2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad\mbox{and} \qquad \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} . \]
For matrix A, the second row implies that \( x_2 =0 , \) and back substituting this into the first row implies that \( x_1 =0 . \) Since the only solution of A x = 0 is x = 0, the kernel of A consists of the zero vector alone. This subspace, { 0 }, is called the trivial subspace (of \( \mathbb{R}^2 \) ).

For matrix B, we have only one equation

\[ x_1 + 2\,x_2 =0 \qquad \Longrightarrow \qquad x_1 = -2\, x_2 . \]
Substituting back yields a one-dimensional null space spanned by the vector
\[ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_2 \begin{bmatrix} -2 \\ 1 \end{bmatrix} , \qquad x_2 \in \mathbb{R} . \]
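Mathematica confirms both conclusions: it returns an empty list for the trivial kernel of A and the single spanning vector for B:

NullSpace[{{1, 2}, {3, 4}}]
Out[1]= {}

NullSpace[{{1, 2}, {-2, -4}}]
Out[2]= {{-2, 1}}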

 

Example: Let us consider the \( 4 \times 3 \) matrix
\[ {\bf A} = \begin{bmatrix} 1&2&5 \\ 3&-1&2 \\ -1&4&1 \\ 2&3&-2 \end{bmatrix} \]
of rank 3:

A = {{1, 2, 5}, {3, -1, 2}, {-1, 4, 1}, {2, 3, -2}}
MatrixRank[A]
Out[2]= 3

A // TraditionalForm
Out[3]= \( \displaystyle \quad \begin{pmatrix} 1& 2& 5 \\ 3& -1& 2 \\ -1& 4& 1 \\ 2& 3& -2 \end{pmatrix} \)
Its Gauss--Jordan form is
R = RowReduce[A]
Out[3]= {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {0, 0, 0}}
We rewrite the system of equations for the kernel in vector form:
\[ \begin{pmatrix} 1& 0& 0 \\ 0& 1& 0 \\ 0&0&1 \\ 0&0&0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} , \]
from which it follows that the kernel consists of the zero vector alone (the trivial subspace of \( \mathbb{R}^3 \) ).
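The standard Mathematica command confirms this by returning an empty list:

NullSpace[A]
Out[4]= {}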

 

Example: Consider the matrix of rank 3:
\[ {\bf A} = \begin{bmatrix} 1&2&3&6&0 \\ 2&1&2&7&2 \\ 4&-1&5&19&11 \\ 5&-2&-3&6&6 \end{bmatrix} . \]

A = {{1, 2, 3, 6, 0}, {2, 1, 2, 7, 2}, {4, -1, 5, 19, 11}, {5, -2, -3, 6, 6}}
MatrixRank[A]
Out[2]= 3
We find its Gauss--Jordan form with Mathematica:
R = RowReduce[A]
Out[3]= {{1, 0, 0, 2, 1}, {0, 1, 0, -1, -2}, {0, 0, 1, 2, 1}, {0, 0, 0, 0, 0}}
\[ {\bf A} \, \sim \, {\bf R} = \begin{bmatrix} 1&0&0&2&1 \\ 0&1&0&-1&-2 \\ 0&0&1&2&1 \\ 0&0&0&0&0 \end{bmatrix} . \]
So we see that the first three variables are leading variables and the last two are free variables. To find the kernel, we need to solve the following system of algebraic equations:
\[ \begin{split} x_1 + 2\,x_4 + x_5 &=0 , \\ x_2 -x_4 - 2\, x_5 &=0, \\ x_3 + 2\,x_4 + x_5 &= 0 . \end{split} \]
We rewrite this system in vector form:
\[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 2&1 \\ -1&-2 \\ 2&1 \end{bmatrix} \begin{bmatrix} x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\mbox{or} \qquad {\bf x} =- {\bf F} \,{\bf u} , \]
where \( {\bf x} = [ x_1 , x_2 , x_3 ]^{\mathrm T} , \) \( {\bf u} = [ x_4 , x_5 ]^{\mathrm T} , \) and F is the 3-by-2 matrix defined above.

F = Take[R, {1, 3}, {4, 5}]
Out[4]= {{2, 1}, {-1, -2}, {2, 1}}
Setting \( {\bf u} = [ 1 , 0 ]^{\mathrm T} \) first and then \( {\bf u} = [ 0 , 1 ]^{\mathrm T} , \) we obtain two linearly independent vectors
\[ {\bf v}_1 = \begin{bmatrix} -2 \\ 1 \\ -2 \\ 1 \\ 0 \end{bmatrix} \qquad \mbox{and} \qquad {\bf v}_2 = \begin{bmatrix} -1 \\ 2 \\ -1 \\ 0 \\ 1 \end{bmatrix} \]
that form the basis for the kernel. We check with Mathematica:

A.{{-2}, {1}, {-2}, {1}, {0}}
A.{{-1}, {2}, {-1}, {0}, {1}}
Out[6]= {{0}, {0}, {0}, {0}}
Since these two vectors v1 and v2 are linearly independent (having zeroes in different components), they form a basis for the null space of matrix A. ■

 

Theorem: Suppose that an m-by-n matrix A of rank r, when reduced to row echelon form (without row exchanges), has its pivots in the first r columns, so that it is reduced to the block form

\[ {\bf A} \,\sim\, {\bf R} = \begin{bmatrix} {\bf I}_r & {\bf F}_{r \times (n-r)} \\ {\bf 0}_{(m-r)\times r} &{\bf 0}_{(m-r)\times (n-r)} \end{bmatrix} . \]
Here Ir is the identity square matrix of size r, \( {\bf F}_{r \times (n-r)} \) is an \( r \times (n-r) \) matrix, and the 0's are zero matrices. Then the kernel of the matrix A is spanned by the column vectors of the \( n \times (n-r) \) matrix
\[ \mbox{ker}({\bf A}) = \mbox{span} \begin{bmatrix} -{\bf F}_{r \times (n-r)} \\ {\bf I}_{(n-r)\times (n-r)} \end{bmatrix} . \]
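To make this construction concrete, here is a short Mathematica function (our own sketch, valid under the theorem's assumption that the pivots occupy the first r columns; the name kernelBasis is ours):

kernelBasis[a_?MatrixQ] := Module[{r = MatrixRank[a], n = Last[Dimensions[a]], rref, f},
  rref = RowReduce[a];
  f = rref[[1 ;; r, r + 1 ;; n]];  (* the block F *)
  ArrayFlatten[{{-f}, {IdentityMatrix[n - r]}}]]  (* stack -F on top of the identity *)

kernelBasis[{{1, 2, 3}, {4, 5, 6}}]
Out[2]= {{1}, {-2}, {1}}

The columns of the returned matrix span the kernel; for the matrix of Example 4 we recover the vector [1, -2, 1]T.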

Theorem: Let U and V be vector spaces and \( T:\,U \to V \) be a linear transformation. Then T is one-to-one if and only if its kernel is trivial: ker(T) = {0U} or, equivalently, if and only if the dimension of the kernel is zero.    ■

First, suppose that T is one-to-one, and let v ∈ ker(T). We must show that v = 0U. Now, T(v) = 0V. However, T is a linear transformation, and it maps the zero vector into the zero vector, so T(0U) = 0V. Because T(v) = T(0U) = 0V and T is one-to-one, we must have v = 0U.

Conversely, suppose that ker(T) = {0U}. We must show that T is one-to-one. Let v1, v2 ∈ U, with T(v1) = T(v2). We must show that v1 = v2. Since T(v1) - T(v2) = T(v1 - v2) = 0V, we conclude that v1 - v2 ∈ ker(T), by definition of the kernel. Because ker(T) = {0U}, we derive that v1 - v2 = 0U and so v1 = v2.

Example: Let us find the kernel of the 4-by-6 matrix
\[ {\bf A} = \begin{bmatrix} 1& 2& 3& -2& -1& 2 \\ 2& -1& 2& 3& 2& -3 \\ 3& 1& 2& -1& 3& -5 \\ 5& 5& 5& -7& 3& -5 \end{bmatrix} \]
The first step in finding the kernel of the given matrix is to determine its pivots by performing elementary row operations. So we multiply the first row by -2 and add to the second row; then we multiply the first row by -3 and add to the third row; finally, we multiply the first row by -5 and add to the last row. It results in the following matrix
\[ {\bf A}_2 = \begin{bmatrix} 1& 2& 3& -2& -1& 2 \\ 0& -5& -4& 7& 4& -7 \\ 0& -5& -7& 5& 6& -11 \\ 0& -5& -10& 3& 8& -15 \end{bmatrix} \]
We check with Mathematica:

A = {{1, 2, 3, -2, -1, 2}, {2, -1, 2, 3, 2, -3}, {3, 1, 2, -1, 
   3, -5}, {5, 5, 5, -7, 3, -5}} 
A2 = A;
A2[[2]] += (-2)*A2[[1]]
A2[[3]] += (-3)*A2[[1]]
A2[[4]] += (-5)*A2[[1]]
A2 // MatrixForm
Out[5]= \( \displaystyle \quad \begin{pmatrix} 1& 2& 3& -2& -1& 2 \\ 0& -5& -4& 7& 4& -7 \\ 0& -5& -7& 5& 6& -11 \\ 0& -5& -10& 3& 8& -15 \end{pmatrix} \)
Next we multiply the second row by -1 and add to the third and fourth rows, which yields
\[ {\bf A}_3 = \begin{bmatrix} 1& 2& 3& -2& -1& 2 \\ 0& -5& -4& 7& 4& -7 \\ 0& 0& -3& -2& 2& -4 \\ 0& 0& -6& -4& 4& -8 \end{bmatrix} \]
Again, Mathematica helps

A3 = A2;
A3[[3]] += (-1)*A3[[2]]
A3[[4]] += (-1)*A3[[2]]
Finally, we add (-2) times the third row to the fourth row:

A4 = A3;
A4[[4]] += (-2)*A4[[3]]
A4 // MatrixForm
Out[8]= \( \displaystyle \quad \begin{pmatrix} 1& 2& 3& -2& -1& 2 \\ 0& -5& -4& 7& 4& -7 \\ 0& 0& -3& -2& 2& -4 \\ 0& 0& 0& 0& 0&0 \end{pmatrix} \)
This tells us that matrix A has three pivots in the first three rows, and its rank is 3. To use the above theorem, we need its Gauss--Jordan form, which we obtain with just one Mathematica command:

R = RowReduce[A]
Out[9]= {{1, 0, 0, -(2/15), 23/15, -(8/3)}, {0, 1, 0, -(29/15), -(4/15), 1/ 3}, {0, 0, 1, 2/3, -(2/3), 4/3}, {0, 0, 0, 0, 0, 0}}
It allows us to represent reduced row echelon form R of the given matrix in the block form:
\[ {\bf A} \, \sim \, {\bf R} = \begin{bmatrix} {\bf I} & {\bf F} \\ {\bf 0} & {\bf 0} \end{bmatrix} , \]
where I is the 3-by-3 identity matrix, each 0 is the zero row vector [0, 0, 0], and matrix F is the following square matrix:
\[ {\bf F} = \frac{1}{15} \begin{bmatrix} -2 & 23 & -40 \\ -29 & -4 & 5 \\ 10 & -10 & 20 \end{bmatrix} . \]
Using Mathematica, we extract matrix F:

F = R[[1 ;; 3, 4 ;; 6]];
F // MatrixForm
Out[10]= \( \displaystyle \quad \begin{pmatrix} -\frac{2}{15}&\frac{23}{15}&-\frac{8}{3} \\ -\frac{29}{15}&-\frac{4}{15}&\frac{1}{3} \\ \frac{2}{3}&-\frac{2}{3}& \frac{4}{3} \end{pmatrix} \)
To avoid fractions, we multiply matrix F by 15 to obtain

F15 = 15*R[[1 ;; 3, 4 ;; 6]] // MatrixForm
Out[11]= \( \displaystyle \quad \begin{pmatrix} -2&23&-40 \\ -29&-4&5 \\ 10&-10&20 \end{pmatrix} \)
Upon appending the identity matrix to -F, we obtain three linearly independent vectors that generate the kernel:
\[ \begin{bmatrix} -{\bf F} \\ {\bf I}_{3 \times 3} \end{bmatrix} . \]
Each column vector of the above 6-by-3 matrix belongs to the null space of matrix A. Since the columns are linearly independent, they form a basis of the kernel.

nul = ArrayFlatten[{{-F}, {IdentityMatrix[3]}}] 
Out[12]= {{2/15, -(23/15), 8/3}, {29/15, 4/15, -(1/3)}, {-(2/3), 2/ 3, -(4/3)}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
To avoid fractions, we multiply each entry by 15 to obtain three vectors that span the null space:
\[ {\bf v}_1 = \begin{bmatrix} 2 \\ 29 \\ -10 \\ 15 \\ 0 \\ 0 \end{bmatrix} , \quad {\bf v}_2 = \begin{bmatrix} -23 \\ 4 \\ 10 \\ 0 \\ 15 \\ 0 \end{bmatrix} , \quad {\bf v}_3 = \begin{bmatrix} 40 \\ -5 \\ -20 \\ 0 \\ 0 \\ 15 \end{bmatrix} = 5 \begin{bmatrix} 8 \\ -1 \\ -4 \\ 0 \\ 0 \\ 3 \end{bmatrix}. \]
We check our answer with the standard Mathematica command

NullSpace[A]
Out[12]= {{8, -1, -4, 0, 0, 3}, {-23, 4, 10, 0, 15, 0}, {2, 29, -10, 15, 0, 0}}
It is possible to determine the null space for the given matrix directly, without using the above theorem. We know that x1, x2, and x3 are leading variables and x4, x5, x6 are free variables. Since rank(A) is 3, the last row of matrix A does not play any role in determining the solutions of the linear equation A x = 0. So we extract two matrices from the first three rows of A:
\[ {\bf B} = \begin{bmatrix} 1&2&3 \\ 2&-1& 2 \\ 3&1&2 \end{bmatrix} , \quad {\bf C} = \begin{bmatrix} -2&-1&2 \\ 3&2&-3 \\ -1&3&-5 \end{bmatrix} . \]

B = A[[1 ;; 3, 1 ;; 3]] // MatrixForm
Out[13]= \( \displaystyle \quad \begin{pmatrix} 1&2&3 \\ 2&-1&2 \\ 3&1&2 \end{pmatrix} \)

CC = A[[1 ;; 3, 4 ;; 6]] // MatrixForm
Out[14]= \( \displaystyle \quad \begin{pmatrix} -2&-1&2 \\ 3&2&-3 \\ -1&3&-5 \end{pmatrix} \)
We used the special wrapper "MatrixForm" just to show the matrices in their regular form. For actual calculations, this wrapper should be dropped because it converts a list of vectors into a single display object. Multiplying the inverse of matrix B by our matrix C, we obtain our old friend, matrix F:
\[ {\bf F} = {\bf B}^{-1} {\bf C} = \frac{1}{15} \begin{bmatrix} -2&23&-40 \\ -29&-4&5 \\ 10&-10&20 \end{bmatrix} \]
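We can verify with Mathematica that both routes give the same block (here we drop MatrixForm from the assignments, as explained above):

B = A[[1 ;; 3, 1 ;; 3]];
CC = A[[1 ;; 3, 4 ;; 6]];
Inverse[B].CC == R[[1 ;; 3, 4 ;; 6]]
Out[15]= True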

 

Example: We consider a slightly different matrix (only the last two entries of the bottom row have changed):
\[ {\bf A} = \begin{bmatrix} 1& 2& 3& -2& -1& 2 \\ 2& -1& 2& 3& 2& -3 \\ 3& 1& 2& -1& 3& -5 \\ 5& 5& 5& -7& 7& -4 \end{bmatrix} \]
The first step in finding the kernel of the given matrix is to determine its pivots by performing elementary row operations. We use the Gaussian elimination to obtain the row reduced form:

A = {{1, 2, 3, -2, -1, 2}, {2, -1, 2, 3, 2, -3}, {3, 1, 2, -1, 
   3, -5}, {5, 5, 5, -7, 7, -4}}
R = RowReduce[A]
Out[2]= \( \displaystyle \quad \begin{pmatrix} 1&0&0&-\frac{2}{15} &0&-\frac{61}{20} \\ 0&1&0&-\frac{29}{15}&0&\frac{2}{5} \\ 0&0&1&\frac{2}{3}&0&\frac{3}{2} \\ 0&0&0&0&1&\frac{1}{4} \end{pmatrix} \)
Therefore, the given matrix has four pivots and its rank is

MatrixRank[A]
Out[2]= 4
The leading 1's of matrix R indicate that variables 1, 2, 3, and 5 are leading variables, while variables 4 and 6 are free variables. To find actual vectors that span the null space, we form two auxiliary matrices: a 4-by-4 matrix B that contains the columns of A corresponding to the leading variables, and a 4-by-2 matrix C that contains the columns corresponding to the free variables. Naturally, we ask Mathematica for help. We build matrix B in two steps: first we extract a 4-by-5 matrix from A by dropping the last column, and then we delete the fourth column:

B1 = A[[1 ;; 4, 1 ;; 5]] 
B = Transpose[Drop[Transpose[B1], {4}]]
Out[4]= \( \displaystyle \quad \begin{pmatrix} 1&2&3&-1 \\ 2&-1&2&2 \\ 3&1&2&3 \\ 5&5&5&7 \end{pmatrix} \)
Note that matrix B1 could also be obtained with the following command:

B1 = Transpose[Delete[Transpose[A], 6]]

C1 = A[[1 ;; 4, 4 ;; 6]]
CC = Transpose[Delete[Transpose[C1], {2}]] 
Out[6]= \( \displaystyle \quad \begin{pmatrix} -2&2 \\ 3&-3 \\ -1&-5 \\ -7&-4 \end{pmatrix} \)
With these matrices, we can rewrite the equations that determine the kernel as
\[ {\bf B}_{4\times 4} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_5 \end{bmatrix} + {\bf C}_{4\times 2} \begin{bmatrix} x_4 \\ x_6 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} . \]
Then we can express the leading variables via free variables as
\[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_5 \end{bmatrix} = - {\bf B}^{-1} {\bf C} \begin{bmatrix} x_4 \\ x_6 \end{bmatrix} = - {\bf F} \begin{bmatrix} x_4 \\ x_6 \end{bmatrix} , \]
where F = B-1 C. This \( 4 \times 2 \) matrix F can be obtained either by matrix multiplication:
\[ {\bf F} = {\bf B}^{-1} {\bf C} = \begin{bmatrix} -\frac{2}{15} & -\frac{61}{20} \\ -\frac{29}{15} & \frac{2}{5} \\ \frac{2}{3} & \frac{3}{2} \\ 0&\frac{1}{4} \end{bmatrix} \]
or by extracting the fourth and sixth columns from matrix R. To avoid fractions, we multiply this matrix by 60:

F60 = Inverse[B].CC*60 
Out[8]= \( \displaystyle \quad \begin{pmatrix} -8&-183 \\ -116&24 \\ 40&90 \\ 0&15 \end{pmatrix} \)
Finally, we append the 2-by-2 identity matrix to -F, forming a 6-by-2 matrix from which we are going to extract two basis vectors:
\[ \begin{bmatrix} -{\bf F} \\ {\bf I} \end{bmatrix} \qquad \mbox{or} \qquad \begin{bmatrix} -60\,{\bf F} \\ 60\,{\bf I} \end{bmatrix} = \begin{bmatrix} 8&183 \\ 116&-24 \\ -40&-90 \\ 0&-15 \\ 60&0 \\ 0&60 \end{bmatrix} . \]
This is not yet the correct matrix: one more operation is needed. We must swap the fourth and fifth rows, because the fourth pivot lies in the fifth column rather than the fourth, so the row of the free variable x4 must precede the row of the leading variable x5:
\[ {\bf nul} = \begin{bmatrix} 8&183 \\ 116&-24 \\ -40&-90 \\ 60&0 \\ 0&-15 \\ 0&60 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf v}_1 = \begin{bmatrix} 8 \\ 116 \\ -40 \\ 60 \\ 0 \\ 0 \end{bmatrix} = 4 \begin{bmatrix} 2 \\ 29 \\ -10 \\ 15 \\ 0 \\ 0 \end{bmatrix} , \quad {\bf v}_2 = \begin{bmatrix} 183 \\ -24 \\ -90 \\ 0 \\ -15 \\ 60 \end{bmatrix} = 3 \begin{bmatrix} 61 \\ -8 \\ -30 \\ 0 \\ -5 \\ 20 \end{bmatrix} . \]
We check with Mathematica that each column vector from the above 6-by-2 matrix is annihilated by A:

A.{{8}, {116}, {-40}, {60}, {0}, {0}}
A.{{183}, {-24}, {-90}, {0}, {-15}, {60}}
Since both answers are zero vectors, we are confident that the basis for the null space has been found properly. Now we compare with the answer provided by the standard Mathematica command

NullSpace[A]
Out[16]= {{61, -8, -30, 0, -5, 20}, {2, 29, -10, 15, 0, 0}}
As we see, corresponding vectors differ only by constant multiples. ■

 

Example: We consider the matrix
\[ {\bf A} = \begin{bmatrix} 1& -1& 3& 1& -1 \\ 2& 4& 0& -1& 7 \\ 3& 1& 5& 1& 3 \\ 4& 6& 2& -1& 11 \end{bmatrix} , \]
which has rank 2. Indeed,

A = {{1, -1, 3, 1, -1}, {2, 4, 0, -1, 7}, {3, 1, 5, 1, 3}, {4, 6, 2, -1, 11}}
MatrixRank[A]
Out[2]= 2
Its reduced row echelon form is
\[ {\bf A} \, \sim \, {\bf R} = \begin{bmatrix} 1& 0& 2& \frac{1}{2}&\frac{1}{2} \\ 0& 1& -1& -\frac{1}{2}& \frac{3}{2} \\ 0& 0& 0& 0& 0 \\ 0& 0& 0& 0&0 \end{bmatrix} , \]
because

R = RowReduce[A]
Out[2]= {{1, 0, 2, 1/2, 1/2}, {0, 1, -1, -(1/2), 3/2}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}}
We extract from R the 2-by-3 matrix, which we denote by F
\[ {\bf F} = \begin{bmatrix} 2& \frac{1}{2}&\frac{1}{2} \\ -1& -\frac{1}{2}& \frac{3}{2} \end{bmatrix} . \]

F = Take[R, {1, 2}, {3, 5}]
Out[3]= {{2, 1/2, 1/2}, {-1, -(1/2), 3/2}}
Upon appending the identity 3-by-3 matrix, we obtain the required matrix, each column of which is a basis vector for the null space:
\[ {\bf null} = \begin{bmatrix} - {\bf F} \\ {\bf I}_{3\times 3} \end{bmatrix} = \begin{bmatrix} -2& - \frac{1}{2}&- \frac{1}{2} \\ 1& \frac{1}{2}& -\frac{3}{2} \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} . \]

nul = ArrayFlatten[{{-F}, {IdentityMatrix[3]}}] 
Out[4]= \( \displaystyle \quad \begin{pmatrix} -2&-\frac{1}{2} & -\frac{1}{2} \\ 1&\frac{1}{2} & - \frac{3}{2} \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{pmatrix} \)
Now we compare columns in the above matrix against the standard Mathematica output:

NullSpace[A]
Out[5]= {{-1, -3, 0, 0, 2}, {-1, 1, 0, 2, 0}, {-2, 1, 1, 0, 0}}
These agree with our columns up to ordering and scalar multiples: the first column of nul is the third output vector, while doubling the second and third columns gives (-1, 1, 0, 2, 0) and (-1, -3, 0, 0, 2), respectively.
Example 9: To find the null space of a matrix, reduce it to echelon form, as described in Part 1. To refresh your memory, the first nonzero element in each row of the echelon form is a pivot. Solve the homogeneous system by back substitution, as also described in the part mentioned above; that is, solve for the pivot variables. The variables without pivots cannot be solved for and become parameters with arbitrary values in the null space, multiplying "basis vectors." The coefficients inside the basis vectors come from the solved variables.

For example, if your unknowns are { x₁, x₂, x₃, x₄, x₅, x₆ } and your echelon matrix is

\[ \begin{bmatrix} 0 & 3 & -2& 5 & 7 & 4 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 6 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} . \]
Since its last equation is trivial, you get from the third one
\[ 6\, x_5 + 3\,x_6 = 0 \qquad \Longrightarrow \qquad x_5 = -\frac{1}{2}\,x_6 . \]
The second equation involves only the pivot variable x₄ and gives
\[ 2\, x_4 = 0 \qquad \Longrightarrow \qquad x_4 = 0. \]
Then from the first equation, we have
\[ 3\,x_2 -2\,x_3 + 5\, x_4 + 7\, x_5 + 4\,x_6 = 0 \qquad \Longrightarrow \qquad x_2 = \frac{2}{3}\, x_3 - \frac{1}{6}\, x_6 . \]
To get the null space (i.e., the full set of vectors { x₁, x₂, x₃, x₄, x₅, x₆ } that produce zero upon substitution), the variables { x₁, x₃, x₆ } without pivots go in the right hand side as arbitrary constants that can be anything:
\[ \mbox{kernel:} \quad \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{pmatrix} = x_1 \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} 0 \\ \frac{2}{3} \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + x_6 \begin{pmatrix} 0 \\ -\frac{1}{6} \\ 0 \\ 0 \\ -\frac{1}{2} \\ 1 \end{pmatrix} . \]
The coefficients for the pivot variables { x₂, x₄, x₅ } in the vectors on the right hand side come from the solved equations, and those for { x₁, x₃, x₆ } are the arbitrary values.

To get a basis for the kernel space, you can use the constant vectors in the right hand side (scaled, if desired, to clear fractions):

\[ \mbox{a basis for the null space:} \quad \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} , \quad \begin{pmatrix} 0 \\ 2 \\ 3 \\ 0 \\ 0 \\ 0 \end{pmatrix} , \quad \begin{pmatrix} 0 \\ -1 \\ 0 \\ 0 \\ -3 \\ 6 \end{pmatrix} . \]
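As a final check, applying Mathematica's NullSpace to the echelon matrix returns the same three vectors, up to ordering and scalar multiples that clear the fractions:

NullSpace[{{0, 3, -2, 5, 7, 4}, {0, 0, 0, 2, 0, 0}, {0, 0, 0, 0, 6, 3}, {0, 0, 0, 0, 0, 0}}]
Out[1]= {{0, -1, 0, 0, -3, 6}, {0, 2, 3, 0, 0, 0}, {1, 0, 0, 0, 0, 0}}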

 


  1. Determine the kernel and range of each of the following linear operators on ℝ³.
    1. T(x) = T(x₁, x₂, x₃) = (x₃, x₂, x₁);
    2. T(x) = T(x₁, x₂, x₃) = (0, x₂, x₃);
    3. T(x) = T(x₁, x₂, x₃) = (x₁, x₁, x₁).
  2. Find the kernel and range of each of the following linear operators on the space of real-valued polynomials of degree at most 3, ℝ≤3[x]:
    1. T(p(x)) = xp′(x);
    2. T(p(x)) = p(x) − p′(x);
    3. T(p(x)) = xp(1) − p(0);
    4. T(p(x)) = p(0) + xp′(1) + x²p″(2).