Every m-by-n matrix A ∈ 𝔽m×n can be considered as a linear operator 𝔽n×1 ⇾ 𝔽m×1, acting from the left on column vectors: 𝔽n×1 ∋ x ↦ A x ∈ 𝔽m×1. Since the vector space of column vectors is isomorphic to the Cartesian product, 𝔽n×1 ≌ 𝔽n, this matrix generates a linear transformation A : 𝔽n ⇾ 𝔽m. The converse is also true: to every linear transformation T : 𝔽n ⇾ 𝔽m corresponds a matrix T = 〚T〛∈ 𝔽m×n. Therefore there is a bijection between the vector space of linear transformations ℒ(𝔽n, 𝔽m) and the vector space of matrices 𝔽m×n.

Correspondingly, every m × n matrix generates a dual transformation φ : (𝔽m)' ⇾ (𝔽n)'. In particular, every numerical vector y ∈ 𝔽n generates a covector (also known as a bra-vector in Dirac's notation) y* = ⟨ y | that acts on any vector x ∈ 𝔽n as a linear functional:

\[ {\bf y}^{\ast} \left( {\bf x} \right) = \langle {\bf y}\,|\,{\bf x} \rangle = y_1 x_1 + y_2 x_2 + \cdots + y_n x_n \in \mathbb{F} . \]
When the scalar field is real, the right-hand side of this formula is just the dot product of two vectors:
\[ {\bf y} \bullet {\bf x} = y_1 x_1 + y_2 x_2 + \cdots + y_n x_n \in \mathbb{F} \qquad \left( \mathbb{F} \ne \mathbb{C} \right) . \tag{R} \]
In the case of complex vector spaces, the linear combination rule (R) still defines a linear functional. However, in order to introduce a metric, the inner product is used instead of the dot product (R); it involves the additional operation of complex conjugation:
\[ \langle {\bf y} \,,\, {\bf x} \rangle = \overline{y}_1 x_1 + \overline{y}_2 x_2 + \cdots + \overline{y}_n x_n \in \mathbb{C} , \tag{C} \]
where the overline denotes the complex conjugate: \( \displaystyle \overline{a + {\bf j}\,b} = \left( a + {\bf j}\,b \right)^{\ast} = a - {\bf j}\,b . \) Note that the complex conjugate operation is also denoted by an asterisk in physics and engineering. Here j is the imaginary unit in the complex plane ℂ, so j² = −1. Therefore, every complex vector y defines a covector according to formula (R), but it can also be considered as a bra-vector via the inner product (C). Of course, there is a one-to-one correspondence between the linear functional (R) and the inner product formula (C). Although the linear combination (R) defines a linear functional on any vector space independently of the field of scalars, the inner product rule (C) is not a linear functional in the case of complex vector spaces (it is conjugate-linear in its first argument). The dot product expression (R) defines orthogonality only for real vector spaces, whereas the inner product formula (C) is suitable for its definition in the complex case.

When the field 𝔽 is a subfield of the real numbers (for instance, ℝ or ℚ), a vector space V over 𝔽 can be equipped with the dot product rule (R). Then we say that two numeric vectors x and y from V are orthogonal if

\[ {\bf x} \bullet {\bf y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = 0 . \]
We abbreviate this condition as x ⊥ y. When the vector space V is complex, two vectors x and y from V are orthogonal if
\[ \langle {\bf x} \,,\, {\bf y} \rangle = \overline{\langle {\bf y} \,,\, {\bf x} \rangle} = \overline{x}_1 y_1 + \overline{x}_2 y_2 + \cdots + \overline{x}_n y_n = 0 . \]
Therefore, in the case of complex spaces the dot product equation x • y = 0 is an annihilator relation, not orthogonality.
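A short Mathematica computation illustrates this distinction; the vector z below is chosen purely for demonstration. The nonzero complex vector z = (1, j) annihilates itself under the dot product (R), yet its inner product (C) with itself is positive:
z = {1, I};
z.z                 (* dot product (R): vanishes although z ≠ 0 *)
0
Conjugate[z].z      (* inner product (C): does not vanish *)
2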

Matrix spaces


Four Subspaces

We discuss the four fundamental subspaces associated with every m×n matrix A over a field 𝔽:
  1. The row space of matrix A, which is a subspace of 𝔽1×n ≌ 𝔽n. The row space is the column space of the transposed matrix AT.
  2. The column space (also called the range or image when matrix A is considered as a transformation) of matrix A, which is a subspace of 𝔽m×1 ≌ 𝔽m. The column space is the row space of the transposed matrix AT.
  3. The nullspace (also called the kernel when matrix A is considered as a transformation) of matrix A, which is a subspace of 𝔽n×1 ≌ 𝔽n. It consists of all n-vectors x that are mapped by A into the zero vector: A x = 0.
  4. The cokernel (also called the left nullspace) of A, which is the kernel of AT. It is a subspace of 𝔽m. The cokernel consists of all solutions y of the vector equation yT A = 0, or equivalently AT y = 0.

    In the case of complex entries of matrix A, the linear combination rule (R) does not define a metric on ℂn (though it is still a linear functional), so the cokernel is defined as the kernel of the matrix adjoint A*. It consists of all y ∈ ℂm such that A*y = 0.
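In Mathematica, generating sets for all four subspaces can be obtained with a few built-in commands. The sketch below is only an illustration on a small sample matrix of our own choosing (rank 1); the expected outputs are listed under each command and may differ in the ordering of basis vectors.
A = {{1, 2, 3}, {2, 4, 6}};   (* sample 2×3 matrix, rank 1 *)
RowReduce[A]              (* nonzero rows span the row space *)
{{1, 2, 3}, {0, 0, 0}}
A[[All, 1]]               (* pivot column spans the column space *)
{1, 2}
NullSpace[A]              (* basis of the kernel *)
{{-3, 0, 1}, {-2, 1, 0}}
NullSpace[Transpose[A]]   (* basis of the cokernel *)
{{-2, 1}}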

Recall that a set of vectors β is said to generate or span a vector space V if every element from V can be represented as a linear combination of vectors from β.

   
Example 1: Consider the matrix
\[ {\bf A} = \begin{bmatrix}3&4&-2&-1 \\ 5&1&-3&-6 \\ 1&4&3&5 \\ -4&9&-2&9 \end{bmatrix} . \]
In order to determine its row space and column space, we apply the Gauss–Jordan algorithm to find its reduced row echelon form:
A = {{3, 4, -2, -1}, {5, 1, -3, -6}, {1, 4, 3, 5}, {-4, 9, -2, 9}}
RowReduce[A]
\( \begin{bmatrix} 1&0&0&-11/13 \\ 0&1&0&53/65 \\ 0&0&1&56/65 \\ 0&0&0&0 \end{bmatrix} \)
Since this matrix has three pivots in the first three rows, the rank of A is 3. Both the kernel and the cokernel are one dimensional. The row space is spanned by the first three rows of either matrix A
\[ \begin{bmatrix} 3&4&-2&-1\end{bmatrix} , \quad \begin{bmatrix} 5&1&-3&-6 \end{bmatrix} , \quad \begin{bmatrix}1&4&3&5 \end{bmatrix} \]
or its Gauss--Jordan form
\[ \begin{bmatrix} 1&0&0&-11/13 \end{bmatrix} , \quad \begin{bmatrix} 0&1&0&53/65 \end{bmatrix} , \quad \begin{bmatrix} 0&0&1&56/65 \end{bmatrix} . \]
The column space is spanned by the three columns of A corresponding to the pivots:
\[ \begin{bmatrix} 3 \\ 5 \\ 1 \\ -4 \end{bmatrix} , \quad \begin{bmatrix} 4 \\ 1 \\ 4 \\ 9 \end{bmatrix} , \quad \begin{bmatrix} -2 \\ -3 \\ 3 \\ -2 \end{bmatrix} . \]
The easiest way to determine the basis of the kernel is to use the standard Mathematica command:
NullSpace[A]
{{55, -53, -56, 65}}
This vector is orthogonal to each basis vector of the row space. To verify this, we calculate its dot products with the three rows of A and with the rows of its reduced form (the latter scaled by 13, 65, and 65 to clear denominators):
n = {55, -53, -56, 65}
n.{3,4,-2,-1}
0
n.{5, 1, -3, -6}
0
n.{1,4,3,5}
0
n.{13, 0, 0, -11}
0
n.{0, 65, 0, 53}
0
n.{0, 0, 65, 56}
0
Multiplying A from the right by the upper triangular matrix E shown below, we reduce the given matrix to lower triangular form (this is equivalent to performing elementary column operations):
\[ {\bf A}\,{\bf E} = \begin{bmatrix}3&4&-2&-1 \\ 5&1&-3&-6 \\ 1&4&3&5 \\ -4&9&-2&9 \end{bmatrix} \begin{bmatrix} 1&-\frac{4}{3} & \frac{10}{17}&\frac{11}{13} \\ 0&1&\frac{1}{17}&-\frac{53}{65} \\ 0&0&1&-\frac{56}{65} \\ 0&0&0&1 \end{bmatrix} = \begin{bmatrix} 3&0&0&0 \\ 5& -\frac{17}{3}&0&0 \\ 1&\frac{8}{3}& \frac{65}{17}&0 \\ -4&\frac{43}{3}&-\frac{65}{17}&0 \end{bmatrix}. \]
Multiplying both sides of the latter from the right by the inverse matrix E-1, we get the LU-decomposition of A:
\[ {\bf A} = {\bf L} \,{\bf U} = \begin{bmatrix} 3&0&0&0 \\ 5& -\frac{17}{3}&0&0 \\ 1&\frac{8}{3}& \frac{65}{17}&0 \\ - 4&\frac{43}{3}&-\frac{65}{17}&0 \end{bmatrix} \begin{bmatrix} 1 &\frac{4}{3} & -\frac{2}{3}&-\frac{1}{3} \\ 0&1&-\frac{1}{17}&\frac{13}{17} \\ 0&0&1&\frac{56}{65} \\ 0&0&0&1 \end{bmatrix} . \]
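As a quick sanity check, this factorization can be verified with Mathematica (the factors are stored here in the auxiliary variables LL and UU):
LL = {{3, 0, 0, 0}, {5, -17/3, 0, 0}, {1, 8/3, 65/17, 0}, {-4, 43/3, -65/17, 0}};
UU = {{1, 4/3, -2/3, -1/3}, {0, 1, -1/17, 13/17}, {0, 0, 1, 56/65}, {0, 0, 0, 1}};
LL.UU == A
True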
The cokernel is generated by the vector
NullSpace[Transpose[A]]
{{-4, 3, 1, 1}}
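Before drawing conclusions, we can check directly that this vector annihilates each column of A (the same kind of verification as was done above for the row space):
c = {-4, 3, 1, 1};
c.{3, 5, 1, -4}
0
c.{4, 1, 4, 9}
0
c.{-2, -3, 3, -2}
0
c.{-1, -6, 5, 9}
0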
Note that this vector is orthogonal (its dot product with each of them is zero) to every vector from the column space, while the basis vector [ 55, -53, -56, 65 ] of the kernel is orthogonal to every vector from the row space.    ■
End of Example 1
   
Example 2: Consider the matrix
\[ {\bf A} = \begin{bmatrix}2&3&1&-2&9 \\ 3&-2&-3&21&-13 \\ -2&-5&2&5&-4 \\ -4&-6&5&-3&3 \end{bmatrix} . \]
In order to determine its row space and column space, we apply the Gauss–Jordan algorithm to find its reduced row echelon form:
A = {{2, 3, 1, -2, 9}, {3, -2, -3, 21, -13}, {-2, -5, 2, 5, -4}, {-4, -6, 5, -3, 3}}
RowReduce[A]
\( \begin{bmatrix} 1&0&0&4&0 \\ 0&1&0&-3&2 \\ 0&0&1&-1&3 \\ 0&0&0&0&0 \end{bmatrix} \)
Since the latter has three pivots, the rank of matrix A is 3. The nullspace has dimension 5 - 3 = 2, and the cokernel is spanned by one vector because 4 - 3 = 1. The row space is spanned by the first three rows of either matrix A
\[ \begin{bmatrix}2&3&1&-2&9 \end{bmatrix} , \quad \begin{bmatrix} 3&-2&-3&21&-13 \end{bmatrix} , \quad \begin{bmatrix} -2&-5&2&5&-4 \end{bmatrix} , \]
or the first three rows of its Gauss-Jordan form:
\[ \begin{bmatrix}1&0&0&4&0 \end{bmatrix} , \quad \begin{bmatrix} 0&1&0&-3&2 \end{bmatrix} , \quad \begin{bmatrix} 0&0&1&-1&3 \end{bmatrix} . \]
In order to determine bases for the kernel and the cokernel, we find an LU-decomposition of A by elementary row operations. Multiplying matrix A from the left by the lower triangular matrix E1, we get
\[ {\bf E}_1 = \begin{bmatrix} 1&0&0&0 \\ -3/2&1&0&0 \\ 1&0&1&0 \\ 2&0&0&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf E}_1 {\bf A} = \begin{bmatrix}2&3&1&-2&9 \\ 0&-\frac{13}{2}&-\frac{9}{2}&24&-\frac{53}{2} \\ 0&-2&3&3&5 \\ 0&0&7&-7&21 \end{bmatrix} . \]
Next, we multiply from left by E2:
\[ {\bf E}_2 = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&-\frac{4}{13}&1&0 \\ 0&0&0&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf E}_2 {\bf E}_1 {\bf A} = \begin{bmatrix} 2&3&1&-2&9 \\ 0&-\frac{13}{2}&-\frac{9}{2}&24&-\frac{53}{2} \\ 0&0&\frac{57}{13}& - \frac{57}{13}& \frac{171}{13} \\ 0&0&7&-7&21 \end{bmatrix} . \]
The last multiplication by E3 yields
\[ {\bf E}_3 = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&-\frac{91}{57}&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf E}_3 {\bf E}_2 {\bf E}_1 {\bf A} = \begin{bmatrix} 2&3&1&-2&9 \\ 0&-\frac{13}{2}&-\frac{9}{2}&24&-\frac{53}{2} \\ 0&0&\frac{57}{13}& - \frac{57}{13}& \frac{171}{13} \\ 0&0&0&0&0 \end{bmatrix}. \]
Multiplying all the matrices, we obtain the lower triangular matrix
\[ {\bf E} = {\bf E}_3 {\bf E}_2 {\bf E}_1 = \begin{bmatrix} 1&0&0&0 \\ -3/2&1&0&0 \\ 19/13&-4/13&1&0 \\ -1/3&28/57&-91/57&1 \end{bmatrix} , \]
Finally, we get
\[ {\bf E}\, {\bf A} = {\bf U} , \]
where
EE = {{1, 0, 0, 0}, {-(3/2), 1, 0, 0}, {19/13, -(4/13), 1, 0}, {-(1/3), 28/57, -(91/57), 1}}
(EE.A) // MatrixForm
\( \begin{bmatrix} 2&3&1&-2&9 \\ 0&-\frac{13}{2}& -\frac{9}{2}&24&-\frac{53}{2} \\ 0&0&\frac{57}{13}& -\frac{57}{13}&\frac{171}{13} \\ 0&0&0&0&0 \end{bmatrix} \)
Multiplying from the left by the inverse of E, we obtain the LU-decomposition:
\[ {\bf A} = {\bf L}\,{\bf U} = \begin{bmatrix} 1&0&0&0 \\ \frac{3}{2}&1&0&0 \\ -1&\frac{4}{13}&1&0 \\ -2&0&\frac{91}{57}&1 \end{bmatrix} \begin{bmatrix} 2&3&1&-2&9 \\ 0&-\frac{13}{2}& -\frac{9}{2}&24&-\frac{53}{2} \\ 0&0&\frac{57}{13}& -\frac{57}{13}&\frac{171}{13} \\ 0&0&0&0&0 \end{bmatrix} , \]
where L = E-1.
U = EE.A;
Inverse[EE].U == A    (* recovers the original matrix *)
True
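Although this example stops at the factorization, the bases of the kernel and the cokernel can be confirmed directly with the built-in commands; up to ordering and scaling of the basis vectors, one expects
NullSpace[A]
{{0, -2, -3, 0, 1}, {-4, 3, 1, 1, 0}}
NullSpace[Transpose[A]]
{{-19, 28, -91, 57}}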
In \( \mathbb{R}^n , \) the vectors \( e_1 = [1,0,0,\ldots , 0] , \quad e_2 =[0,1,0,\ldots , 0], \quad \ldots , \quad e_n =[0,0,\ldots , 0,1] \) form a basis for the n-dimensional real space, called the standard basis. Its dimension is n.    ■
End of Example 2
We now review the main components used in the fundamental theorem, which includes each of the four subspaces associated with a matrix A ∈ 𝔽m×n.

 

Row Spaces


 

Column Spaces


 

Nullspaces


Many different matrices have the same nullspace. So if we want, for instance, to describe a subspace S as the nullspace of a matrix A, there will always be (infinitely) many correct answers. Indeed, elementary row operations don’t change the row-space or the nullspace of a matrix, so once we have S = NullSpace(A), we also have S = NullSpace(B) for every matrix B obtainable by applying elementary row operations to A.

Similarly, if we start with S = NullSpace(A), convert to the explicit description row(HT) = NullSpace(A), and then convert back by finding the homogeneous generators of HT and making them the rows of a new matrix B, we do not expect to recover the original matrix A: while A and B will have the same nullspace, they will not generally be the same matrix. Likewise, if we start with S = row(A), convert to the implicit description row(A) = NullSpace(HT), and then convert back to explicit form by making the nullspace generators of HT the rows of a new matrix B, we will have row(B) = row(A), even though A ≠ B in general.

   
Example 3: We consider the 2 × 4 matrix \[ {\bf A} = \begin{bmatrix} 1&\phantom{-}2&3&\phantom{-}4 \\ 1 & -2 & 3 & -4 \end{bmatrix} . \] Suppose that the subspace S is generated by the two rows of matrix A: \[ S = \mbox{row}({\bf A}) \subseteq \mathbb{R}^{1\times 4} \cong \mathbb{R}^{4} . \] Using row operations, we convert A to \[ {\bf A} \,\sim \,\begin{bmatrix} 1&0&3&0 \\ 0 & 1 & 0 & 2 \end{bmatrix} . \]
A = {{1, 2, 3, 4}, {1, -2, 3, -4}};
RowReduce[A]
{{1, 0, 3, 0}, {0, 1, 0, 2}}
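The implicit description of S below uses the nullspace generators of A, which Mathematica computes directly (the ordering of the two generators may differ):
NullSpace[A]
{{0, -2, 0, 1}, {-3, 0, 1, 0}}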
Then \[ S = \mbox{NullSpace}\left({\bf H}^{\mathrm T}\right) , \qquad {\bf H}^{\mathrm T} = \begin{bmatrix} -3&0&1&0 \\ 0 & -2 & 0 & 1 \end{bmatrix} , \] because the rows of HT are the nullspace generators of A, and row(A) consists precisely of the vectors annihilated by them. If we now convert back to explicit form in the usual way, by computing the homogeneous generators of this last matrix, namely (1, 0, 3, 0) and (0, 1, 0, 2), we get \[ {\bf B} = \begin{bmatrix} 1&0&3&0 \\ 0 & 1 & 0 & 2 \end{bmatrix} , \] so that row(B) = row(A) = S, even though B ≠ A.    ■
End of Example 3

 

Cokernel


 

   
Example 3: Let us consider the following 3×5 matrix followed by its LU-decomposition:
\[ {\bf A} = \begin{bmatrix}1&-2&0&6&7 \\ 2&-4&1&10&17 \\ -4&8&3&-30&-19 \end{bmatrix} = \begin{bmatrix}1&0&0 \\ 2&1&0 \\ -4&3&1 \end{bmatrix} \begin{bmatrix} 1&-2&0&6&7 \\ 0&0&1&-2&3 \\ 0&0&0&0&0 \end{bmatrix} = {\bf L}\,{\bf U}. \]
The first factor L is the inverse of the matrix E (a product of elementary matrices) that reduces the given 3×5 matrix to the factor U above (which is its reduced row echelon form):
\[ {\bf E} = {\bf L}^{-1} = \begin{bmatrix}1&0&0 \\ 2&1&0 \\ -4&3&1 \end{bmatrix}^{-1} = \begin{bmatrix}1&0&0 \\ -2&1&0 \\ 10&-3&1 \end{bmatrix} . \]
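A quick Mathematica verification of this factorization (the auxiliary names LL, UU, and EE are used because E is a reserved symbol in Mathematica):
A = {{1, -2, 0, 6, 7}, {2, -4, 1, 10, 17}, {-4, 8, 3, -30, -19}};
LL = {{1, 0, 0}, {2, 1, 0}, {-4, 3, 1}};
UU = {{1, -2, 0, 6, 7}, {0, 0, 1, -2, 3}, {0, 0, 0, 0, 0}};
EE = Inverse[LL];
{LL.UU == A, EE.A == UU}
{True, True}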
A first easy observation is that matrix A has two pivots, so its rank is 2. The dimension of the kernel is 5 - 2 = 3, and the dimension of the cokernel is 3 - 2 = 1. Both the row space and the column space are two dimensional.

The row space of matrix A is spanned by its first two rows, or by the rows [1, -2, 0, 6, 7] and [0, 0, 1, -2, 3] of its upper trapezoidal factor U.

The column space has a basis consisting of the two columns of A corresponding to the pivots in positions 1 and 3:

\[ \begin{bmatrix}1 \\ 2 \\ -4 \end{bmatrix} \qquad\mbox{and} \qquad \begin{bmatrix}0 \\ 1 \\ 3 \end{bmatrix} . \]

The basis for the cokernel of A can be read from the last row of matrix E: [ 10, -3, 1 ]. Indeed, the cokernel of A is the nullspace of the transposed matrix AT:

A = {{1, -2, 0, 6, 7}, {2, -4, 1, 10, 17}, {-4, 8, 3, -30, -19}}
NullSpace[Transpose[A]]
{{10, -3, 1}}
This vector is orthogonal to both [ 1, 2, -4 ] and [ 0, 1, 3 ], the basis vectors of the column space.

To find its nullspace, we have to solve the system of equations

\[ \begin{split} x_1 &= 2\,x_2 -6\, x_4 -7\,x_5 , \\ x_3 &= 2\, x_4 -3\, x_5 , \end{split} \]
with respect to leading variables x1 and x3, while x2, x4, and x5 are free variables. Therefore, the solution of the linear system of equations Ax = 0 yields
\[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} 2\,x_2 -6\, x_4 -7\,x_5 \\ x_2 \\ 2\, x_4 -3\, x_5 \\ x_4 \\ x_5 \end{bmatrix} = x_2 \begin{bmatrix} 2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} -6 \\ 0 \\ 2 \\ 1 \\ 0 \end{bmatrix} + x_5 \begin{bmatrix} -7 \\ 0 \\ -3 \\ 0 \\ 1 \end{bmatrix} = x_2 {\bf v}_2 + x_4 {\bf v}_4 + x_5 {\bf v}_5 . \]
So we conclude that the three-dimensional nullspace of matrix A is spanned by the three vectors v2, v4, and v5. We check the answer with Mathematica:
A = {{1, -2, 0, 6, 7}, {2, -4, 1, 10, 17}, {-4, 8, 3, -30, -19}}
NullSpace[A]
{{-7, 0, -3, 0, 1}, {-6, 0, 2, 1, 0}, {2, 1, 0, 0, 0}}
A third option for determining a basis of the kernel is to use elementary column operations to obtain an LU-factorization. Indeed, multiplying the given matrix A from the right by
\[ {\bf E}_1 = \begin{bmatrix} 1&2&0&-6&-7 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1\end{bmatrix} \]
we obtain
E1 = {{1,2,0,-6,-7}, {0,1,0,0,0}, {0,0,1,0,0}, {0,0,0,1,0}, {0,0,0,0,1}}
(A.E1) // MatrixForm
\( \begin{bmatrix} 1&0&0&0&0 \\ 2&0&1&-2&3 \\ -4&0&3&-6&9 \end{bmatrix} \)
Then we multiply from right by
\[ {\bf E}_2 = \begin{bmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&2&-3 \\ 0&0&0&1&0 \\ 0&0&0&0&1\end{bmatrix} \]
This yields
E2 = {{1, 0, 0, 0, 0}, {0, 1, 0, 0, 0}, {0, 0, 1, 2, -3}, {0, 0, 0, 1, 0}, {0, 0, 0, 0, 1}}
(A.E1.E2) // MatrixForm
\( \begin{bmatrix} 1&0&0&0&0 \\ 2&0&1&0&0 \\ -4&0&3&0&0 \end{bmatrix} \)
Finally, taking the inverse of the product E1E2 and denoting by L the lower trapezoidal matrix A E1E2 obtained above, we arrive at
\[ {\bf A} = {\bf L}\, {\bf U} = {\bf L} \left( {\bf E}_1 {\bf E}_2 \right)^{-1} = \begin{bmatrix} 1&0&0&0&0 \\ 2&0&1&0&0 \\ -4&0&3&0&0 \end{bmatrix} \begin{bmatrix} 1&-2&0&6&7 \\ 0&1&0&0&0 \\ 0&0&1&-2&3 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \end{bmatrix} . \]
Since the pivot (nonzero) columns of the column-reduced matrix A E1E2 are the first and the third ones, its second, fourth, and fifth columns are zero. Hence the corresponding columns of the product
\[ {\bf E}_1 {\bf E}_2 = \begin{bmatrix} 1&2&0&-6&-7 \\ 0&1&0&0&0 \\ 0&0&1&2&-3 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \end{bmatrix} \]
belong to the kernel of A. Taking their negatives (any nonzero scalar multiple of a kernel vector is again a kernel vector), we read a basis of the kernel as
\[ \begin{bmatrix} -2 \\ -1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 6 \\ 0 \\ -2 \\ -1 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 7 \\ 0 \\ 3 \\ 0 \\ -1 \end{bmatrix}, \]
which generate the nullspace of A. Each of these three vectors is orthogonal (its dot product is zero) to every row vector of matrix A.    ■
End of Example 3
   
Example 4: We reconsider the previous example with matrix
\[ {\bf A} = \begin{bmatrix}1&-2&0&6&7 \\ 2&-4&1&10&17 \\ -4&8&3&-30&-19 \end{bmatrix} . \]
First, we obtain its LU-factorization using elementary column operations. Multiplying A from the right by a 5×5 matrix E (a product of elementary matrices), we reduce the given matrix to lower trapezoidal form:
\[ {\bf A}\, {\bf E} = \begin{bmatrix}1&-2&0&6&7 \\ 2&-4&1&10&17 \\ -4&8&3&-30&-19 \end{bmatrix} \begin{bmatrix}1&2&0&-6&-7 \\ 0&1&0&0&0 \\ 0&0&1&2&-3 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \end{bmatrix} = {\bf L} = \begin{bmatrix} 1&0&0&0&0 \\ 2&0&1&0&0 \\ -4&0&3&0&0 \end{bmatrix} . \]
Multiplying the last equation from the right by U = E-1, we obtain the LU-decomposition:
\[ {\bf A} = {\bf L}\, {\bf E}^{-1} = {\bf L}\, {\bf U} = \begin{bmatrix} 1&0&0&0&0 \\ 2&0&1&0&0 \\ -4&0&3&0&0 \end{bmatrix} \begin{bmatrix} 1&-2&0&6&7 \\ 0&1&0&0&0 \\ 0&0&1&-2&3 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \end{bmatrix} . \]
A = {{1, -2, 0, 6, 7}, {2, -4, 1, 10, 17}, {-4, 8, 3, -30, -19}}
E1 = {{1, 2, 0, -6, -7}, {0, 1, 0, 0, 0}, {0, 0, 1, 2, -3}, {0, 0, 0, 1, 0}, {0, 0, 0, 0, 1}}
(A.E1) // MatrixForm
(Inverse[E1]) // MatrixForm
The matrix L contains all the information about pivots: there are two of them, in rows 1 and 2. So the rank of matrix A is 2. Knowing this number allows one to determine the dimensions of the kernel (5 - 2 = 3) and of the cokernel (3 - 2 = 1).

Since we used elementary column operations, columns 1 and 3 of matrix L provide a basis of the column space:

\[ \begin{bmatrix}1 \\ 2 \\ -4 \end{bmatrix} \qquad\mbox{and} \qquad \begin{bmatrix}0 \\ 1 \\ 3 \end{bmatrix} . \]
A basis of the row space consists of the first two rows of matrix A.

The basis for the kernel of A is obtained by reading the second, fourth, and fifth columns (those corresponding to the free variables) of matrix E:

\[ \begin{bmatrix} 2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} , \qquad \begin{bmatrix} -6 \\ 0 \\ 2 \\ 1 \\ 0 \end{bmatrix} , \qquad \begin{bmatrix} -7 \\ 0 \\ -3 \\ 0 \\ 1 \end{bmatrix} . \]
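As a quick check (using the matrix A defined above), each of these vectors is indeed annihilated by A:
A.{2, 1, 0, 0, 0}
{0, 0, 0}
A.{-6, 0, 2, 1, 0}
{0, 0, 0}
A.{-7, 0, -3, 0, 1}
{0, 0, 0}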
To find the cokernel, we have to solve the system of equations ATy = 0, which leads to
\[ {\bf A}^{\mathrm T} {\bf y} = {\bf 0} \qquad \Longleftrightarrow \qquad {\bf U}^{\mathrm T} {\bf L}^{\mathrm T} {\bf y} = {\bf 0} . \]
Since U is a nonsingular matrix (it is upper triangular with all diagonal entries equal to one), the system is equivalent to
\[ {\bf L}^{\mathrm T} {\bf y} = {\bf 0}\qquad \Longrightarrow \qquad \begin{split} y_1 + 2\, y_2 -4\, y_3 &=0 , \\ y_2 + 3\, y_3 &=0 . \end{split} \]
Solving the latter, we obtain a nontrivial vector that spans the cokernel:
\[ \left[ 10 , -3, 1 \right]^{\mathrm T} . \]
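This agrees with a direct check in Mathematica (with A as defined above):
Transpose[A].{10, -3, 1}
{0, 0, 0, 0, 0}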
   ■
End of Example 4

 

   
Example 5: The infinite set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n , \ldots \right\} \) forms a basis of the vector space of all polynomials.    ■
End of Example 5

Fundamental theorem of Linear Algebra: The nullspace is the orthogonal complement of the row space (in 𝔽n).
The cokernel is the orthogonal complement of the column space (in 𝔽m).
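These orthogonality relations are easy to verify numerically. The following Mathematica sketch uses a small singular matrix chosen only for illustration; each product below collects the dot products of the kernel (respectively, cokernel) basis vectors with the rows (respectively, columns) of A, and all of them vanish:
A = {{1, 2, 3}, {2, 4, 6}, {1, 1, 1}};
NullSpace[A].Transpose[A]
{{0, 0, 0}}
NullSpace[Transpose[A]].A
{{0, 0, 0}}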

   

 

 
