Elementary Row Operations

A system of linear equations written in standard form is not suitable for direct manipulation by computer solvers. At the same time, we observe that elementary reduction operations involve only the coefficients of the unknowns and the right-hand sides of the linear equations, but not the variables themselves. Therefore, the unknowns are dummy variables and can be replaced by any symbols---it does not matter how we denote them. Hence, the system is completely identified by the matrix A = [𝑎i,j] of its coefficients and the constant vector b. So a matrix that combines the coefficient matrix A and the right-hand vector b can serve as a device for representing and solving a system of equations.

The augmented matrix of the m × n system A x = b is obtained by taking the m × n coefficient matrix A and adjoining b as an additional (n + 1)st column. We denote the augmented matrix by [A | b] or just [A b].
Lazy people like me do not write a vertical line separating the coefficient matrix A from the appended vector or matrix. The augmented matrix contains all the information we need to reconstruct the original system.
Example 7: Write the augmented matrix for the given system of equations. \begin{align*} 2\,y - 3\, z &= 5, \\ 4\,x - 3\, y + 2\, z &= 7, \\ 2\, x + 8\, y + 4\,z &= 3 . \end{align*} The augmented matrix displays the coefficients of the variables and an additional column for the constants: \[ \left[ \begin{array}{ccc|c} 0&2&-3&5 \\ 4&-3&2&7 \\ 2&8&4&3 \end{array} \right] . \]

Suppose you are given an augmented matrix without a separating vertical line: \[ \begin{bmatrix} 3&0&-2&5&6 \\ 0&4&9&8&4 \\ 1&2&0&-3&7 \end{bmatrix} . \] In order to restore the system of linear equations, we need to identify the unknown variables. We denote them x₁, x₂, x₃, x₄ because each row has five entries, the last of which is the constant term. Then we attach these variables to the column entries of the given augmented matrix to obtain the required system of equations: \begin{align*} 3\,x_1 - 2\,x_3 + 5\,x_4 &= 6 , \\ 4\,x_2 + 9\, x_3 + 8\, x_4 &= 4 , \\ x_1 + 2\, x_2 - 3\, x_4 &= 7 . \end{align*}

End of Example 7
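The worked examples in this section use Mathematica; for readers following along without it, the same bookkeeping can be sketched in plain Python (the helper name `augmented` is our own, not a library function):

```python
def augmented(A, b):
    """Adjoin the constant vector b to the coefficient matrix A as a last column."""
    return [row + [rhs] for row, rhs in zip(A, b)]

# Example 7:  2y - 3z = 5,  4x - 3y + 2z = 7,  2x + 8y + 4z = 3
A = [[0, 2, -3], [4, -3, 2], [2, 8, 4]]
b = [5, 7, 3]
M = augmented(A, b)
print(M)  # [[0, 2, -3, 5], [4, -3, 2, 7], [2, 8, 4, 3]]
```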
Maxime Bôcher
The first known use of augmented matrices appeared around 200 B.C. in a Chinese manuscript entitled Nine Chapters on the Mathematical Art. In this manuscript, the augmented matrix was written in dual (transposed) form; hence, elementary operations were performed on columns rather than on rows, as we do now.

The term augmented matrix was introduced by the American mathematician Maxime Bôcher (1867--1918) in his book Introduction to Higher Algebra, published in 1907. Bôcher was an outstanding expositor of mathematics whose elementary textbooks were greatly appreciated by students. His achievements were documented by William F. Osgood (1919). In addition to being an outstanding research mathematician and lecturer, Maxime Bôcher was an expert in Latin, chemistry, philosophy, zoology, geography, and music.

We have seen the elementary operations for solving systems of linear equations. If we choose to work with augmented matrices instead, the elementary operations for systems of equations should be translated to the following elementary row operations for matrices:

An elementary row operation on a matrix M is any one of the following three ways of modifying M to produce a new equivalent matrix.
  1. Interchanging the positions of two rows. We abbreviate it as Ri ↕ Rj.
  2. Multiplying one row of the matrix by a nonzero scalar. This operation is denoted as Ri ← c·Ri for row i.
  3. Adding a constant multiple of a row to another row. The following notation is used: Ri ← Ri + c·Rj.
Note that it is also safe to swap any two columns of the coefficient matrix, but you must match the swap with the corresponding change in the order of the entries of the unknown vector.
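The three operations translate directly into code. A minimal Python sketch (the function names are our own; each returns a new matrix and leaves its argument untouched):

```python
def swap_rows(M, i, j):
    # R_i <-> R_j
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

def scale_row(M, i, c):
    # R_i <- c * R_i  (c must be nonzero, otherwise the operation is not reversible)
    assert c != 0
    M = [row[:] for row in M]
    M[i] = [c * x for x in M[i]]
    return M

def add_multiple(M, i, j, c):
    # R_i <- R_i + c * R_j
    M = [row[:] for row in M]
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]
    return M

M = [[1, 2], [3, 4]]
print(add_multiple(M, 1, 0, -3))  # [[1, 2], [0, -2]]
```

Each operation is invertible (a swap undoes itself, scaling is undone by scaling with 1/c, and adding c·Rj is undone by adding −c·Rj), which is why row-equivalent matrices carry exactly the same information.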

Example 8: We are going to demonstrate how solving the system by using elementary operations corresponds to transforming the augmented matrix using elementary row operations. At the beginning, the system and the corresponding augmented matrix are:
\( \displaystyle \begin{split} 2\,x_2 - x_3 &= \phantom{-}5, \\ 3\, x_1 - \phantom{5}x_2 + 4\, x_3 &= -3, \\ 2\, x_1 -5\, x_2 - 3\, x_3 &= -5. \end{split} \)    ⇒    \( \displaystyle \left[ \begin{array}{ccc|c} 0&\phantom{-}2&-1&\phantom{-}5 \\ 3&-1&\phantom{-}4&-3 \\ 2&-5&-3&-5 \end{array} \right] \)
Multiplying the first equation by ½ corresponds to multiplying the first row of the augmented matrix by ½, which is notated by R₁ ← ½·R₁. These operations result in:
\( \displaystyle \begin{split} x_2 - \frac{1}{2}\,x_3 &= \frac{5}{2} , \\ 3\, x_1 - x_2 + 4\, x_3 &= -3, \\ 2\, x_1 -5\, x_2 - 3\, x_3 &= -5. \end{split} \)    ⇒    \( \displaystyle \left[ \begin{array}{ccc|c} 0&\phantom{-}1 &-\frac{1}{2}& \phantom{-}\frac{5}{2} \\ 3&-1&\phantom{-}4&-3 \\ 2&-5&-3&-5 \end{array} \right] \)
Then, we add the first equation to the second equation. This corresponds to adding the first row of the augmented matrix to the second row of the matrix, which is abbreviated by R₂ ← R₂ + R₁. We obtain
\( \displaystyle \begin{split} x_2 - \frac{1}{2}\,x_3 &= \phantom{-}\frac{5}{2} , \\ 3\, x_1 + \frac{7}{2}\, x_3 &= -\frac{1}{2}, \\ 2\, x_1 -5\, x_2 - 3\, x_3 &= -5. \end{split} \)    ⇒    \( \displaystyle \left[ \begin{array}{ccc|c} 0&\phantom{-}1 &-\frac{1}{2}& \phantom{-}\frac{5}{2} \\ 3&\phantom{-}0&\frac{7}{2}&-\frac{1}{2} \\ 2&-5&-3&-5 \end{array} \right] \)
Next, we add 5 times the first equation to the third equation. This corresponds to adding 5 times the first row of the matrix to the third row of the matrix, which is notated by R₃ ← R₃ + (5)·R₁ or more simply R₃ ← R₃ + 5·R₁. The result is
\( \displaystyle \begin{split} \fbox{1}\,x_2 - \frac{1}{2}\,x_3 &= \phantom{-}\frac{5}{2} , \\ 3\, x_1 + \frac{7}{2}\, x_3 &= -\frac{1}{2}, \\ 2\, x_1 - \frac{11}{2}\, x_3 &= \frac{15}{2}. \end{split} \)    ⇒    \( \displaystyle \left[ \begin{array}{ccc|c} 0&\fbox{1} &-\frac{1}{2}& \phantom{-}\frac{5}{2} \\ 3&0&\frac{7}{2}&-\frac{1}{2} \\ 2&0&-\frac{11}{2}&\frac{15}{2} \end{array} \right] \)
With this operation, we converted the leading term in position (1, 2) into a pivot because all entries below it are zeroes. We mark the pivot by enclosing it in a box.

We multiply the second equation by 2 and the third equation by (−3). This yields R₂ ← 2·R₂, R₃ ← (−3)·R₃:

\( \displaystyle \begin{split} \fbox{1}\,x_2 - \frac{1}{2}\,x_3 &= \frac{5}{2} , \\ 6\, x_1 + 7\, x_3 &= -1, \\ -6\, x_1 + \frac{33}{2}\, x_3 &= -\frac{45}{2}. \end{split} \)    ⇒    \( \displaystyle \left[ \begin{array}{ccc|c} 0&\fbox{1} &-\frac{1}{2}& \frac{5}{2} \\ 6&0&7&-1 \\ -6&0&\frac{33}{2}&-\frac{45}{2} \end{array} \right] \)
Upon adding row 2 to row 3, denoted by R₃ ← R₃ + R₂, we get
\( \displaystyle \begin{split} \fbox{1}\,x_2 - \frac{1}{2}\,x_3 &= \frac{5}{2} , \\ \fbox{6}\, x_1 + 7\, x_3 &= -1, \\ \frac{47}{2}\, x_3 &= -\frac{47}{2}. \end{split} \)    ⇒    \( \displaystyle \left[ \begin{array}{ccc|c} 0&\fbox{1} &-\frac{1}{2}& \frac{5}{2} \\ \fbox{6}&0&7&-1 \\ 0&0&\frac{47}{2}&-\frac{47}{2} \end{array} \right] \)
Since the leading term (6) at position (2, 1) has all zeroes below it, we consider it a pivot. It is convenient (for humans, not computers) to make leading terms equal to one, so we divide the second row by 6 and the third row by 47/2; hence
\( \displaystyle \begin{split} \fbox{1}\,x_2 - \frac{1}{2}\,x_3 &= \frac{5}{2} , \\ \fbox{1}\, x_1 + \frac{7}{6}\, x_3 &= -\frac{1}{6}, \\ x_3 &= -1. \end{split} \)    ⇒    \( \displaystyle \left[ \begin{array}{ccc|c} 0&\fbox{1} &-\frac{1}{2}& \frac{5}{2} \\ \fbox{1}&0&\frac{7}{6}&-\frac{1}{6} \\ 0&0&\fbox{1}&-1 \end{array} \right] \)
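The whole chain of operations above can be replayed with exact rational arithmetic; a minimal Python sketch using the standard library's `fractions` module:

```python
from fractions import Fraction as F

M = [[F(0), F(2), F(-1), F(5)],
     [F(3), F(-1), F(4), F(-3)],
     [F(2), F(-5), F(-3), F(-5)]]

M[0] = [F(1, 2) * x for x in M[0]]              # R1 <- (1/2) R1
M[1] = [a + b for a, b in zip(M[1], M[0])]      # R2 <- R2 + R1
M[2] = [a + 5 * b for a, b in zip(M[2], M[0])]  # R3 <- R3 + 5 R1
M[1] = [2 * x for x in M[1]]                    # R2 <- 2 R2
M[2] = [-3 * x for x in M[2]]                   # R3 <- (-3) R3
M[2] = [a + b for a, b in zip(M[2], M[1])]      # R3 <- R3 + R2
M[1] = [x / 6 for x in M[1]]                    # R2 <- (1/6) R2
M[2] = [x * F(2, 47) for x in M[2]]             # R3 <- (2/47) R3

# M[2] now equals [0, 0, 1, -1], i.e. x3 = -1
print(M)
```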
End of Example 8
   Recall that two linear systems are said to be equivalent if they have the same solution set. The idea of equivalence for systems does not have an obvious analog for matrices. We extend this definition to augmented matrices by adopting a different approach.

Two rectangular matrices of the same dimensions are said to be row equivalent, denoted by the symbol \( \displaystyle \underset{R}{\sim} \) placed between the two matrices, if there exists a sequence of elementary row operations that transforms one matrix into the other.

Therefore, the elementary operations for systems of linear equations carry over naturally to arbitrary rectangular matrices from 𝔽m×n and then to augmented matrices. The next result shows that the concept of row equivalence is a useful one.

Theorem 3: Let S₁ and S₂ be two systems of m equations in n unknowns with augmented coefficient matrices M₁ and M₂, respectively. If matrix M₁ is row equivalent to matrix M₂, then system S₁ is equivalent to system S₂.

Let S₁ and S₂ be two systems of m equations in n unknowns with augmented coefficient matrices M₁ and M₂, respectively. Suppose that M₁ and M₂ are row equivalent, and further suppose that there is a single elementary row operation which, when applied to M₁, produces M₂. Then, the corresponding elementary operation, when applied to S₁, will produce the system S₂. By Theorem 2, however, because S₂ is obtained from S₁ by a single elementary operation, the systems S₁ and S₂ are equivalent. So, if the sequence of operations that act on M₁ to produce M₂ is 1 operation long, then S₁ is equivalent to S₂.

Next suppose that M₁ and M₂ are row equivalent, and that the sequence of elementary row operations that produces M₂ from M₁ is 2 operations long. Let B₁ be the matrix obtained from M₁ by applying the first of the two elementary row operations, and let S₃ be the corresponding system of linear equations. Because the sequence of operations that act on M₁ to produce B₁ is 1 operation long, S₁ and S₃ have the same solution set by the previous argument. Furthermore, only one elementary row operation is required to produce M₂ from B₁, so by the same argument S₃ and S₂ must have the same solution set. Hence, S₁ and S₂ have the same solution set, so S₁ and S₂ are equivalent. For a sequence of k elementary row operations, the same argument can be employed using k steps. Therefore, in general, if M₁ is row equivalent to M₂, then S₁ is equivalent to S₂.

Example 9: We consider a 4 × 5 system of linear equations \[ \begin{split} x_1 + 2\, x_2 + 3\, x_3 + 4\, x_4 + 5\, x_5 &= 15 , \\ 2\,x_1 + 4\,x_2 + 12\, x_3 + 5\,x_4 + 5\,x_5 &= 5 , \\ -3\, x_1 -6\,x_2 -5\, x_3 -13\,x_4 -18\,x_5 &= -64 , \\ 4\,x_1 + 8\,x_2 + 24\,x_3 -5\,x_4 + 5\,x_5 &= 45 . \end{split} \] The corresponding augmented matrix is \[ {\bf M} = \left[ \begin{array}{ccccc|c} 1 & 2 & 3 & 4 & 5 & 15 \\ 2 & 4 & 12 & 5 & 5 & 5 \\ -3 & -6 & -5 & -13 & -18 & -64 \\ 4& 8& 24& -5& 5& 45 \end{array} \right] . \] Performing elementary row operations, we transfer matrix M to the following equivalent matrix: \[ {\bf M} \,\underset{R}{\sim}\, \begin{bmatrix} \fbox{$\,1\,$} & 2 & 3 & 4 & 5 & 15 \\ 0&0&\fbox{$\, 6\, $} & -3& -5& -25 \\ 0& 0& 4& -1& -3& -19 \\ 0& 0& 12& -21& -15& -15 \end{bmatrix} \qquad \begin{split} R_2 \leftarrow \,R_2 -2\cdot R_1 \\ R_3 \leftarrow \, R_3 + 3\cdot R_1 \\ R_4 \leftarrow \, R_4 -4\cdot R_1 \end{split} \]
M = {{1, 2, 3, 4, 5, 15}, {2, 4, 12, 5, 5, 5}, {-3, -6, -5, -13, -18, -64}, {4, 8, 24, -5, 5, 45}};
M1 = {M[[1]], M[[2]] - 2*M[[1]], M[[3]] + 3*M[[1]], M[[4]] - 4*M[[1]]}
{{1, 2, 3, 4, 5, 15}, {0, 0, 6, -3, -5, -25}, {0, 0, 4, -1, -3, -19}, {0, 0, 12, -21, -15, -15}}
Next, we eliminate entries below the pivot "6". \[ {\bf M} \,\underset{R}{\sim}\, \begin{bmatrix} \fbox{$\,1\,$} & 2 & 3 & 4 & 5 & 15 \\ 0&0&\fbox{$\, 6\,$} & -3& -5& -25 \\ 0& 0& 0& \fbox{$\,1\,$} & 1/3 & -7/3 \\ 0& 0& 0& -15& -5& 35 \end{bmatrix} \qquad \begin{split} R_3 \leftarrow \, R_3 - \left( \frac{4}{6} \right) \cdot R_2 \\ R_4 \leftarrow \,R_4 -2\cdot R_2 \end{split} \]
M2 = {M1[[1]], M1[[2]], M1[[3]] - (2/3)*M1[[2]], M1[[4]] - 2*M1[[2]]}
{{1, 2, 3, 4, 5, 15}, {0, 0, 6, -3, -5, -25}, {0, 0, 0, 1, 1/3, -(7/3)}, {0, 0, 0, -15, -5, 35}}
In order to make entries nicer, we multiply the third row by 3 and divide the last row by 5. This yields \[ {\bf M} \,\underset{R}{\sim}\, \begin{bmatrix} \fbox{$\,1\,$} & 2 & 3 & 4 & 5 & 15 \\ 0&0&\fbox{$\, 6\,$} & -3& -5& -25 \\ 0& 0& 0& \fbox{$\,3\,$} & 1 & -7 \\ 0&0&0& -3 &-1&7 \end{bmatrix} \qquad \begin{split} R_3 \leftarrow \, 3\cdot R_3 \\ R_4 \leftarrow \, \left( \frac{1}{5} \right) \cdot R_4 \end{split} \] Finally, we add the last two rows to obtain \[ {\bf M} \,\underset{R}{\sim}\, \begin{bmatrix} \fbox{$\,1\,$} & 2 & 3 & 4 & 5 & 15 \\ 0&0&\fbox{$\, 6\,$} & -3& -5& -25 \\ 0& 0& 0& \fbox{$\,3\,$} & 1 & -7 \\ 0&0&0& 0 &0&0 \end{bmatrix} \qquad \begin{split} R_4 \leftarrow \, R_4 + R_3 \end{split} \] We convert this augmented matrix into an equivalent system of equations: \[ \begin{split} x_1 + 2\, x_2 + 3\, x_3 + 4\, x_4 + 5\, x_5 &= 15 , \\ 0\,x_1 + 0\,x_2 + 6\, x_3 - 3\,x_4 - 5\,x_5 &= -25 , \\ 0\, x_1 + 0\,x_2 + 0\, x_3 + 3\,x_4 + 1\,x_5 &= -7 , \\ 0\,x_1 + 0\,x_2 + 0\,x_3 + 0\,x_4 + 0\,x_5 &= 0 . \end{split} \] Obviously, the last equation can be dropped because it does not impose any constraint on the unknown variables, so we get \[ \begin{split} x_1 + 2\, x_2 + 3\, x_3 + 4\, x_4 + 5\, x_5 &= 15 , \\ 0\,x_1 + 0\,x_2 + 6\, x_3 - 3\,x_4 - 5\,x_5 &= -25 , \\ 0\, x_1 + 0\,x_2 + 0\, x_3 + 3\,x_4 + 1\,x_5 &= -7 . \end{split} \] Based on the latter, we conclude that the given system of equations has infinitely many solutions with two free variables.
End of Example 9

Please note that the converse of Theorem 3 is false. There exist systems of equations S₁ and S₂ such that S₁ is equivalent to S₂, but the corresponding augmented coefficient matrices, M₁ and M₂, are not row equivalent. For an example of this phenomenon, let S₁ be the system

\[ \begin{cases} 1\,x + 2\,y &= 0 , \\ 0\,x + 0\,y &= 3 , \end{cases} \]
and let S₂ be the system
\[ \begin{cases} 1\,x + 3\,y &= 5 , \\ 0\,x + 0\,y &= 1 . \end{cases} \]
The solution set of each system is the empty set, so the two systems are equivalent. However, the corresponding augmented coefficient matrices \( \displaystyle \left[ \begin{array}{cc|c} 1&2&0 \\ 0&0&3 \end{array} \right] \quad \) and \( \displaystyle \quad \left[ \begin{array}{cc|c} 1&3&5 \\ 0&0&1 \end{array} \right] \quad \) are not row equivalent. Indeed, every row of a matrix obtained by elementary row operations is a linear combination of the rows of the original matrix, and (1, 3, 5) is not a linear combination of (1, 2, 0) and (0, 0, 3).

Now, rather than manipulating the equations, we can instead manipulate the rows of the augmented matrix. The main idea of row operations is to transform the augmented matrix into an equivalent matrix that contains as many zeroes as possible.

Row operations can be performed with the aid of elementary matrices---this topic will be covered in Part 2.

Theorem 4: If a finite sequence of elementary row operations reduces a square matrix A to B, then the same sequence reduces the identity matrix I to a matrix P such that P A = B. Equivalently, if a sequence of elementary row operations reduces the augmented matrix [A | I] to [B | P], then P A = B.

In the following proof, we will use the formula \[ \mbox{Row}_i \left( {\bf A} \right) = \left( \mbox{Row}_i ({\bf I}) \right) {\bf A} . \] We must show that
  1. \( \displaystyle {\bf A}_{(R_i \,\updownarrow \,R_j )} = \left( {\bf I}_{(R_i \,\updownarrow \,R_j )} \right) {\bf A} . \)
  2. \( \displaystyle {\bf A}_{(R_k \leftarrow c\cdot R_k )} = \left( {\bf I}_{(R_k \leftarrow c\cdot R_k )} \right) {\bf A} . \)
  3. \( \displaystyle {\bf A}_{\left( R_i \leftarrow R_i + c\cdot R_j \right)} = \left( {\bf I}_{\left( R_i \leftarrow R_i + c\cdot R_j \right)} \right) {\bf A} . \)

I.   We show that corresponding rows of the two matrices are the same, so the matrices are equal.

For the i-th row, we have \begin{align*} \mbox{Row}_i \left( \left( {\bf I}_{(R_i \,\updownarrow \,R_j )} \right) {\bf A} \right) &= \left( \mbox{Row}_i \left( {\bf I}_{\left( R_i \,\updownarrow \,R_j \right)} \right) \right) {\bf A} = \left( \mbox{Row}_j \left( {\bf I} \right) \right) {\bf A} \\ &= \mbox{Row}_j \left( {\bf A} \right) = \mbox{Row}_i \left( {\bf A}_{(R_i \,\updownarrow \,R_j )} \right) . \end{align*} Similarly, for j-th row, we have \begin{align*} \mbox{Row}_j \left( \left( {\bf I}_{(R_i \,\updownarrow \,R_j )} \right) {\bf A} \right) &= \left( \mbox{Row}_j \left( {\bf I}_{\left( R_i \,\updownarrow \,R_j \right)} \right) \right) {\bf A} = \left( \mbox{Row}_i \left( {\bf I} \right) \right) {\bf A} \\ &= \mbox{Row}_i \left( {\bf A} \right) = \mbox{Row}_j \left( {\bf A}_{(R_i \,\updownarrow \,R_j )} \right) . \end{align*} Thus, \( \displaystyle \left( {\bf I}_{(R_i \,\updownarrow \,R_j )} \right) {\bf A} \quad \) and \( \displaystyle \quad {\bf A}_{(R_i \,\updownarrow \,R_j )} \quad \) have equal rows and are equal.

II.   Proceeding as for type I operations, consider the j-th row, for any j ≠ i: \[ \mbox{Row}_j \left( {\bf I}_{c\cdot R_i} {\bf A} \right) = \left( \mbox{Row}_j \left( {\bf I}_{c\cdot R_i} \right) \right) {\bf A} = \left( \mbox{Row}_j \left( {\bf I} \right) \right) {\bf A} = \mbox{Row}_j ({\bf A}) =\mbox{Row}_j \left( {\bf A}_{c\cdot R_i} \right) . \] Also \[ \mbox{Row}_i \left( {\bf I}_{c\cdot R_i} {\bf A} \right) = \left( \mbox{Row}_i \left( {\bf I}_{c\cdot R_i} \right) \right) {\bf A} = \left( c \cdot \mbox{Row}_i \left( {\bf I} \right) \right) {\bf A} = c\cdot \mbox{Row}_i ({\bf A}) = \mbox{Row}_i \left( {\bf A}_{c\cdot R_i} \right) . \] Therefore, \( \displaystyle {\bf A}_{c \cdot R_i} \quad \) and \( \displaystyle \quad \left( {\bf I}_{c\cdot R_i} \right) {\bf A} \quad \) have equal rows and are equal.

III.   See section in Part 2 (Elementary matrices).
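Theorem 4 is easy to verify numerically on a small example; a Python sketch with exact fractions (the 2 × 2 matrix A below is our own illustration, not taken from the text):

```python
from fractions import Fraction as F

def add_multiple(M, i, j, c):
    # R_i <- R_i + c * R_j  (returns a new matrix)
    M = [row[:] for row in M]
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]
    return M

def matmul(P, A):
    # plain matrix product P A
    return [[sum(P[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(P))]

A = [[F(2), F(1)], [F(4), F(5)]]
I = [[F(1), F(0)], [F(0), F(1)]]

# apply the same elementary row operation to A and to I
B = add_multiple(A, 1, 0, F(-2))  # R2 <- R2 - 2 R1
P = add_multiple(I, 1, 0, F(-2))

assert matmul(P, A) == B          # Theorem 4: P A = B
```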

Example 10: Let us consider a 4-by-4 matrix \[ {\bf A} = \begin{bmatrix} 1& -1& 2& 3 \\ 2& -1& 0& 2 \\ 4& 1& -11& -1 \\ 1& 2& 3& 83 \end{bmatrix} . \]
A = {{1, -1, 2, 3}, {2, -1, 0, 2}, {4, 1, -11, -1}, {1, 2, 3, 83}};
We build an augmented matrix by appending the identity matrix \[ \left[ {\bf A} \mid {\bf I} \right] = \left[ \begin{array}{cccc|cccc} 1& -1& 2& 3&1&0&0&0 \\ 2& -1& 0& 2 & 0&1&0&0 \\ 4& 1& -11& -1 &0&0&1&0 \\ 1& 2& 3& 83 & 0&0&0&1 \end{array} \right] . \]
aug = Join[A, IdentityMatrix[4], 2]
To eliminate the element in position (2, 1), we subtract the first row multiplied by 2 from the second row (R2 ← R2 − 2·R1)
(A1 = {aug[[1]], aug[[2]] - 2*aug[[1]], aug[[3]], aug[[4]]}) // MatrixForm
\( \displaystyle \left[ \begin{array}{cccc|cccc} 1& -1& 2& 3&1&0&0&0 \\ 0& 1& -4& -4 & -2&1&0&0 \\ 4& 1& -11& -1 &0&0&1&0 \\ 1& 2& 3& 83 & 0&0&0&1 \end{array} \right] \)
Next row operation (R3 ← R3 − 4·R1) gives
(A13 = {A1[[1]], A1[[2]], A1[[3]] - 4*A1[[1]], A1[[4]]}) // MatrixForm
\( \displaystyle \left[ \begin{array}{cccc|cccc} 1& -1& 2& 3&1&0&0&0 \\ 0& 1& -4& -4 & -2&1&0&0 \\ 0& 5& -19& -13 &-4&0&1&0 \\ 1& 2& 3& 83 & 0&0&0&1 \end{array} \right] \)
Next row operation (R4 ← R4 − R1) gives
(A14 = {A13[[1]], A13[[2]], A13[[3]], A13[[4]] - A13[[1]]}) // MatrixForm
\( \displaystyle \left[ \begin{array}{cccc|cccc} 1& -1& 2& 3&1&0&0&0 \\ 0& 1& -4& -4 & -2&1&0&0 \\ 0& 5& -19& -13 &-4&0&1&0 \\ 0& 3& 1& 80 & -1&0&0&1 \end{array} \right] \)
Now we simplify the second column with row operation: R3 ← R3 − 5·R2
(A23 = {A14[[1]], A14[[2]], A14[[3]] - 5*A14[[2]], A14[[4]]}) // MatrixForm
\( \displaystyle \left[ \begin{array}{cccc|cccc} 1& -1& 2& 3&1&0&0&0 \\ 0& 1& -4& -4 & -2&1&0&0 \\ 0& 0& 1& 7 &6&-5&1&0 \\ 0& 3& 1& 80 & -1&0&0&1 \end{array} \right] \)
Next, we perform R4 ← R4 − 3·R2
(A24 = {A23[[1]], A23[[2]], A23[[3]], A23[[4]] - 3*A23[[2]]}) // MatrixForm
\( \displaystyle \left[ \begin{array}{cccc|cccc} 1& -1& 2& 3&1&0&0&0 \\ 0& 1& -4& -4 & -2&1&0&0 \\ 0& 0& 1& 7 &6&-5&1&0 \\ 0& 0& 13& 92 & 5&-3&0&1 \end{array} \right] \)
The third column is simplified with the following row operation: R4 ← R4 − 13·R3
(A3 = {A24[[1]], A24[[2]], A24[[3]], A24[[4]] - 13*A24[[3]]}) // MatrixForm
Hence, we get \[ \left[ {\bf A} \mid {\bf I} \right] \,\underset{R}{\sim} \, \left[ \begin{array}{cccc|cccc} 1& -1& 2& 3&1&0&0&0 \\ 0& 1& -4& -4 & -2&1&0&0 \\ 0& 0& 1& 7 & 6&-5&1&0 \\ 0& 0& 0& 1 & -73&62&-13&1 \end{array} \right] . \] Therefore, we obtain that matrix A is row equivalent to matrix B: \[ {\bf A} = \begin{bmatrix} 1& -1& 2& 3 \\ 2& -1& 0& 2 \\ 4& 1& -11& -1 \\ 1& 2& 3& 83 \end{bmatrix} \,\underset{R}{\sim} \, {\bf B} = \begin{bmatrix} 1& -1& 2& 3 \\ 0& 1& -4& -4 \\ 0& 0& 1& 7 \\ 0& 0& 0& 1 \end{bmatrix} . \] On the other hand, the identity matrix is row equivalent to \[ {\bf I} \,\underset{R}{\sim} \, {\bf P} = \begin{bmatrix} 1& 0& 0& 0 \\ -2& 1& 0& 0 \\ 6& -5& 1& 0 \\ -73& 62& -13& 1 \end{bmatrix} . \] With Mathematica, we extract matrix P from A3
P = A3[[1 ;; 4, 5 ;; 8]]
and then multiply A from left
B = A3[[1 ;; 4, 1 ;; 4]];
P.A == B
True
So Mathematica confirms that P A = B.
End of Example 10

Corollary 1: A matrix A is row equivalent to matrix B if and only if there exists a non-singular matrix P such that P A = B.

Example 11: We consider the system of three equations with complex coefficients: \[ \begin{split} \left( 1 + {\bf j} \right) x_1 + \left( 2 - {\bf j} \right)^2 x_2 + 7\,x_3 &= -7{\bf j} , \\ \left( 1 + {\bf j} \right)^2 x_1 - \left( 1 + 2{\bf j} \right) x_2 + \left( 1 -2{\bf j} \right) x_3 &= -3, \\ 2{\bf j}\, x_1 + \left( 3 - 2 {\bf j} \right) x_2 + \left( 6 + 3 {\bf j} \right) x_3 &= -5{\bf j} , \end{split} \tag{10.1} \] where j is the imaginary unit on the complex plane ℂ, so j² = −1. Mathematica uses a special symbol for the imaginary unit: \[ImaginaryJ]. First, we build the augmented matrix for this system of linear equations. \[ \left[ {\bf A} \mid {\bf b} \right] = \left[ \begin{array}{ccc|c} 1 + {\bf j} & \left( 2 - {\bf j} \right)^2 & 7 & -7 {\bf j} \\ \left( 1 + {\bf j} \right)^2 & - \left( 1 + 2{\bf j} \right) & 1 -2{\bf j} & -3 \\ 2{\bf j} & 3 - 2 {\bf j} & 6 + 3 {\bf j} & -5{\bf j} \end{array} \right] . \] Suppose we want to eliminate variable x₃ from the second equation. To achieve this, we multiply the second row by 7 and subtract the first row multiplied by 1 − 2j. We abbreviate it as R₂ ← 7·R₂ − (1 − 2j)·R₁.
M = {{(1 + \[ImaginaryJ]), (2 - \[ImaginaryJ])^2 , 7, -7*\[ImaginaryJ]}, {(1 + \[ImaginaryJ])^2 , -(1 + 2*\[ImaginaryJ]), 1 - 2*\[ImaginaryJ], -3}, {2*\[ImaginaryJ], 3 - 2*\[ImaginaryJ], 6 + 3*\[ImaginaryJ], -5*\[ImaginaryJ]}};
A1 = M;
temp = A1[[1]];
A1[[2]] = 7*A1[[2]] - A1[[1]]*(1 - 2*\[ImaginaryJ])
\( \displaystyle \begin{pmatrix} 1 + {\bf j} & \left( 2 - {\bf j} \right)^2 & 7 & -7 {\bf j} \\ -3 + 15{\bf j} & -2 - 4{\bf j} &0&-7+7{\bf j} \\ 2{\bf j} & 3 - 2 {\bf j} & 6 + 3 {\bf j} & -5{\bf j} \end{pmatrix} \)
Similarly, we eliminate x₃ from the third equation by subtracting the first row times (6 + 3j) from the third row times 7. This algorithm is written as R₃ ← 7·R₃ − (6 + 3j)·R₁.
A1[[3]] = A1[[3]]*7 - A1[[1]]*(6 + 3*\[ImaginaryJ])
{-3 + 5 I, -9 + I, 0, -21 + 7 I}
Hence, the augmented matrix [A | b] is equivalent to the following matrix \[ \left[ {\bf A} \mid {\bf b} \right] \,\underset{R}{\sim}\, \left[ \begin{array}{ccc|c} 1 + {\bf j} & \left( 2 - {\bf j} \right)^2 & 7 & -7 {\bf j} \\ -3 + 15{\bf j} & -2 - 4{\bf j} &0&-7+7{\bf j} \\ -3 + 5 {\bf j} & - 9 + {\bf j} & 0 & -21 + 7{\bf j} \end{array} \right] . \] Now we eliminate x₁ from the third equation: we multiply the third row by (−3 + 15j) and subtract the second row multiplied by (−3 + 5j). We abbreviate it as R₃ ← (−3 + 15j)·R₃ − (−3 + 5j)·R₂.
A2 = A1;
A2[[3]] = (-3 + 15*\[ImaginaryJ])*A2[[3]] - (-3 + 5*\[ImaginaryJ])* A2[[2]]
{0, -14 - 140 I, 0, -28 - 280 I}
\[ \left[ {\bf A} \mid {\bf b} \right] \,\underset{R}{\sim}\, \left[ \begin{array}{ccc|c} 1 + {\bf j} & \left( 2 - {\bf j} \right)^2 & 7 & -7 {\bf j} \\ -3 + 15{\bf j} & -2 - 4{\bf j} &0& -7 + 7 {\bf j} \\ 0 & - 14 - 140{\bf j} & 0 & -28 - 280{\bf j} \end{array} \right] . \] Reading the last row of the augmented matrix, we discover the equation \[ \left( -14 - 140{\bf j} \right) x_2 = -28 - 280{\bf j} \qquad \Longrightarrow \qquad x_2 = 2 . \] · · ·
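The eliminations above can be double-checked with Python's built-in complex numbers (Python writes the imaginary unit as `1j` and uses floating-point complex arithmetic, so the check is approximate rather than exact):

```python
# rows of the augmented matrix, with j written as Python's 1j
r1 = [(1 + 1j), (2 - 1j)**2, 7, -7j]
r2 = [(1 + 1j)**2, -(1 + 2j), (1 - 2j), -3]
r3 = [2j, (3 - 2j), (6 + 3j), -5j]

# R2 <- 7 R2 - (1-2j) R1   and   R3 <- 7 R3 - (6+3j) R1
r2 = [7 * a - (1 - 2j) * b for a, b in zip(r2, r1)]
r3 = [7 * a - (6 + 3j) * b for a, b in zip(r3, r1)]

# R3 <- (-3+15j) R3 - (-3+5j) R2
r3 = [(-3 + 15j) * a - (-3 + 5j) * b for a, b in zip(r3, r2)]

x2 = r3[3] / r3[1]  # the last row reads (-14-140j) x2 = -28-280j
print(x2)           # approximately (2+0j)
```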
End of Example 11
Example 12: We consider the 2 × 3 system of linear equations with coefficients depending on a parameter: \[ \begin{split} 2\,x + \left( 2 +t \right) y + \left( t-4 \right) z &= 3 , \\ t\,x + \left( t-5 \right) y + 2t\,z &= 2 . \end{split} \] The corresponding augmented matrix also depends on parameter t \[ {\bf M} = \left[ {\bf A} \mid {\bf b} \right] = \left[ \begin{array}{ccc|c} 2 & 2+t & t-4 & 3 \\ t&t-5 & 2t & 2 \end{array} \right] . \] Of course, Mathematica can easily solve this system
Solve[{2*x + (2 + t)*y + (t - 4)*z == 3, t*x + (t - 5)*y + 2*t*z == 2}, {x, y}]
{{x -> -((-19 + t - 20 z + 13 t z + t^2 z)/(10 + t^2)), y -> -((4 - 3 t - 8 t z + t^2 z)/(10 + t^2))}}
\[ x = \frac{19 - t + 20\,z - 13\,t\,z - t^2 z}{10 + t^2} , \qquad y = \frac{3\,t - 4 + 8\,t\,z - t^2 z}{10 + t^2} . \] However, our objective is to demonstrate the elimination procedure. First, we eliminate x from the second equation using the elementary operation R₂ ← t·R₁ − 2·R₂.
u = {2, 2 + t, t - 4, 3}; v = {t, t - 5, 2*t, 2};
t*u - v*2
{0, -2 (-5 + t) + t (2 + t), -4 t + (-4 + t) t, -4 + 3 t}
\[ {\bf M} \,\underset{R}{\sim}\, \left[ \begin{array}{ccc|c} 2 & 2+t & t-4 & 3 \\ 0&10 + t^2 & t\left( t - 8 \right) & 3t-4 \end{array} \right] . \] Then we multiply the first row by 10 + t² and subtract the second one multiplied by 2 + t, so R₁ ← (10 + t²)·R₁ − (2 + t)·R₂:
u = {2, 2 + t, t - 4, 3}; v = {0, t^2 + 10, t (t - 8), 3*t - 4};
(t^2 + 10)*u - v*(2 + t)
{2 (10 + t^2), 0, -((-8 + t) t (2 + t)) + (-4 + t) (10 + t^2), -((2 + t) (-4 + 3 t)) + 3 (10 + t^2)}
This leads to \[ {\bf M} \,\underset{R}{\sim}\, \left[ \begin{array}{ccc|c} 2\left( 10 + t^2 \right) & 0 & 2 \left( t^2 + 13 t -20 \right) & 38 -2t \\ 0&10 + t^2 & t\left( t - 8 \right) & 3t-4 \end{array} \right] . \] Upon division by 2, the first equation reads as \[ \left( 10 + t^2 \right) x = 19 - t - z \left( t^2 + 13 t -20 \right) . \] From the last row, we get \[ \left( 10 + t^2 \right) y = 3t-4 - t \left(t-8 \right) z . \]
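Because the coefficients depend on the parameter t, a quick sanity check is to evaluate the reduced equations at sample values; a Python sketch with exact fractions (the choices t = 3 and z = 7 are arbitrary, since z is a free variable):

```python
from fractions import Fraction as F

t, z = F(3), F(7)  # sample parameter value and free variable

# x and y from the reduced equations
x = (19 - t - z * (t**2 + 13 * t - 20)) / (10 + t**2)
y = (3 * t - 4 - t * (t - 8) * z) / (10 + t**2)

# the triple (x, y, z) must satisfy the original system
assert 2 * x + (2 + t) * y + (t - 4) * z == 3
assert t * x + (t - 5) * y + 2 * t * z == 2
```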
End of Example 12
Example 13: Suppose we need to determine whether the following three matrices are linearly independent: \[ {\bf A}_1 = \begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix} , \quad {\bf A}_2 = \begin{bmatrix} \phantom{-}1&\phantom{-}0 \\ -2&-1 \end{bmatrix} , \quad {\bf A}_3 = \begin{bmatrix} \phantom{-}0&3 \\ -2&1 \end{bmatrix} . \] These matrices are linearly dependent when the equation \[ x_1 {\bf A}_1 + x_2 {\bf A}_2 + x_3 {\bf A}_3 = {\bf 0}_{2\times 2} \tag{12.1} \] has at least one nontrivial solution.

In order for the numbers x₁, x₂, and x₃ to satisfy the equation (12.1), we must have \[ \begin{split} x_1 + x_2 +0\, x_3 &= 0 , \\ 2\,x_1 + 0\, x_2 + 3\, x_3 &= 0, \\ 3\, x_1 -2\, x_2 - 2\, x_3 &= 0, \\ 4\, x_1 - x_2 + x_3 &= 0 . \end{split} \tag{12.2} \] Although we can ask Mathematica for help

A = {{1, 1, 0}, {2, 0, 3}, {3, -2, -2}, {4, -1, 1}};
b = {0, 0, 0, 0};
LinearSolve[A, b]
{0, 0, 0}
we employ our technique with row operations. So we build the augmented matrix \[ {\bf M} = \left[ {\bf A} \mid {\bf b} \right] = \left[ \begin{array}{ccc|c} 1&1&0&0 \\ 2&0&3&0 \\ 3&-2&-2&0 \\ 4&-1&1&0 \end{array} \right] . \] Performing row operations R₂ ← (−2)·R₁ + R₂, R₃ ← (−3)·R₁ + R₃, and R₄ ← (−4)·R₁ + R₄, we obtain
M = {{1, 1, 0, 0}, {2, 0, 3, 0}, {3, -2, -2, 0}, {4, -1, 1, 0}};
m2 = M[[2]] - 2*M[[1]];
m3 = M[[3]] - 3*M[[1]];
m4 = M[[4]] - 4*M[[1]];
M1 = {M[[1]], m2, m3, m4}
\( \displaystyle \begin{pmatrix} 1&1&0&0 \\ 0&-2&3&0 \\ 0&-5&-2&0 \\ 0&-5&1&0 \end{pmatrix} \)
Next, we do similar row operations: R₃ ← (−5/2)·R₂ + R₃ and R₄ ← (−5/2)·R₂ + R₄. This yields
m32 = M1[[3]] - (5/2)*M1[[2]];
m43 = M1[[4]] - (5/2)*M1[[2]];
M2 = {M1[[1]], M1[[2]], m32, m43}
\( \displaystyle \begin{pmatrix} 1&1&0&0 \\ 0&-2&3&0 \\ 0&0&-\frac{19}{2}&0 \\ 0&0&-\frac{13}{2}&0 \end{pmatrix} \)
The last equation reads \[ -\frac{13}{2}\, x_3 = 0 \qquad \Longrightarrow \qquad x_3 = 0 . \] Hence, the other unknowns are also zero, and we conclude that our three matrices are linearly independent.
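This conclusion can be verified mechanically: the three matrices are linearly independent exactly when forward elimination on the coefficient matrix of system (12.2) produces a pivot in every column. A minimal Python sketch with exact fractions:

```python
from fractions import Fraction as F

# coefficient rows of the homogeneous system (12.2), unknowns x1, x2, x3
M = [[F(1), F(1), F(0)],
     [F(2), F(0), F(3)],
     [F(3), F(-2), F(-2)],
     [F(4), F(-1), F(1)]]

pivots = 0
for col in range(3):
    # find a row at or below the current pivot row with a nonzero entry
    for r in range(pivots, 4):
        if M[r][col] != 0:
            M[pivots], M[r] = M[r], M[pivots]
            # eliminate the entries below the pivot
            for k in range(pivots + 1, 4):
                factor = M[k][col] / M[pivots][col]
                M[k] = [a - factor * b for a, b in zip(M[k], M[pivots])]
            pivots += 1
            break

print(pivots)  # 3 pivots for 3 unknowns: only the trivial solution exists
```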
End of Example 13
These observations form the motivation behind a method for solving systems of linear equations, known as Gaussian elimination, named after the German mathematician Carl Friedrich Gauss (1777--1855).