
The Gaussian elimination method is one of the most important and ubiquitous algorithms that can help deduce important information about the given system of linear equations based on the augmented matrix:

  1. If there is a solution, the method will find it.
  2. If there is more than one solution, the method will find a general formula for all of them.
  3. If there are no solutions, the method will let us know.
As such, it is one of the most useful numerical algorithms and plays a fundamental role in scientific computation. As a practical matter, we rarely need to solve a linear system by hand: someone else has implemented the method, and every computer solver provides the solution. However, we will be able to adapt the method to solve related Linear Algebra problems, and this requires understanding the fundamental properties of linear systems in depth. Also, these solvers fail for certain common classes of very large systems of equations, and you need to know enough about linear algebra to diagnose such situations and to propose other methods that do work in such instances.

This method has been in use since antiquity: ancient Babylonian mathematicians solved linear systems, and a systematic elimination procedure appears in the ancient Chinese treatise The Nine Chapters on the Mathematical Art, in the chapter on rectangular arrays, where the procedure is known as fang cheng shu. See the historical notes.

Many people have contributed to Gaussian elimination, including Isaac Newton. However, it was named by the American mathematician George Forsythe (1917--1972) in honor of the German mathematician and physicist Carl Friedrich Gauss (1777--1855). Gauss’s genius was in systematizing and extending the approach for modern science. A full history of the Gaussian algorithm can be found in the article by Joseph F. Grcar. The Gaussian elimination section is split into a few subsections:

Preliminaries

Motivation

REF

RREF

Counting Operations


Gaussian elimination

Carl Friedrich Gauss (1777--1855) developed his method for solving systems of linear equations around the year 1801, during his groundbreaking work in astronomy. That year, the Sicilian astronomer Giuseppe Piazzi discovered the dwarf planet Ceres, but lost track of it after about 40 days of observation, beginning January 1. Gauss, then just 24 years old, stunned the scientific world by accurately predicting Ceres’s orbit using only the limited data available. To do this, he had to solve a system of 17 linear equations. His prediction was so accurate that Franz Xaver von Zach and Heinrich Olbers were able to recover Ceres in December 1801, nearly a year after its initial discovery.

Gauss's achievement also reminds us that many brilliant minds operate outside traditional academic or professional structures. His example shows that what matters most is curiosity, rigor, and the courage to pursue deep questions, even when recognition or resources are scarce. At the time, he was living in Brunswick (the German city of Braunschweig), supported by a stipend from the Duke of Brunswick, who recognized Gauss’s genius and funded his studies and research. Only in 1807 was Gauss appointed director of the Göttingen Observatory, a prestigious position he held for the rest of his life.

Gauss didn’t immediately publish his method. He revealed it eight years later, in 1809, in his book Theoria Motus Corporum Coelestium (“Theory of the Motion of Heavenly Bodies”). Now it is time to explain why his method is called "elimination". To clarify the situation, we consider the following example of a system of two equations:

\[ \begin{cases} \phantom{-}3\,x - 7\,y &= 8 , \\ -3\,x + 2\,y &= 7. \end{cases} \]
It’s actually not easy to solve this system in your head. However, adding the first equation to the second one yields
\[ \begin{cases} 3\,x - 7\,y &= 8 , \\ \phantom{3\,x } -5\,y &= 15, \end{cases} \qquad \iff \qquad \begin{cases} 3\,x - 7\,y &= 8 , \\ 0\,x -5\,y &= 15 . \end{cases} \]
Suddenly, this new second equation is much easier to solve. The reason is almost obvious: the second equation contains a zero multiple of the unknown x. Its solution follows immediately: y = −3, and then you can plug that into the first equation to get 3x = 8 + 7y = 8 − 21, so x = −13/3. The solution is now easier to calculate because the second equation has fewer active variables; y is decoupled from x. This is the principle of row reduction and Gaussian elimination: convert the given system into an equivalent system in which some variables appear with zero coefficients.
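This single elimination step can be checked mechanically. Here is a minimal sketch in Python (used purely for illustration; the variable names and the use of exact rational arithmetic are mine, not part of the text):

```python
from fractions import Fraction

# System:   3x - 7y = 8
#          -3x + 2y = 7
# One elimination step: add equation 1 to equation 2, which removes x.
a = [[Fraction(3), Fraction(-7), Fraction(8)],
     [Fraction(-3), Fraction(2), Fraction(7)]]

a[1] = [u + v for u, v in zip(a[1], a[0])]   # R2 <- R2 + R1  ->  0x - 5y = 15

y = a[1][2] / a[1][1]                        # -5y = 15  =>  y = -3
x = (a[0][2] - a[0][1] * y) / a[0][0]        # 3x = 8 + 7y  =>  x = -13/3

print(x, y)   # prints: -13/3 -3
```

Exact fractions avoid the rounding noise of floating point, which is convenient when checking hand computations.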

Now we apply the Gauss method to a linear system

\begin{equation} \label{EqGauss.1} \begin{cases} a_{1,1} x_1 + a_{1,2} x_2 + \cdots + a_{1,n} x_n &= b_1 , \\ a_{2,1} x_1 + a_{2,2} x_2 + \cdots + a_{2,n} x_n &= b_2 , \\ \cdots \qquad \cdots \\ a_{m,1} x_1 + a_{m,2} x_2 + \cdots + a_{m,n} x_n &= b_m , \end{cases} \end{equation}
where the bi's and 𝑎i,j's are known scalars, (𝑎i,1, 𝑎i,2, … , 𝑎i,n) is a nonzero vector in 𝔽n for each i ∈ [1..m] = {1, 2, … , m}, and x₁, x₂, … , xn are unknowns from a field 𝔽 that you want to solve for.

Treating the general system of equations is based on transformations of these equations such that in the resulting equations some unknown variables appear with zero coefficients. Such variables can be eliminated from those equations because they do not affect the solution. In real life, a bank account with zero funds is considered useless, and it is usually closed and eliminated from the data.

The Gauss method uses only those operations that preserve the solution set of the system, known as elementary row operations:

  1. Addition of a multiple of one equation to another.
    Symbolically: (equation j) \( \mapsto \) (equation j) + k (equation i).
  2. Multiplication of an equation by a nonzero constant k.
    Symbolically: (equation j) \( \mapsto \) k (equation j).
  3. Interchange of two equations.
    Symbolically: (equation j) \( \Longleftrightarrow \) (equation i).
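The three elementary row operations translate directly into code. A minimal Python sketch (the function names are mine, chosen for illustration):

```python
# The three elementary row operations, acting on a matrix stored as a
# list of rows.  Each modifies the matrix in place.

def add_multiple(mat, j, i, k):
    """(row j) <- (row j) + k * (row i)"""
    mat[j] = [rj + k * ri for rj, ri in zip(mat[j], mat[i])]

def scale(mat, j, k):
    """(row j) <- k * (row j), with k nonzero"""
    assert k != 0
    mat[j] = [k * rj for rj in mat[j]]

def swap(mat, j, i):
    """(row j) <-> (row i)"""
    mat[j], mat[i] = mat[i], mat[j]

# The two-equation example from the text: rows [3, -7, 8] and [-3, 2, 7].
m = [[3, -7, 8], [-3, 2, 7]]
add_multiple(m, 1, 0, 1)   # add row 0 to row 1, eliminating x
print(m[1])                # prints: [0, -5, 15]
```

Each operation is reversible (swap by another swap, scaling by 1/k, replacement by subtracting the same multiple), which is why none of them changes the solution set.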
It is essential to associate with system \eqref{EqGauss.1} two matrices: the coefficient matrix and the column vector formed from the m-tuple of constant terms:
\[ {\bf A} = \begin{bmatrix} a_{1,1}& a_{1,2} & a_{1,3} & \cdots & a_{1,n} \\ a_{2,1}& a_{2,2} & a_{2,3} & \cdots & a_{2,n} \\ \vdots& \vdots& \vdots& \ddots & \vdots \\ a_{m,1}& a_{m,2} & a_{m,3} & \cdots & a_{m,n} \end{bmatrix} \qquad \mbox{and} \qquad {\bf b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix} . \]

Instead of the long-written system of equations \eqref{EqGauss.1}, we will use the succinct notation A x = b, which is a shortcut for saying that the matrix A acts on the column vector x with output b. This is not just about saving space and ink: converting a system of many equations into one matrix equation leads to new and efficient ways of solving those equations. Later, in Chapter 2, you will see why this notation makes sense.

It is convenient to unite these two matrices, A ∈ 𝔽m×n and b ∈ 𝔽m×1, into a single matrix, known as the augmented matrix. An augmented matrix combines two matrices with the same number of rows into a single matrix, typically by appending the constant terms b = (b₁, b₂, … , bm) ∈ 𝔽m, written as a column vector, to the coefficient matrix A. Then the augmented matrix becomes

\[ \left[ {\bf A} \vert {\bf b} \right] = \left[ \begin{array}{ccccc|c} a_{11}& a_{12} & a_{13} & \cdots & a_{1n} & b_1 \\ a_{21}& a_{22} & a_{23} & \cdots & a_{2n} & b_2 \\ \vdots& \vdots& \vdots& \ddots & \vdots & \vdots \\ a_{m1}& a_{m2} & a_{m3} & \cdots & a_{mn} & b_m \end{array} \right] . \]
Sometimes the vertical line used to separate the coefficient entries from the constants is dropped, and then the augmented matrix is written as [A b].
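Forming the augmented matrix is just appending bi to row i of A; a one-line Python sketch (plain lists, names mine):

```python
# Forming the augmented matrix [A | b] by appending b_i to row i of A.
A = [[3, -7],
     [-3, 2]]
b = [8, 7]

M = [row + [bi] for row, bi in zip(A, b)]
print(M)   # prints: [[3, -7, 8], [-3, 2, 7]]
```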

Two phases of Gauss's method

The Gaussian elimination method is basically a series of operations carried out on a given matrix in order to simplify it to its echelon form, which is an almost upper triangular matrix.

Application of Gaussian elimination to the linear system A x = b consists of two steps: forward elimination (also frequently called the Gaussian elimination procedure, or REF) to reduce the augmented matrix to upper triangular form, and back substitution. Solving a linear system of algebraic equations by this elimination procedure is therefore also called forward elimination and back substitution, abbreviated FEBS in computational linear algebra.

[Figure: staircase (echelon) pattern of a row echelon form with pivots p1, p2, …, pr; the number of pivots equals the rank r.]

In this section, we will look at a general, systematic procedure for solving a system of linear equations. This procedure starts with the construction of the corresponding augmented matrix and then uses elementary row operations (which we studied in the previous section) to transform it into a special form that allows us to retrieve the solution. The method is direct in the sense that it leads directly to the solution (if one exists) in a finite number of steps.

This method is known as the Gaussian elimination algorithm. It is fast and safe: Gauss’s method never loses solutions nor picks up extraneous ones, so a tuple is a solution to the system before we apply the method if and only if it is a solution after.

In each row of a system, the first variable with a nonzero coefficient is the row’s leading variable. A system is in echelon form if each leading variable is to the right of the leading variable in the row above it, except for the leading variable in the first row, and any rows with all-zero coefficients are at the bottom.
   
Example 1: The following linear system requires the swap operation to get it into echelon form \[ \begin{split} x - y \phantom{4+z + 2w} &= 0 , \\ 3x - 3y + z + 2w &= 6 , \\ \phantom{x-} y \phantom{4+y} +w &= 0 , \\ \phantom{x-y} 3z + w &= 7 , \end{split} \qquad \mbox{or} \qquad \left[ \begin{array}{cccc|c} \fbox{1}&-1&0&0&0 \\ 3&-3&1&2&6 \\ 0&1&0&1&0 \\ 0&0&3&1&7\end{array} \right] . \] Indeed, the first two rows violate the requirement of row echelon form. So, upon adding the first row scaled by (−3) to the second row, we get \[ \begin{split} x - y \phantom{4+z + 2w} &= 0 , \\ \phantom{3x - 3y +} z + 2w &= 6 , \\ \phantom{x-} y \phantom{4+y} +w &= 0 , \\ \phantom{x-y} 3z + w &= 7 , \end{split} \qquad \left[ \begin{array}{cccc|c} \fbox{1}&-1&0&0&0 \\ 0&0&1&2&6 \\ 0&1&0&1&0 \\ 0&0&3&1&7\end{array} \right] . \] This is again not an echelon form---the second equation has no leading y. We exchange it for a lower-down row that has a leading y. \[ \begin{split} x - y \phantom{4+z + 2w} &= 0 , \\ \phantom{x-} y \phantom{4+y} +w &= 0 , \\ \phantom{3x - 3y +} z + 2w &= 6 , \\ \phantom{x-y} 3z + w &= 7 , \end{split} \qquad \left[ \begin{array}{cccc|c} \fbox{1}&-1&0&0&0 \\ 0&\fbox{1}&0&1&0 \\ 0&0&\fbox{1}&2&6 \\ 0&0&3&1&7\end{array} \right] . \] Finally, we add the third row scaled by (−3) to the last one and obtain \[ \begin{split} x - y \phantom{4+z + 2w} &= 0 , \\ \phantom{x-} y \phantom{4+y} +w &= 0 , \\ \phantom{3x - 3y +} z + 2w &= 6 , \\ \phantom{x-y+3x} -5 w &= -11 , \end{split} \qquad \left[ \begin{array}{cccc|c} \fbox{1}&-1&0&0&0 \\ 0&\fbox{1}&0&1&0 \\ 0&0&\fbox{1}&2&6 \\ 0&0&0&\fbox{-5}&-11\end{array} \right] . \] This system is in echelon form. Strictly speaking, to solve linear systems we don’t need the row rescaling operation. 
However, we perform such scaling and obtain \[ \begin{split} x - y \phantom{4+z + 2w} &= 0 , \\ \phantom{x-} y \phantom{4+y} +w &= 0 , \\ \phantom{3x - 3y +} z + 2w &= 6 , \\ \phantom{x-y+3x} w &= 11/5 , \end{split} \qquad \left[ \begin{array}{cccc|c} \fbox{1}&-1&0&0&0 \\ 0&\fbox{1}&0&1&0 \\ 0&0&\fbox{1}&2&6 \\ 0&0&0&\fbox{1}&11/5\end{array} \right] . \] By scaling, we make all pivots ones (we put them all into boxes) because it is convenient and because we will use it later in the following subsection as part of a variation of Gauss’s method, the Gauss-Jordan method (or RREF).    ■
End of Example 1
   
A rectangular matrix is said to be in echelon form or row echelon form if it has the following three properties:
  1. All nonzero rows are above any rows of all zeroes.
  2. Each leading entry (first nonzero number from the left), called the pivot, of a row is in a column to the right of the leading entry of the row above it.
  3. All entries in a column below a leading entry are zeroes.
An echelon matrix is one that is in echelon form.
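The three properties above can be checked mechanically. A minimal Python sketch (the function names are mine; this is an illustration, not part of the text):

```python
def leading_index(row):
    """Column index of the first nonzero entry, or None for a zero row."""
    for j, v in enumerate(row):
        if v != 0:
            return j
    return None

def is_row_echelon(mat):
    """Check the defining properties of row echelon form."""
    last = -1              # leading index of the previous nonzero row
    seen_zero_row = False
    for row in mat:
        lead = leading_index(row)
        if lead is None:
            seen_zero_row = True       # zero rows must stay at the bottom
        else:
            # fail if a nonzero row sits below a zero row, or if this
            # pivot is not strictly to the right of the one above it
            if seen_zero_row or lead <= last:
                return False
            last = lead
    return True

# The echelon matrix reached at the end of Example 1:
print(is_row_echelon([[1, -1, 0, 0], [0, 1, 0, 1],
                      [0, 0, 1, 2], [0, 0, 0, -5]]))   # prints: True
# The starting matrix of Example 1 is not in echelon form:
print(is_row_echelon([[1, -1, 0, 0], [3, -3, 1, 2],
                      [0, 1, 0, 1], [0, 0, 3, 1]]))    # prints: False
```

Note that property 3 (zeroes below every leading entry) follows automatically once each leading entry lies strictly to the right of the one above it, so the checker only tests properties 1 and 2 explicitly.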
Row operations can be applied to any matrix, not merely to one that arises as the augmented matrix of a linear system. Two matrices are called row equivalent if there is a sequence of elementary row operations that transforms one matrix into the other. It is important to note that row operations are reversible. If two rows are interchanged, they can be returned to their original positions by another interchange. If a row is scaled by a nonzero constant k, then multiplying the new row by 1/k produces the original row. Finally, consider a replacement operation involving two rows, say 1 and 3, in which k times row 1 is added to row 3 to obtain a new row 3. To undo it, just add −k times row 1 to the new row 3 and you will recover the original row 3.
Example 2: The following matrices are in the echelon form:
\[ \begin{bmatrix} \color{red}{♦} & \ast & \ast & \ast \\ 0& \color{red}{♦} & \ast & \ast \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix} , \quad \begin{bmatrix}\color{red}{ ♦} & \ast & \ast & \ast \\ 0&0 & \color{red}{♦} & \ast \\ 0&0&0& \color{red}{⊚} \\ 0&0&0&0 \end{bmatrix} , \quad \begin{bmatrix} \color{red}{♦} & \ast & \ast & \ast & \ast & \ast \\ 0&0&0 & \color{red}{♦} & \ast & \ast \\ 0&0&0&0& \color{red}{♦} & \ast \\ 0&0&0&0&0& \color{red}{⊚} \end{bmatrix} . \]
where ♦ denotes a pivot position (the entry cannot be zero), ∗ denotes an arbitrary element that may or may not be zero, and ⊚ denotes a lone nonzero entry that looks like a pivot but indicates that the corresponding system has no solution. From a theoretical point of view, a pivot could be in the last column, but when dealing with augmented matrices corresponding to linear systems of equations, we avoid this. For consistent systems, a pivot cannot sit alone at the end of a row; otherwise, the system has no solution.    ■
End of Example 2
A pivot position in a matrix A is a location in A that corresponds to a leading term in the row echelon form of A. A pivot column is a column of A that contains a pivot position.

Any nonzero matrix may be row reduced (that is, transformed by elementary row operations) into more than one matrix in echelon form, using different sequences of row operations. However, the leading entries are always in the same positions in any echelon form obtained from a given matrix.

Theorem 1: A linear system has a solution if and only if the rightmost column of the associated augmented matrix is not a pivot column---that is, if and only if its echelon form does not contain a row of the form
\[ \left[ \begin{array}{cccc|c} 0 & 0 & \cdots & 0 & b \end{array} \right] \qquad \mbox{with } b \neq 0 . \]
If a linear system is consistent, then the solution set contains either a unique solution (with no free variables) or infinitely many solutions, when there is at least one free variable.

The term row echelon form indicates that Gaussian elimination has been applied specifically to the rows of the matrix.

The number of pivots in a matrix is called its rank. The number of non-pivot columns (free variables) is called nullity.
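Rank and nullity can be computed by counting the pivots produced by row reduction. A Python sketch (illustrative only; the function name and the use of exact rational arithmetic are mine):

```python
from fractions import Fraction

def rank_and_nullity(A):
    """Row reduce A, count the pivots (= rank); nullity = columns - rank."""
    mat = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(mat), len(mat[0])
    rank, pivot_row = 0, 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if mat[r][col] != 0), None)
        if pivot is None:
            continue                     # no pivot in this column
        mat[pivot_row], mat[pivot] = mat[pivot], mat[pivot_row]
        for r in range(pivot_row + 1, rows):
            f = mat[r][col] / mat[pivot_row][col]
            mat[r] = [a - f * b for a, b in zip(mat[r], mat[pivot_row])]
        pivot_row += 1
        rank += 1
    return rank, cols - rank

# The matrix A of Example 3 below: rank 3, nullity 1.
A = [[3, -1, 5, 4], [17, 8, -5, 8], [2, 4, -3, 1], [-5, 2, 6, 3]]
print(rank_and_nullity(A))   # prints: (3, 1)
```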
   
Example 3: We consider the system of equations \[ \begin{split} 3x -y + 5z + 4w &= 20, \\ 17x + 8y -5z + 8w &= 37 , \\ 2x + 4y -3z + w &= -1, \\ -5x + 2y + 6z + 3w &= 0 , \end{split} \] having the coefficient matrix and constant term (written as a column vector), \[ \mathbf{A} = \begin{bmatrix} 3& -1& 5& 4 \\ 17& 8& -5& 8 \\ 2& 4& -3& 1 \\ -5& 2& 6& 3 \end{bmatrix} , \qquad \mathbf{b} = \begin{pmatrix} 20\\ 37\\ -1\\ 0 \end{pmatrix} . \] Using Mathematica, we find the rank and nullity of matrix A to be 3 and 1, respectively.
A = { {3, -1, 5, 4}, {17, 8, -5, 8}, {2, 4, -3, 1}, {-5, 2, 6, 3}}; b = {20, 37, -1, 0}; MatrixRank[A]
3
A = { {3, -1, 5, 4}, {17, 8, -5, 8}, {2, 4, -3, 1}, {-5, 2, 6, 3}}; b = {20, 37, -1, 0}; Dimensions[NullSpace[A]]
{1, 4}
We can also use standard Mathematica command to find RREF of matrix A:
A = { {3, -1, 5, 4}, {17, 8, -5, 8}, {2, 4, -3, 1}, {-5, 2, 6, 3}}; RowReduce[A]
{{1, 0, 0, 85/207}, {0, 1, 0, 112/207}, {0, 0, 1, 137/207}, {0, 0, 0, 0}}
\[ \mbox{RREF}(\mathbf{A}) = \begin{bmatrix} \fbox{1}& 0& 0& 85/207 \\ 0& \fbox{1}& 0& 112/207 \\ 0& 0& \fbox{1}& 137/207 \\ 0& 0& 0& 0 \end{bmatrix} . \]

Let us consider another matrix \[ \mathbf{B} = \begin{bmatrix} 3& -1& 5& 4 \\ 17& 8& -5& 8 \\ 2& 4& -3& 1 \\ -5& 2& 6& 3 \\ 98& 95& -83& -46 \end{bmatrix} . \]

B = { {3, -1, 5, 4}, {17, 8, -5, 8}, {2, 4, -3, 1}, {-5, 2, 6, 3}, {98, 95, -83, -46}}; MatrixRank[B]
4
B = { {3, -1, 5, 4}, {17, 8, -5, 8}, {2, 4, -3, 1}, {-5, 2, 6, 3}, {98, 95, -83, -46}}; Dimensions[NullSpace[B]]
{0}
To check the answer, we use Mathematica again. First we calculate its row echelon form using a subroutine:
RowEchelonForm[matrix_] := Module[
  {mat = matrix, rows, cols, pivotRow = 1, pivotCol = 1, factor},
  {rows, cols} = Dimensions[mat];
  While[pivotRow <= rows && pivotCol <= cols,
    (* If the pivot entry is zero, try to swap with a lower row *)
    If[mat[[pivotRow, pivotCol]] == 0,
      Do[
        If[mat[[r, pivotCol]] != 0,
          mat[[{pivotRow, r}]] = mat[[{r, pivotRow}]];
          Break[]
        ],
        {r, pivotRow + 1, rows}
      ]
    ];
    (* If the pivot is still zero, move to the next column *)
    If[mat[[pivotRow, pivotCol]] == 0, pivotCol++; Continue[]];
    (* Eliminate the entries below the pivot *)
    Do[
      factor = mat[[r, pivotCol]]/mat[[pivotRow, pivotCol]];
      mat[[r]] = mat[[r]] - factor*mat[[pivotRow]],
      {r, pivotRow + 1, rows}
    ];
    pivotRow++; pivotCol++
  ];
  mat
]
Then we apply this subroutine to our matrix B:
RowEchelonForm[B]
{{3, -1, 5, 4}, {0, 41/3, -(100/3), -(44/3)}, {0, 0, 207/41, 137/41}, {0, 0, 0, -(5707/69)}, {0, 0, 0, 0}}
\[ \mbox{REF}(\mathbf{B}) = \begin{bmatrix} \fbox{3}& -1& 5& 4 \\ 0& \fbox{41/3}& -(100/3)& -(44/3) \\ 0& 0& \fbox{207/41}& 137/41 \\ 0& 0& 0& \fbox{-(5707/69)} \\ 0& 0& 0& 0 \end{bmatrix} . \] However, let us consider another matrix \[ \mathbf{C} = \begin{bmatrix} 3& -1& 5& 4 \\ 17& 8& -5& 8 \\ 2& 4& -3& 1 \\ -5& 2& 6& 3 \\ 5& 12& -25& -8 \end{bmatrix} . \]
cc = { {3, -1, 5, 4}, {17, 8, -5, 8}, {2, 4, -3, 1}, {-5, 2, 6, 3}, {5, 12, -25, -8}}; RowEchelonForm[cc]
{{3, -1, 5, 4}, {0, 41/3, -(100/3), -(44/3)}, {0, 0, 207/41, 137/41}, {0, 0, 0, 0}, {0, 0, 0, 0}}
Its row echelon form (REF) is \[ \mbox{REF}(\mathbf{C}) = \begin{bmatrix} \fbox{3}& -1& 5& 4 \\ 0& \fbox{41/3}& -(100/3)& -(44/3) \\ 0& 0& \fbox{207/41}& 137/41 \\ 0& 0& 0& 0 \\ 0& 0& 0& 0 \end{bmatrix} . \] Now we find its rank and nullity.
cc = { {3, -1, 5, 4}, {17, 8, -5, 8}, {2, 4, -3, 1}, {-5, 2, 6, 3}, {5, 12, -25, -8}}; MatrixRank[cc]
3
cc = { {3, -1, 5, 4}, {17, 8, -5, 8}, {2, 4, -3, 1}, {-5, 2, 6, 3}, {5, 12, -25, -8}}; Dimensions[NullSpace[cc]]
{1, 4}
So the rank of this matrix is 3 and the number of free variables (nullity) is 1.    ■
End of Example 3

Row echelon and reduced row echelon forms are the matrices resulting from the forward elimination and backward substitution stages of the Gaussian method, respectively. By applying Gaussian elimination to any matrix, one can deduce the following information about it:

  • the rank of the matrix;
  • the determinant of the matrix;
  • the inverse (invertible square matrices only);
  • the kernel vector (also known as ‘null vector’).
The main purpose of writing a matrix in its row echelon form and/or reduced row echelon, is to make it easier to examine the matrix and to carry out further calculations, especially when solving a system of algebraic equations.

When the forward elimination procedure is applied to a system of algebraic equations A x = b, the first step is to create an augmented matrix, obtained by appending the column vector b on the right to the coefficient matrix A of the system, M = [A|b]. The next step is to use elementary row operations to reduce the augmented matrix M to an equivalent matrix C = [U|c], where U is an upper triangular matrix in row echelon form (REF). This means that the new system U x = c is relatively easy to solve.
Variables in a linear system of equations that corresponds to pivot positions in the augmented matrix for the given system are called the leading variables; the remaining variables are called free variables.
   
Example 10: We consider the following augmented matrix:
\[ {\bf A} = \begin{bmatrix} \color{red}{-1}&3&-2&5&\phantom{-}4&0&7 \\ 0&0& \phantom{-}\color{red}{3}&6&-1&0&2 \\ 0&0&\phantom{-}0&0&\phantom{-}0&\color{red}{7}&49 \\ 0&0&\phantom{-}0&0&\phantom{-}0&0&0 \end{bmatrix} . \]
This matrix corresponds to four equations in six unknown variables. Since pivots are located in columns 1, 3, and 6, the leading variables for this system of equations are x1, x3, and x6. The other three variables x2, x4, and x5 are free variables.    ■
End of Example 10

Theorem 2: If a homogeneous linear system A x = 0 has n unknowns, and if the row echelon form of its augmented matrix has r nonzero rows, then the system has n-r free variables.

We recall the following statement from the previous section.

Theorem 3: Let M = [A|b] be the augmented matrix corresponding to the linear equation A x = b, and suppose M is row equivalent (using a finite sequence of elementary row operations) to the equivalent matrix C = [U|c], which corresponds to the linear system U x = c. Then the two linear systems have precisely the same solution set.


Naive Gaussian elimination algorithm for Ax=b with a square matrix A (pseudocode)
 
for i=1 to n-1 
	for j=i+1 to n
		m=a(j,i)/a(i,i)
		for k=i+1 to n 
			a(j,k) = a(j,k) -m*a(i,k)
		endfor
		b(j) = b(j) - m*b(i)
	endfor
endfor
The outermost loop (the i loop) ranges over the columns of the matrix; the last column is skipped because there are no elements below the diagonal there, so no elimination needs to be performed. (If we were doing elimination on a nonsquare matrix with more rows than columns, we would have to include the last column in this loop.)

The middle loop (the j loop) ranges down the i-th column, below the diagonal (hence j ranges only from i+1 to n, the dimension of the matrix A). We first compute the multiplier m for each row; this is the constant by which we multiply the i-th row in order to eliminate the a(j,i) element. Note that we overwrite the previous values with the new ones, and we do not actually carry out the computation that makes a(j,i) zero. This loop is also where the right-side vector is modified to reflect the elimination step.

The innermost loop (the k loop) ranges across the j-th row, starting after the i-th column, modifying each element appropriately to reflect the elimination of aji.

Finally, we must be aware that the algorithm does not actually create the zeroes in the lower triangular half of A; this would be wasteful of computer time, since we don't need these zeroes in place for the algorithm to work. The algorithm works because we use only the upper triangular part of the matrix from this point forward, so the lower triangular elements need never be referenced.

This algorithm requires \( \frac{n}{3} \left( n^2 -1 \right) \) floating point multiplications and divisions for operations on the coefficient matrix and \( \frac{n}{2} \left( n -1 \right) \) multiplications for operations on the right-hand side, after which the triangular system has to be solved with \( \frac{n}{2} \left( n +1 \right) \) operations.
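The pseudocode above translates directly into Python. The sketch below (no pivoting, so it assumes the diagonal entries it encounters are nonzero, just as the pseudocode does) also carries a counter that confirms the n(n² − 1)/3 operation count for the coefficient matrix:

```python
def forward_eliminate(a, b):
    """Naive forward elimination (no pivoting): reduces a to upper
    triangular form in place and applies the same operations to b.
    Returns the number of multiplications/divisions spent on `a`."""
    n = len(a)
    ops = 0
    for i in range(n - 1):              # loop over columns
        for j in range(i + 1, n):       # loop down column i, below the diagonal
            m = a[j][i] / a[i][i]       # multiplier for row j
            ops += 1
            for k in range(i + 1, n):   # update row j to the right of column i
                a[j][k] -= m * a[i][k]
                ops += 1
            b[j] -= m * b[i]            # same operation on the right-hand side
    return ops

# A 3x3 example (the coefficient matrix of Example 11 below):
a = [[3.0, -3.0, -6.0], [2.0, 4.0, 1.0], [-1.0, 1.0, 2.0]]
b = [3.0, 1.0, -1.0]
n = len(a)
ops = forward_eliminate(a, b)
print(ops == n * (n * n - 1) // 3)   # prints: True
```

As in the pseudocode, the entries below the diagonal are never actually zeroed out; they are simply not referenced afterward.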

Back substitution algorithm for an upper triangular system Ax=b (pseudocode)

 
x(n) = b(n)/a(n,n)
for i=n-1 downto 1 
	sum = 0
	for j=i+1 to n
		sum = sum + a(i,j)*x(j)
	endfor
	x(i) = (b(i) - sum)/a(i,i) 
endfor
This algorithm simply marches backward up the diagonal, computing each xi in turn. In effect, we are computing
\[ x_i = \frac{1}{a_{ii}} \left( b_i - \sum_{j=i+1}^n a_{ij} x_j \right) , \]
which is exactly what is needed to solve a triangular system. The j loop simply accumulates the summation term in this formula. The algorithm stops if one of the diagonal terms is zero, because we cannot divide by it; this case requires special attention and is handled by a row interchange.
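The back substitution formula above can be sketched in Python as follows (illustrative; the function name is mine, and nonzero diagonal entries are assumed):

```python
def back_substitute(a, b):
    """Solve an upper triangular system a x = b by marching backward
    up the diagonal, using x_i = (b_i - sum_{j>i} a_ij x_j) / a_ii."""
    n = len(a)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

# A small upper triangular system:  x + 2y + 3z = 14,  5y + 6z = 28,  7z = 21
U = [[1.0, 2.0, 3.0], [0.0, 5.0, 6.0], [0.0, 0.0, 7.0]]
c = [14.0, 28.0, 21.0]
print(back_substitute(U, c))   # prints: [1.0, 2.0, 3.0]
```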
Example 11: Consider the system of algebraic equations
\[ \begin{split} 3\,x-3\,y-6\,z &=3, \\ 2\,x + 4\,y + z &= 1 ,\\ -x + y + 2\,z &= -1. \end{split} \]
The idea of Gaussian elimination is to replace the above system by another system with the same solutions but which is easier to solve. First, we build the augmented matrix corresponding to the given system of equations:
\[ \left[ {\bf A}\,{\bf b}\right] = \begin{bmatrix} \phantom{-}3&-3&-6&\phantom{-}3 \\ \phantom{-}2&\phantom{-}4& \phantom{-}1 & \phantom{-}1 \\ -1 &\phantom{-}1&\phantom{-}2&-1 \end{bmatrix} . \]
Multiplying the first row by -2/3 and adding to the second one, we obtain an equivalent augmented matrix:
\[ \left[ {\bf A}\,{\bf b}\right] \sim \begin{bmatrix} \phantom{-}3&-3&-6&\phantom{-}3 \\ \phantom{-}0&\phantom{-}6& \phantom{-}5&-1 \\ -1 &\phantom{-}1&\phantom{-}2&-1 \end{bmatrix} . \]
Next, we multiply the first row by 1/3 and add to the third row:
\[ \left[ {\bf A}\,{\bf b}\right] \sim \begin{bmatrix} 3&-3&-6&\phantom{-}3 \\ 0&\phantom{-}6&\phantom{-}5&-1 \\ 0 &\phantom{-}0&\phantom{-}0&\phantom{-}0 \end{bmatrix} . \]
Therefore, we obtain an equivalent augmented matrix in row echelon form. Since it contains one row of all zeroes, the given system has infinitely many solutions, which we obtain by solving the second equation:
\[ 6\,y + 5\,z = -1 \qquad \Longrightarrow \qquad z = - \frac{6}{5}\, y - \frac{1}{5} . \]
Using this expression, we get from the first equation
\[ 3\,x-3\,y-6\,z = 3\,x + \frac{21}{5}\,y + \frac{6}{5} = 3 \qquad \Longrightarrow \qquad x = -\frac{7}{5}\,y + \frac{3}{5} . \]
So y is a free variable, and we obtain the general solution:
\[ x = -\frac{7}{5}\,y + \frac{3}{5} , \quad z = - \frac{6}{5}\, y - \frac{1}{5} . \]    ■
End of Example 11

Example 12: Consider the system of algebraic equations
\[ \begin{split} 3\,x-3\,y-6\,z &=3, \\ 2\,x + 4\,y + z &= 1 ,\\ -x + y + 2\,z &= 1. \end{split} \]
We apply the same procedure as in the previous example: forward elimination. The procedure expresses some of the unknowns in terms of others by eliminating certain unknowns from all the equations except one. To begin, we scale the first equation by 1/3; then we eliminate x from every equation except the first one by adding −2 times the scaled first equation to the second and adding the scaled first equation to the third. The result is the following new system:
\[ \begin{bmatrix} \phantom{-}1&-1&-2&1 \\ \phantom{-}2&\phantom{-}4&\phantom{-}1&1 \\ -1&\phantom{-}1&\phantom{-}2&1 \end{bmatrix} \qquad \Longrightarrow \qquad \begin{bmatrix} 1&-1&-2&\phantom{-}1 \\ 0&\phantom{-}6&\phantom{-}5&-1 \\ 0&\phantom{-}0&\phantom{-}0&\phantom{-}2 \end{bmatrix} . \]
Since the pivot is situated in the last column of the augmented matrix, the given system of equations has no solution because it is impossible to satisfy the equation 0 = 2.
End of Example 12

Some equations have to be interchanged if one of the corner elements \( A_{11}, \ A_{22}^{(1)}, \ldots \) is zero, to allow Gaussian elimination to work. In the following, \( A_{ij}^{(n)} \) is the element after the nth iteration. One method is: if \( A_{kk}^{(k-1)} = 0, \) then search for an element \( A_{pk}^{(k-1)} \) with p > k that is not zero and interchange the pth and the kth equations. This strategy fails only if the matrix is singular.
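The interchange strategy just described can be sketched in Python (illustrative; the function name is mine, and the search simply takes the first nonzero candidate below the diagonal):

```python
def eliminate_with_row_swaps(a, b):
    """Forward elimination that, at step k, interchanges equations
    whenever the diagonal entry a[k][k] is zero."""
    n = len(a)
    for k in range(n - 1):
        if a[k][k] == 0:
            # search below the diagonal for a usable pivot
            p = next((r for r in range(k + 1, n) if a[r][k] != 0), None)
            if p is None:
                raise ValueError("matrix is singular")
            a[k], a[p] = a[p], a[k]      # interchange equations k and p
            b[k], b[p] = b[p], b[k]
        for j in range(k + 1, n):
            m = a[j][k] / a[k][k]
            for c in range(k, n):
                a[j][c] -= m * a[k][c]
            b[j] -= m * b[k]

# The system of Example 13 below, whose (1,1) entry is zero:
a = [[0.0, 2.0, 1.0], [1.0, -2.0, -3.0], [-1.0, 1.0, 2.0]]
b = [-8.0, 0.0, 3.0]
eliminate_with_row_swaps(a, b)

# Back substitution recovers the solution found in Example 13.
n = len(a)
x = [0.0] * n
for i in range(n - 1, -1, -1):
    s = sum(a[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (b[i] - s) / a[i][i]
print(x)   # prints: [-4.0, -5.0, 2.0]
```

In practice one usually swaps in the row with the largest pivot in absolute value (partial pivoting) rather than the first nonzero one, for numerical stability.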
Example 13: When using the Gaussian elimination technique, you can at any time exchange rows, meaning that you can switch any two rows an unlimited number of times. This is very helpful if your matrix contains a 0 in the (1,1) position.

For example, consider the system of equations

\[ \begin{split} 2\,x_2 + x_3 &=-8, \\ x_1 -2\,x_2 -3\,x_3 &= 0 ,\\ -x_1 + x_2 + 2\,x_3 &= 3; \end{split} \qquad \begin{bmatrix} \phantom{-}0&\phantom{-}2&\phantom{-}1&-8 \\ \phantom{-}1&-2&-3&\phantom{-}0 \\ -1&\phantom{-}1&\phantom{-}2&\phantom{-}3 \end{bmatrix} . \]
Because we cannot have 0 in the (1,1) position, we swap row 1 and row 2:
\[ \begin{bmatrix} \phantom{-}1&-2&-3&\phantom{-}0 \\ \phantom{-}0&\phantom{-}2&\phantom{-}1&-8 \\ -1&\phantom{-}1&\phantom{-}2&\phantom{-}3 \end{bmatrix} . \]
Adding row 1 to row 3 gives
\[ \begin{array}{c} 1 \cdot [\mbox{equation }1] \\ + [\mbox{equation }3] \\ \hline [\mbox{new equation }3] \end{array} \qquad \begin{array}{ccccc} x_1 &-2\,x_2& -3\, x_3& =&0 \\ -x_1&+ x_2 & +2\,x_3& =&3 \\ \hline &-x_2& -x_3 & =& 3 \end{array} \]
Now we swap row 2 and row 3:
\[ \begin{bmatrix} 1&-2&-3&\phantom{-}0 \\ 0&-1&-1&\phantom{-}3 \\ 0&\phantom{-}2&\phantom{-}1&-8 \end{bmatrix} . \]
Adding 2 times row 2 to row 3 gives us
\[ \begin{bmatrix} 1&-2&-3&\phantom{-}0 \\ 0&-1&-1&\phantom{-}3 \\ 0&\phantom{-}0&-1&-2 \end{bmatrix} . \]
Now add (-1) times row 3 to row 2 and add -3 times row 3 to row 1 which gives us
\[ \begin{bmatrix} 1&-2&\phantom{-}0&\phantom{-}6 \\ 0&-1&\phantom{-}0&\phantom{-}5 \\ 0&\phantom{-}0&-1&-2 \end{bmatrix} . \]
Add (-2) times row 2 to row 1
\[ \begin{bmatrix} 1&\phantom{-}0&\phantom{-}0&-4 \\ 0&-1&\phantom{-}0&\phantom{-}5 \\ 0&\phantom{-}0&-1&-2 \end{bmatrix} . \]
Finally, we multiply rows 2 and 3 by -1 to obtain Gauss-Jordan form:
\[ \begin{bmatrix} 1&0&0&-4 \\ 0&1&0&-5 \\ 0&0&1&\phantom{-}2 \end{bmatrix} . \]
This gives x1 = -4, x2 = -5, and x3 = 2.
End of Example 13
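The entire Gauss-Jordan process of Example 13 can be automated. A minimal Python sketch (the function name and the use of exact rational arithmetic are mine):

```python
from fractions import Fraction

def rref(mat):
    """Gauss-Jordan reduction to reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # find a pivot at or below row r in this column
        p = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        m[r] = [v / m[r][c] for v in m[r]]           # scale the pivot to 1
        for i in range(rows):                        # clear the rest of the column
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

# The augmented matrix of Example 13:
M = [[0, 2, 1, -8], [1, -2, -3, 0], [-1, 1, 2, 3]]
print(rref(M))   # last column gives the solution x1 = -4, x2 = -5, x3 = 2
```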

Obviously, one of the problems with the process of row reducing a matrix is the potential for human arithmetic errors. It is clear that computers can do this job much better than any human being. We can deepen our understanding of how the row reduction process works, and simultaneously eliminate arithmetic mistakes, by using a computer algebra system in a step-by-step fashion.

Example 14: Consider the system of linear equations
\[ \begin{cases} 2\,x + 3\,y + z &= -1, \\ 4\,x +7\,y + 5\,z &= 5, \\ x -2\,y +2\,z &= 11 . \end{cases} \tag{1.1} \]
First, we form the augmented matrix
\[ {\bf M} = \left[ \begin{array}{ccc|c} 2&3&1&-1 \\ 4&7&5&5 \\ 1&-2&2&11 \end{array} \right] . \]
The idea of the elimination procedure is to reduce the augmented matrix to an equivalent "upper triangular" matrix. So the first question is how to determine the pivots. Depending on this choice, we get the corresponding row echelon form. Therefore, the Gaussian algorithm may lead to different row echelon forms; hence, the row echelon form is not unique.

From a computational point of view, the pivot should be the largest element in absolute value. If we look at the first column, the largest entry is in the second row. To move it into the first row and choose "4" as a pivot, we need to swap the first two rows. It is convenient, but not necessary, to have 1 in this position, which we call the pivot.

Mathematica stores vectors and matrices as lists and lists of lists, respectively. We enter the augmented matrix, say M, row-wise in Mathematica with the command

M = {{2, 3, 1, -1}, {4, 7, 5, 5}, {1, -2, 2, 11}}
to which the program responds with somewhat unenlightening output. To see the output in matrix format, wrap the command in MatrixForm or TraditionalForm.

We can access row 1 of the matrix A with the syntax M[[1]], and similarly work with any other row of the matrix. This allows us to execute elementary row operations.

To work on the row reduction in the desired example, we first want to swap rows 1 and 3; this is accomplished by using a temporary variable “temp” in the following way:

A1 = M;
temp = A1[[1]];
A1[[1]] = A1[[3]];
A1[[3]] = temp; MatrixForm[A1]
\( \displaystyle \begin{pmatrix} 1 & -2&2&11 \\ 4&7&5&5 \\ 2&3&1&-1 \end{pmatrix} \)
Note that this output stores the result of this row operation in the matrix A1, which is convenient for use in the next step. The semicolon (;) suppresses displaying the output of the command that precedes it. After executing the most recent set of commands, the following matrix will appear on the screen: \[ \begin{pmatrix} 1 & -2&2&11 \\ 4&7&5&5 \\ 2&3&1&-1 \end{pmatrix} . \] To perform row replacement, our next step is to add (−4) · R1 to R2 (where rows 1 and 2 are denoted R1 and R2) to generate a new second row; similarly, we will add (−2) · R1 to R3 for an updated row 3. The commands that accomplish these steps are
A2 = A1;
A2[[2]] = -4*A1[[1]] + A1[[2]];
A2[[3]] = -2*A1[[1]] + A1[[3]];
MatrixForm[A2]
\( \displaystyle \begin{pmatrix} 1 & -2&2&11 \\ 0&15&-3&-39 \\ 0&7&-3&-23 \end{pmatrix} \)
Next we scale row 2 by a factor of 1/15, scale row 3 by −1/7, and then add the new second row to the new third row using the command
A3 = A2; A3[[2]] = 1/15 * A2[[2]]; A3[[3]] = -1/7 * A2[[3]]; A3[[3]] = A3[[3]] + A3[[2]]; MatrixForm[A3]
\( \displaystyle \begin{pmatrix} 1 & -2&2&11 \\ 0&1&-\frac{1}{5}&-\frac{13}{5} \\ 0&0&\frac{8}{35}&\frac{24}{35} \end{pmatrix} \)
Now we convert the matrix output into a system of equations that is equivalent to the original one (1.1):
\[ \begin{cases} x -2\,y +2\,z &= 11 , \\ y - \frac{1}{5}\,z &= -\frac{13}{5} , \\ \frac{8}{35}\, z &= \frac{24}{35} . \end{cases} \tag{1.2} \]
This system of equations (1.2) is much easier to solve than the given one (1.1). The last equation in (1.2) gives z = 3 without any help from Mathematica. Substituting this value of z into the first two equations, we get
\[ \begin{cases} x -2\,y +6 &= 11 , \\ y - \frac{3}{5} &= -\frac{13}{5} . \end{cases} \]
Again, the last equation gives y = −2. Hence the first equation reduces to \[ x +4 +6 = 11 \qquad \Longrightarrow \qquad x = 1. \] We check with Mathematica:
RowReduce[M]
{{1, 0, 0, 1}, {0, 1, 0, -2}, {0, 0, 1, 3}}
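The whole hand computation above, forward elimination followed by back substitution, can be packaged into a short routine. Below is a minimal Python sketch (an illustration independent of the Mathematica session; exact rational arithmetic via the fractions module mirrors the hand computation):

```python
from fractions import Fraction

# Augmented matrix with the rows already swapped so the pivot is 1,
# as in the worked example.
M = [[Fraction(v) for v in row] for row in
     [[1, -2, 2, 11],
      [4, 7, 5, 5],
      [2, 3, 1, -1]]]

n = 3
# Forward elimination: zero out the entries below each pivot.
for k in range(n):
    for i in range(k + 1, n):
        factor = M[i][k] / M[k][k]
        M[i] = [a - factor * b for a, b in zip(M[i], M[k])]

# Back substitution: solve from the last equation upward.
x = [Fraction(0)] * n
for i in reversed(range(n)):
    s = M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    x[i] = s / M[i][i]

print(x)  # [Fraction(1, 1), Fraction(-2, 1), Fraction(3, 1)]
```

The output agrees with the solution x = 1, y = −2, z = 3 found above.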

Another option is to choose the "2" in the first row as the pivot because it is not zero. We want to start with a 1 in the first position of the first column. To achieve this, we can either divide each entry in the first row by 2 or swap the first and third rows, since the last row already has a "1" in the first position. If we stick with the former choice, we begin by scaling the first row by 1/2.

\[ {\bf M} \,\sim\, \left[ \begin{array}{ccc|c} 1&3/2&1/2&-1/2 \\ 4&7&5&5 \\ 1&-2&2&11 \end{array} \right] \,\sim\, \left[ \begin{array}{ccc|c} 1&3/2&1/2&-1/2 \\ 0&1&3&7 \\ 1&-2&2&11 \end{array} \right] . \]

In Sage, row replacement is carried out with the method add_multiple_of_row: the first argument is the index of the row to be replaced, the second argument is the row to form a multiple of, and the final argument is the scale factor. Thus, M.add_multiple_of_row(n,m,A) replaces row n with (row n)+A*(row m).

\[ {\bf M} \,\sim\, \left[ \begin{array}{ccc|c} 1&3/2&1/2&-1/2 \\ 4&7&5&5 \\ 1&-2&2&11 \end{array} \right] \,\sim\, \left[ \begin{array}{ccc|c} 1&3/2&1/2&-1/2 \\ 0&1&3&7 \\ 0&-7/2&3/2&23/2 \end{array} \right] \]
or without scaling
\[ {\bf M} \,\sim\, \left[ \begin{array}{ccc|c} 2&3&1&-1 \\ 0&1&3&7 \\ 0&-7&3&23 \end{array} \right] \,\sim\, \left[ \begin{array}{ccc|c} 1&3/2&1/2&-1/2 \\ 0&1&3&7 \\ 0&-7/2&3/2&23/2 \end{array} \right] . \]

Since the entries below the first pivot are already zero, we can move on to the second column. The second row already has a 1 in the second position, so it serves as the pivot with no scaling or swap needed. To eliminate the −7 below it, we add 7 times the second row to the third row:

\[ {\bf M} \,\sim\, \left[ \begin{array}{ccc|c} 2&3&1&-1 \\ 0&1&3&7 \\ 0&0&24&72 \end{array} \right] . \]

The matrix is now upper triangular. To get a 1 as the last entry in the third column, we scale the third row by 1/24:

\[ {\bf M} \,\sim\, \left[ \begin{array}{ccc|c} 2&3&1&-1 \\ 0&1&3&7 \\ 0&0&1&3 \end{array} \right] . \]

At this point, the matrix is in echelon form (having 1's down the diagonal of the matrix is not required, but it does make our work easier). All that remains to find the solution is to put the matrix in reduced echelon form, which requires replacing every entry above a pivot with zero. We will start with the third column and work our way up and to the left. Remember that this is an augmented matrix and we are going to ignore the right-most column; it just "goes along for the ride."

Therefore, our solution is complete. We see that the solution to our linear system is
\( x_1 = 1, \ x_2 = -2, \ \mbox{and} \ x_3 = 3. \)
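Reduced row echelon form itself can be computed by a short Gauss–Jordan routine. The Python sketch below (the helper rref is our own illustration, not a library function) reproduces the RowReduce result for this example:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination: reduce a matrix to reduced row
    echelon form using exact rational arithmetic."""
    A = [[Fraction(v) for v in row] for row in rows]
    nrows, ncols = len(A), len(A[0])
    r = 0
    for c in range(ncols):
        # find a nonzero pivot in column c at or below row r
        piv = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]          # swap the pivot row up
        A[r] = [a / A[r][c] for a in A[r]]   # scale the pivot to 1
        for i in range(nrows):               # clear the rest of the column
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == nrows:
            break
    return A

M = [[2, 3, 1, -1], [4, 7, 5, 5], [1, -2, 2, 11]]
print(rref(M))  # rows [1,0,0,1], [0,1,0,-2], [0,0,1,3]
```

The last column of the result again reads off the solution (1, −2, 3).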

End of Example 14

The following subroutine was prepared by Nasser M. Abbasi (see his website for possible updates: https://12000.org/my_notes/rref/index.htm).

displayREF[Ain_?(MatrixQ[#] &), displayMat_: True, normalizePivot_: True] :=
 Module[{multiplier, j, i, pivotRow, pivotCol, nRows, nCols, p, tmp,
   startAgain, A = Ain, n, m, pivotsFound = {}, keepEliminating, nIter,
   entry, debug = False},
  Print[MatrixForm[A]];
  {nRows, nCols} = Dimensions[A];
  keepEliminating = True; n = 1; m = 1; nIter = 0;
  While[keepEliminating,
   nIter++;
   If[nIter > 100, (* safeguard *)
    Return["Internal error. Something went wrong. Or very large system?", Module]];
   If[debug, Print["Row = ", n, " column = ", m]];
   If[m == nCols,
    keepEliminating = False,
    If[A[[n, m]] =!= 0,
     (* nonzero pivot found at (n, m) *)
     Print["Pivot is A(", n, ",", m, ")"];
     If[displayMat, Print@makeNiceMatrix[A, {n, m}]];
     If[normalizePivot,
      If[A[[n, m]] =!= 1,
       A[[n, All]] = A[[n, All]]/A[[n, m]]; A = Simplify[A];
       Print["Making the pivot 1 using row(", n, ")= row(", n, ")/A(", n, ",", m, ")"];
       If[displayMat, Print@makeNiceMatrix[A, {n, m}]]]];
     If[n < nRows,
      Do[
       If[A[[j, m]] =!= 0,
        multiplier = A[[j, m]]/A[[n, m]];
        Print["Zeroing out element A(", j, ",", m, ") using row(", j, ")=",
         multiplier, "*row(", n, ")-row(", j, ")"];
        A[[j, m ;;]] = A[[j, m ;;]] - multiplier*A[[n, m ;;]];
        A = Simplify[A];
        If[displayMat, Print@makeNiceMatrix[A, {n, m}]]],
       {j, n + 1, nRows}]];
     pivotsFound = AppendTo[pivotsFound, {n, m}];
     If[n == nRows,
      keepEliminating = False,
      n++; If[m < nCols, m++]],
     (* pivot is zero; if we can find a nonzero pivot row below, exchange rows *)
     Print["Entry (", n, ",", m, ") is zero. Looking to find non-zero pivot below in order to exchange rows"];
     If[n == nRows && m == nCols,
      If[debug, Print["keepEliminating is set to false since reached end of matrix"]];
      keepEliminating = False,
      If[n < nRows,
       If[debug, Print["Calling FirstPosition[A[[n+1;;,m]],_?(# =!=0)&]"]];
       If[debug, Print["Where A=", A, " and n+1=", n + 1, " and m=", m]];
       p = SparseArray[A[[n + 1 ;;, m]]]["NonzeroPositions"];
       If[debug, Print["p=", p]];
       If[Length[p] == 0,
        If[m < nCols,
         If[debug, Print["p===Missing. So m<nCols, making column now from m=", m, " to ", m + 1]];
         m++,
         keepEliminating = False],
        (* found a nonzero pivot below; exchange rows *)
        p = First@p[[1]];
        Print["Found non zero pivot"];
        tmp = A[[n, All]]; A[[n, All]] = A[[p + n, All]]; A[[p + n, All]] = tmp;
        A = Simplify[A];
        Print["Exchanging row(", n, ") and row(", p + n, ")"];
        If[displayMat, Print@makeNiceMatrix[A, {n, m}]]],
       If[m < nCols, m++, keepEliminating = False]]]]]];
  {A, pivotsFound}]

Example 15: We consider a system of three equations that has infinitely many solutions \[ \begin{cases} \phantom{5}x+ 2\,y + 3\,z &= 0 , \\ 4\,x + 5\,y + 6\,z &= 3, \\ 7\,x + 8\,y + 9\,z &= 6 . \end{cases} \] The matrix of this system \[ {\bf A} = \begin{bmatrix} 1 & 2& 3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} \] is not invertible because its determinant vanishes:
A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
Det[A]
0
Its kernel is spanned by the vector v = (1, −2, 1), as Mathematica confirms:
NullSpace[A]
{{1, -2, 1}}
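It is easy to confirm the kernel computation directly: multiplying A by v = (1, −2, 1) should give the zero vector. A quick Python check (an illustration only):

```python
# Verify that v = (1, -2, 1) lies in the kernel of A, i.e. A.v = 0.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
v = [1, -2, 1]

# matrix-vector product, row by row
Av = [sum(a * x for a, x in zip(row, v)) for row in A]
print(Av)  # [0, 0, 0]
```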
To find its general solution, we apply row reduction to the augmented matrix \[ {\bf M} = \begin{bmatrix} 1 & 2& 3&0 \\ 4&5&6&3 \\ 7&8&9&6 \end{bmatrix} . \]
b = {{0}, {3}, {6}};
M = Join[A, b, 2]
{{1, 2, 3, 0}, {4, 5, 6, 3}, {7, 8, 9, 6}}
In the code above, we define the vector b as a column. Now we apply row reduction using the subroutine displayREF:
M = {{1, 2, 3, 0}, {4, 5, 6, 3}, {7, 8, 9, 6}}; displaymat = True; normalizePivot = True; {result, pivots} = displayREF[M, displaymat, normalizePivot]
>>>>>>Starting forward Gaussian elimination phase using    \( \displaystyle \begin{pmatrix} 1 & 2& 3&0 \\ 4&5&6&3 \\ 7&8&9&6 \end{pmatrix} \)
pivot now is (1,1)
R(2) --> R(2) − (4) * R(1)    \( \displaystyle \begin{pmatrix} 1 & 2& 3&0 \\ 0&-3&-6&3 \\ 7&8&9&6 \end{pmatrix} \)
R(3) --> R(3) − (7) * R(1)    \( \displaystyle \begin{pmatrix} 1 & 2& 3&0 \\ 0&-3&-6&3 \\ 0&-6&-12&6 \end{pmatrix} \)
pivot now is (2,2)
Scaling pivot row so that the pivot element is one. ===>    \( \displaystyle \begin{pmatrix} 1 & 2& 3&0 \\ 0&1&2&-1 \\ 0&-6&-12&6 \end{pmatrix} \)
R(3) --> R(3) − (−6) * R(2)    \( \displaystyle \begin{pmatrix} 1 & 2& 3&0 \\ 0&1&2&-1 \\ 0&0&0&0 \end{pmatrix} \)
From this row-reduced matrix, we conclude that \[ \begin{split} x + 2\,y + 3\, z &= 0 , \\ y + 2\, z &= -1. \end{split} \] Hence, the general solution of the given system is \[ x = 2 + z , \quad y = -1 -2\,z , \] where z is a free variable. We check with Mathematica:
c = {0, 3, 6};
LinearSolve[A, c]
{2, -1, 0}
This answer tells us that when z = 0, the other variables become x = 2, y = −1.
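The general solution can also be verified by substituting x = 2 + z and y = −1 − 2z back into the system for several values of the free variable z. A quick Python check (an illustration only):

```python
# Check the general solution x = 2 + z, y = -1 - 2z for several
# values of the free variable z.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [0, 3, 6]

for z in (-2, 0, 1, 5):
    x, y = 2 + z, -1 - 2 * z
    # left-hand side of each equation for this choice of z
    lhs = [sum(a * s for a, s in zip(row, (x, y, z))) for row in A]
    assert lhs == b
print("general solution verified")
```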
End of Example 15

  1. For each set of matrices, determine which elementary operations can transform matrix A to matrix B.
    1. \[ {\bf A} = \begin{bmatrix} 1 & 5 & -3 & 2\\ 2 & 2 & 3 & 5\\ 0 & 2 & 4 & -3\\ \end{bmatrix} \qquad {\bf B} = \begin{bmatrix} 1 & 5 & -3 & 2\\ 0 & -8 & 9 & 1\\ 0 & 2 & 4 & -3\\ \end{bmatrix} ; \]
    2. \[ {\bf A} = \begin{bmatrix} 2 & 3 & 7 & 1\\ 0 & 4 & 3 & 5\\ 1 & 2 & 7 & 4\\ \end{bmatrix} \qquad {\bf B} = \begin{bmatrix} 2 & 3 & 7 & 1\\ 1 & 2 & 7 & 4\\ 0 & 4 & 3 & 5\\ \end{bmatrix} ; \]
    3. \[ {\bf A} = \begin{bmatrix} 3 & 2 & 1 & -1\\ 4 & 6 & 5 & 2\\ 1 & 7 & -2 & 3\\ \end{bmatrix} \qquad {\bf B} = \begin{bmatrix} 0 & -19 & 7 & -10\\ 4 & 6 & 5 & 2\\ 1 & 7 & -2 & 3\\ \end{bmatrix} . \]
  2. For the following problems, use row operations to solve the system of equations.
    1. \begin{equation*} x_{1}+2\,x_{2}=-3 \end{equation*} \begin{equation*} 2\,x_{1}+3\,x_{2}=4 \end{equation*}
    2. \begin{equation*} 3\,x_{1}+x_{2}=6 \end{equation*} \begin{equation*} 3\,x_{1}+3\,x_{2}=2 \end{equation*}
  3. In the following problems, solve the systems of linear equations given their augmented matrices.
    1. \begin{equation*} \begin{bmatrix} 1 & -1 & 2 & -1 & | & 5\\ 0 & 5 & -1 & 3 & | & 4\\ 0 & 0 & 3 & -1 & | & 5\\ 0 & 0 & 0 & 2 & | & 8\\ \end{bmatrix} \end{equation*}
    2. \begin{equation*} \begin{bmatrix} 1 & -2 & 2 & | & 4\\ 0 & 3 & -2 & | & 1\\ 0 & 0 & 0 & | & 3\\ \end{bmatrix} \end{equation*}
  4. For the following problems, solve the system of equations.
    1. \begin{equation*} 2\,x_{2}+x_{3}=3 \end{equation*} \begin{equation*} 2\,x_{1}+x_{2} -\,2x_{3}=-3 \end{equation*} \begin{equation*} 4\,x_{1}+4\,x_{2}-3\,x_{3}=-3 \end{equation*}
    2. \begin{eqnarray*} 3\,x_{1}-2\,x_{2}-1\,x_{3} &=-1 , \\ 2\,x_{1}-1\,x_{2}+2\,x_{3} &=4 , \\ x_{2}-2\,x_{3} &=-6. \end{eqnarray*}
  5. For the following problems, find the value or values of f, if any, which make the matrix an augmented matrix of a consistent linear system.
    1. \[ \begin{bmatrix} 2 & f & 3\\ -4 & -8 & -6 \end{bmatrix} ; \]
    2. \[ \begin{bmatrix} 1 & f & -3\\ 2 & 4 & -8 \end{bmatrix} ; \]
    3. \[ \begin{bmatrix} 2 & -1 & f\\ -4 & 2 & 6\\ \end{bmatrix} . \]
  6. For the following problems, use row operations to solve the system of equations.
    1. \begin{equation*} x_{1}+x_{2}-x_{3}=9 \end{equation*} \begin{equation*} x_{2}+3x_{3}=3 \end{equation*} \begin{equation*} -x_{1}-2x_{3}=2 \end{equation*}
    2. \begin{equation*} 4x_{1}+8x_{2}-4x_{3}=4 \end{equation*} \begin{equation*} 3x_{1}+8x_{2}+5x_{3}=-11 \end{equation*} \begin{equation*} -2x_{1}+x_{2}+12x_{3}=-17 \end{equation*}
    3. \begin{equation*} x_{1}+2x_{2}+3x_{3}=4 \end{equation*} \begin{equation*} 2x_{1}+3x_{2}+3x_{3}=1 \end{equation*} \begin{equation*} 4x_{1}+2x_{2}+3x_{3}=1 \end{equation*}
  7. Determine whether the following augmented matrices represent consistent systems of equations and how many solutions exist.
    1. A= \begin{bmatrix} 1 & 4 & -2 & | & 8\\ 5 & 7 & -5 & | & 6\\ -3 & 2 & -6 & | & 6\\ \end{bmatrix}
    2. A= \begin{bmatrix} 1 & 1 & 1 & | & 3\\ 2 & 0 & -1 & | & 1\\ 0 & 1 & -1 & | & 0\\ \end{bmatrix}
    3. A= \begin{bmatrix} 1 & -1 & -1 & | & 1\\ -1 & 2 & 2 & | & -4\\ 3 & 2 & 1 & | & -12\\ \end{bmatrix}
    4. A= \begin{bmatrix} 3 & 1 & -3 & | & 1\\ -1 & 2 & 2 & | & -1\\ 2 & 3 & -1 & | & 1\\ \end{bmatrix}
  8. Determine whether the system with complex coefficients is consistent. If it is, find the general solution.
    1. \( \displaystyle \begin{split} z_1 - {\bf j}\,z_2 &= 2 - {\bf j} , \\ 3\,z_1 + \left( 1 - 2{\bf j} \right) z_2 &= 5 - 2 {\bf j} ; \end{split} \)
    2. \( \displaystyle \begin{split} \left( 1 - 2{\bf j} \right) z_1 - {\bf j}\, z_2 &= 1 + 2{\bf j} , \\ \left( 1 + 3{\bf j} \right) z_1 - \left( 1 + {\bf j} \right) z_2 &= {\bf j} -3 ; \end{split} \)
    3. \( \displaystyle \begin{split} \left( 1 + {\bf j} \right) z_1 + 2\,z_2 &= 1 + {\bf j} , \\ \left( 1 - {\bf j} \right) z_1 + {\bf j} \, z_2 &= 1 - {\bf j} ; \end{split} \)
    4. \( \displaystyle \begin{split} z_1 + 2\, z_2 + \left( 3 + 2{\bf j} \right) z_3 &= 5 + {\bf j} , \\ 2{\bf j}\, z_1 + 7{\bf j}\, z_2 + \left( 4 + 7 {\bf j} \right) z_3 &= 2 + 21 {\bf j} . \end{split} \)

  1. Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
  2. Beezer, R., A First Course in Linear Algebra, 2015.
  3. Beezer, R., A Second Course in Linear Algebra, 2013.
  4. Casady, Chelsea, Gaussian Elimination and its History, Liberty University
  5. Davis, E., Linear Algebra and Probability for Computer Science Applications, CRC Press, 2012.
  6. Grcar, Joseph F., Mathematicians of Gaussian elimination, Notices of the American Mathematical Society, 58, No. 6, 2011, 782--792.
  7. Gaussian elimination for dummies
  8. Examples of Gaussian Elimination, Dartmouth College.
  9. Higham, Nicholas, Gaussian Elimination, Manchester Institute for Mathematical Sciences, School of Mathematics, The University of Manchester, 2011.
  10. Systems of Linear Equations
  11. Elementary Row Operations
  12. Elementary Row Operations for Matrices
  13. Automatic Row Reduction Input Form, A program that performs Gaussian elimination similarly to a human working on paper.
  14. Reduce Row Echelon Form on line by Kardi Teknomo.
  15. An online tool for performing elementary operations with rows and columns.
  16. Gauss--Jordan Elimination Calculator.