Various modifications have been made to the method of elimination, and we present one of these, known as the Gauss--Jordan method.
It was the German geodesist Wilhelm Jordan (1842--1899), and not the French mathematician Camille Jordan (1838--1922), who introduced the Gauss--Jordan method of solving systems of linear equations. Camille Jordan is credited with the Jordan normal form, a well-known topic in linear algebra.
RREF application:
We apply row operations to reduce a matrix to its reduced row echelon form. To achieve this, we use the following subroutine written by Nasser M. Abbasi (https://12000.org/my_notes/rref/index.htm); it relies on his companion routine displayREF for the forward phase.
displayRREF[Ain_?(MatrixQ[#] &), displayMat_: True, normalizePivot_: True] :=
 Module[{multiplier, j, i, pivotRow, pivotCol, nRows, nCols, p, tmp,
   startAgain, A = Ain, n, m, pivotsFound, keepEliminating, nIter,
   entry, debug = False},
  (* forward phase: displayREF returns the row echelon form and the
     list of pivot positions {row, column} *)
  {A, pivotsFound} = displayREF[Ain, displayMat, normalizePivot];
  If[debug, Print["Completed forward"]];
  If[debug, Print["pivotsFound ", pivotsFound]];
  (*If[displayMat,Print@makeNiceMatrix[A,{n,m}]];*)
  Print[">>>>>>Starting backward elimination phase. The pivots are ", pivotsFound];
  (* backward phase: for each pivot, zero out the entries above it *)
  Do[
   pivotRow = First@entry;
   pivotCol = Last@entry;
   If[debug, Print["pivotRow= ", pivotRow]];
   If[debug, Print[" pivotCol= ", pivotCol]];
   If[pivotRow > 1,
    Do[
     If[A[[i, pivotCol]] =!= 0,
      Print["Zeroing out element A(", i, ",", pivotCol, ") using row(",
       i, ")=row(", i, ")-A(", i, ",", pivotCol, ")*row(", pivotRow, ")"];
      A[[i, ;;]] = A[[i, ;;]] - A[[i, pivotCol]]*A[[pivotRow, ;;]];
      A = Simplify[A];
      If[displayMat, Print@makeNiceMatrix[A, {pivotRow, pivotCol}]]
      ],
     {i, pivotRow - 1, 1, -1}
     ]
    ],
   {entry, pivotsFound}
   ];
  {A, pivotsFound}
  ];
(*----------------------------------------------------------*)
(* Given an rref matrix A, right-hand side b, and the list of pivot columns,
   print the general solution, distinguishing leading and free variables *)
makeSolutionSpecialCase[A_?(MatrixQ[#] &), b_?(VectorQ[#] &), pivotCols_List] :=
 Module[{nRows, nCols, nLeadingVariables, nFreeVariables, n, m, k,
   variables = {}, eq, freeVariables, sol = {}},
  Print["Pivot columns are ", MatrixForm[pivotCols]];
  ClearAll[x, t]; (*did not make them local, to prevent $ from showing in print*)
  {nRows, nCols} = Dimensions[A];
  nLeadingVariables = Length[pivotCols];
  nFreeVariables = nCols - nLeadingVariables;
  Print["There are ", nLeadingVariables, " leading variables and ",
   nFreeVariables, " free variables. These are "];
  Array[t, nFreeVariables];
  Array[x, nCols];
  m = 0; k = 0;
  Do[
   If[Not[MemberQ[pivotCols, n]],
    m++;
    Print[x[n], " is a free variable. Let ", x[n], "=", t[m]];
    AppendTo[variables, t[m]];
    AppendTo[sol, 0],
    (* else: n is a pivot column *)
    Print[x[n], " is a leading variable"];
    AppendTo[variables, x[n]];
    AppendTo[sol, b[[++k]]]
    ],
   {n, 1, nCols}
   ];
  freeVariables = (t[#] & /@ Range[nFreeVariables]);
  Print["Hence the system after RREF is the following>>>>>>"];
  Print[MatrixForm[A . variables], "=", MatrixForm[b]];
  Print["There is a different solution for each value of the free variables."];
  Print["Setting free variables ", freeVariables, " to zero gives"];
  variables = variables /. ((t[#] -> 0) & /@ Range[nFreeVariables]);
  Print[MatrixForm[A . variables], "=", MatrixForm[b]];
  Print["Therefore the final solution is "];
  Print[MatrixForm[x[#] & /@ Range[nCols]], "=", MatrixForm[sol]]
  ]
makeNiceMatrix[mat_?MatrixQ,pivot_List]:=Module[{g,nRow,nCol},
{nRow,nCol} = Dimensions[mat];
g = Grid[mat,Frame->{None,None,{pivot->True}}];
MatrixForm[{{g}}]
]
(*thanks to http://mathematica.stackexchange.com/questions/60613/how-to-add-a-vertical-line-to-a-matrix*)
(*makes a dash line inside Matrix*)
Format[matWithDiv[n_,opts:OptionsPattern[Grid]][m_?MatrixQ]]:=MatrixForm[{{Grid[m,opts,Dividers->{n->{Red,Dashed}}]}}];
■
End of Mathematica script
Reduced Row Echelon Form
Amazingly, there’s a fairly simple procedure, called the Gauss--Jordan
algorithm, that finds all solutions of any n × m linear system, no
matter how big m and n are. Admittedly, it can require many steps.
In such cases humans get tired, bored, and tend to make mistakes.
Computers, however, have none of those problems, and can solve systems with millions of equations and millions of unknowns.
Elimination produces an upper triangular system, called row echelon form for Gaussian elimination and reduced row echelon form for the Gauss--Jordan algorithm. Gaussian elimination introduces zeroes below the pivots, while the Gauss--Jordan algorithm contains an additional phase in which it introduces zeroes above the pivots.
Since the Gauss--Jordan algorithm therefore involves roughly 50% more operations than Gaussian elimination, most computer algorithms are based on the latter.
In this section, we discuss an algorithm for reducing any matrix, whether or not it is viewed as an augmented matrix of a linear system, to a simple form that can be used for further analysis and/or computation. This form is known as the reduced row echelon form, usually abbreviated as rref. The algorithm that produces it, usually called Gauss--Jordan elimination, uses row operations and consists of two stages: a forward phase (usually referred to as Gaussian elimination) in which the matrix is transformed into a kind of upper triangular or "steplike" matrix (see the previous section), and a backward phase in which zeroes are introduced above the pivot in each pivot column. When a linear system of equations is solved numerically, it is nearly always done with Gaussian elimination. The second stage is not essential for solving linear systems of equations because the Gauss--Jordan method is inefficient for practical calculations: Gaussian elimination finishes with backward substitution, whereas the Gauss--Jordan algorithm performs backward elimination, which involves roughly 50% more operations. Gauss--Jordan elimination is an essential algorithm for solving other problems, such as matrix equations, but for linear systems of equations its value is purely theoretical.
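To make the 50% figure concrete, recall the standard multiplication counts for a dense n × n system (a textbook estimate, not derived in this section):
\[
\underbrace{\frac{n^3}{3} + O(n^2)}_{\text{Gaussian elimination with backward substitution}} \qquad \text{versus} \qquad \underbrace{\frac{n^3}{2} + O(n^2)}_{\text{Gauss--Jordan elimination}} ,
\]
and n³/2 is exactly 50% more than n³/3.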
The basic idea of backward elimination was popularized by the German engineer Wilhelm Jordan in his book on geodesy (the science of measuring the Earth's shape) entitled Handbuch der Vermessungskunde and published in 1888. However, the method also appeared in an article by Clasen published in the same year; Jordan and Clasen probably discovered Gauss--Jordan elimination independently.
Johann Carl Friedrich Gauss (1777--1855) was a German mathematician and physicist who made significant contributions to many fields in mathematics and sciences.
Wilhelm Jordan (1842--1899) was a German geodesist who conducted surveys in Germany and Africa and founded the German geodesy journal.
Since the first stage of the Gauss--Jordan algorithm was discussed in the previous section, it is assumed that the reader is familiar with it. For further exposition, we need some new definitions along with the previously discussed terminology. The aim of Gaussian elimination is to reduce a full system of linear equations in n unknowns to triangular form using elementary row operations, thereby reducing a problem that we can't solve to one that we can. Mathematica has a special command, RowReduce, for the Gauss--Jordan algorithm. The only unfortunate aspect of the command RowReduce is that it does not actually give the solution to the original system: you must still read the solution off the rref matrix yourself. On the other hand, the Solve command gives the actual solution to the system, and you can specify which variables to solve for; in this case, you should know in advance the dimension of the solution set.
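As a quick illustration (the system is the one solved in Example 3 below; the variable names x1, x2, x3 are our own choice):
A = {{1, 2, -1}, {2, -1, 3}, {3, -3, -4}};
b = {6, -3, 1};
RowReduce[Join[A, Transpose[{b}], 2]] (* rref of the augmented matrix *)
{{1, 0, 0, 1}, {0, 1, 0, 2}, {0, 0, 1, -1}}
Solve[A . {x1, x2, x3} == b, {x1, x2, x3}]
{{x1 -> 1, x2 -> 2, x3 -> -1}}
With RowReduce we still have to read the solution off the last column; Solve returns it directly.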
Two matrices are row-equivalent if one can be obtained from the other by a sequence of row operations.
[Figure: schematic of a matrix in row echelon form, with pivots p1, p2, p3, … , pr stepping down to the right and zeros below and to the left of the pivots; the number of pivots equals the rank r.]
Theorem 1:
Suppose that two matrices are row-equivalent augmented matrices. Then the systems of linear equations that they represent are equivalent systems.
A matrix is in reduced row echelon form (rref) if it meets all of the following conditions:
If there is a row (called a zero row) where every entry is zero, then this row lies below any other row that contains a nonzero entry.
The first nonzero entry of a nonzero row is a 1. This is called a leading 1 and its location in the matrix is referred to as the pivot position. A column that contains a pivot position is called a pivot column.
Each leading 1 lies in a column to the right of the leading 1 of any row above it, so the pivots form a staircase descending to the right.
Any column in which a leading 1 appears has zeroes in every other entry.
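For instance (an illustration of our own), the matrix
\[
\begin{bmatrix} 1&0&2 \\ 0&1&3 \\ 0&0&0 \end{bmatrix}
\]
satisfies all of the conditions, whereas
\[
\begin{bmatrix} 1&5&2 \\ 0&1&3 \\ 0&0&0 \end{bmatrix}
\]
is only in row echelon form: the second column contains a leading 1 but also the nonzero entry 5 above it.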
Theorem 2:
Every matrix has a unique reduced row echelon form.
Suppose that A is an m×n matrix that is row-equivalent to two m×n matrices B and C, both in reduced row-echelon form. We need to show that B = C.
If B and C are both row-equivalent to A, then they are row-equivalent to each other. Repeated row operations on a matrix combine the rows with each other using operations that are linear and identical in each column. A key observation for this proof is that each individual row of B is linearly related to the rows of C. This relationship is different for each row of B, but once we fix a row, the relationship is the same across columns. More precisely, there are scalars δij, i, j = 1, 2, … , m, such that
\[
[{\bf B}]_{ij} = \delta_{i1} [{\bf C}]_{1j} + \delta_{i2} [{\bf C}]_{2j} + \cdots + \delta_{im} [{\bf C}]_{mj} , \qquad 1 \le j \le n .
\]
So an entry of row i of matrix B (in column j) is a
linear function of the entries of all the rows of matrix C that are
also in column j, and the scalars (δik) depend on which row
of B we are considering (the i subscript on δik), but
are the same for every column (no dependence on j in δik).
We now repeatedly exploit the fact that B and C are in reduced
row-echelon form. Recall that a pivot column is all zeroes except for a
single 1. More carefully, if R is a matrix in reduced row-echelon form and
dℓ is the index of a pivot column, then [R]k,dℓ = 1 precisely when k = ℓ and is
otherwise zero. Notice also that any entry of R that is both below the
entry in row ℓ and to the left of column dℓ is also zero
(with below and left understood to include equality). In other words, look at
examples of matrices in reduced row-echelon form and choose a leading 1 (with
a box around it). The rest of the column is also zeros, and the lower-left
“quadrant” of the matrix that begins there is totally zeros.
Without assuming any relationship between the forms of B and C, let B
have r nonzero rows and denote its pivot columns by D =
{ d1, d2, … , dr }. For C, let
p denote the number of nonzero rows and denote its pivot columns by
P = { e1, e2, … , ep }. There are
four steps in the proof, and the first three are about showing that B
and C have the same number of pivot columns, in the same places. In
other words, the “e” symbols are a necessary fiction.
First Step. Suppose that d1 < e1. Then the leading 1 in row 1 of B would be
a combination of entries of C that are all zero, since they lie to the left of
and below the leading 1 in row 1 and column e1 of C. This is a
contradiction, so we know that d1 ≥ e1. By the same argument with the roles
of B and C reversed, e1 ≥ d1. Together this means that d1 = e1.
Second Step. Suppose that we have determined that
d1 = e1, d2 = e2, … , du = eu. Let us now show that
du+1 = eu+1. Working towards a contradiction, suppose that
du+1 < eu+1. Then, arguing on row u + 1 exactly as in the first step
(the pivot columns dℓ = eℓ for 1 ≤ ℓ ≤ u are already matched), we reach a
contradiction, so du+1 ≥ eu+1; reversing the roles of B and C gives
du+1 = eu+1.
Third Step. Suppose now that u < r. A computation of the same kind shows
that row r of B would then be a totally zero row, contradicting that it is the
bottommost nonzero row of B. Hence, u ≥ r. By an
entirely similar argument, reversing the roles of B and C, we
would conclude that u ≤ r, and therefore u = r.
Thus, combining the first three steps we can say that D = P.
In other words, B and C have the same pivot columns, in the
same locations.
Fourth Step. In this final step, we will not argue by contradiction. Our
intent is to determine the values of the δij. Notice that we can
use the values of the di interchangeably for B and
C. Evaluating the relation above in the pivot columns shows that
δij = 1 when i = j and δij = 0 otherwise.
So B and C have equal values in every entry, and so are the same
matrix.
■
input m, n, and A
r := 0
for j := 1 to n
    i := r + 1
    while i <= m and A[i,j] == 0
        i := i + 1
    if i < m + 1
        r := r + 1
        swap rows i and r of A          (row op 1)
        scale A[r,j] to a leading 1     (row op 2)
        for k := 1 to m, k <> r
            make A[k,j] zero            (row op 3, employing row r)
output r and A
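A direct transcription of this pseudocode into Mathematica might look as follows. This is a sketch of our own (the built-in RowReduce does the same job), and it assumes exact entries so that the test A[[i, j]] == 0 is reliable:
rrefSketch[Ain_?MatrixQ] := Module[{A = Ain, m, n, r = 0, i},
  {m, n} = Dimensions[A];
  Do[
   i = r + 1;
   While[i <= m && A[[i, j]] == 0, i++];
   If[i <= m,
    r++;
    A[[{i, r}]] = A[[{r, i}]];   (* row op 1: swap rows i and r *)
    A[[r]] = A[[r]]/A[[r, j]];   (* row op 2: scale to a leading 1 *)
    Do[If[k != r, A[[k]] -= A[[k, j]]*A[[r]]], {k, m}]  (* row op 3 *)
    ],
   {j, n}];
  {r, A}]
For example, rrefSketch[{{0, 1, 2, 0, 3}, {0, 0, 0, 1, 4}, {0, 0, 0, 0, 0}}] returns the rank 2 together with the matrix itself, which is already in reduced row echelon form.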
MATLAB code:
function sol = GaussJ(Ab)
% Naive Gauss-Jordan elimination on an augmented matrix Ab
% (no pivoting, so every diagonal entry it divides by must be nonzero).
% Example: GaussJ([1 2 -1 6; 2 -1 3 -3; 3 -3 -4 1]) returns the rref.
[m, n] = size(Ab);
for i = 1:m
    Ab(i,:) = Ab(i,:)/Ab(i,i);               % scale row i to get a leading 1
    for j = 1:m
        if j == i; continue;
        end
        Ab(j,:) = Ab(j,:) - Ab(j,i)*Ab(i,:); % zero out column i in row j
    end
end
sol = Ab;
As we see, the given matrix A is reduced to the rref form that contains
three pivots in columns 1, 3, and 6. Therefore, the corresponding system of
linear equations has three leading variables x1,
x3, and x6, and three free variables
x2, x4, x5.
Solving for the leading variables (marked in red), we obtain
It is clear that the rref matrix is independent of the order of the equations,
or, equivalently, of the order of the rows in the augmented matrix associated
with the linear system. Now we solve the given system of linear equations
using the standard Solve command:
Every homogeneous system of linear equations is consistent because all such systems have a trivial solution: x1 = x2 = ... = xn = 0. If there are other solutions, they are called nontrivial solutions. So there are only two possibilities for homogeneous systems of equations:
The system has only the trivial solution.
The system has infinitely many solutions in addition to the trivial
solution.
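A quick Mathematica illustration of the second possibility, using a toy singular matrix of our own:
A = {{1, 2}, {2, 4}};  (* the second row is a multiple of the first *)
NullSpace[A]
{{-2, 1}}
Every multiple of {-2, 1} is a nontrivial solution of A x = 0.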
Example 3:
Use Gauss--Jordan elimination to solve the linear system
\[
\begin{aligned}
x_1 + 2\,x_2 - x_3 &= 6 , \\
2\,x_1 - x_2 + 3\,x_3 &= -3 , \\
3\,x_1 - 3\,x_2 - 4\,x_3 &= 1 .
\end{aligned}
\]
The first elimination step is to eliminate the elements a21 =
2 and a31 = 3 in the original matrix A =
[aij] by subtracting the multiples m21 =
2 and m31 = 3 of row 1 from rows 2 and 3, respectively,
which gives
This completes the forward elimination stage required by the Gaussian
elimination.
To obtain the reduced row echelon form, we first need to divide the second row
and the third row by -5 and -10, respectively, in order to see leading 1's on
the main diagonal:
The first elimination step in the backward stage is to eliminate the
element a12 = 2 in the first row by subtracting 2 times the
second row from the first one:
The second elimination step in the backward stage is to eliminate the
elements a13 = 1 and a23 = -1. So we
add the last row to the second one and subtract it from the first row. This
yields
A = {{1, 2, -1}, {2, -1, 3}, {3, -3, -4}}
b = {6, -3, 1}
LinearSolve[A, b]
{1, 2, -1}
The same output is obtained with the reduced row echelon form:
GaussianElimination[A_?MatrixQ, b_?VectorQ] :=
Last /@ RowReduce[Flatten /@ Transpose[{A, b}]]
A = {{1, 2, -1}, {2, -1, 3}, {3, -3, -4}}
b = {6, -3, 1}
GaussianElimination[A, b]
{1, 2, -1}
This completes the Gauss-Jordan elimination procedure. Obviously, the original set of equations has been transformed to a diagonal form. Now expressing the set in algebraic form leads to
\[
x_1 =1, \quad x_2 =2 , \quad x_3 =-1 ,
\]
which is the required solution of the given system.
End of Example 3
■
Example 5:
Consider the system of linear equations
We first define B(1) to be the original augmented matrix.
Then, we denote by B(2) the result of
the first elementary row operation, which entails subtracting 3 times the
first row from the second in order to eliminate x1
from the second equation:
The naïve Gaussian elimination method includes two stages:
Forward elimination of unknowns: starting with the first equation, an unknown
is eliminated from each of the subsequent equations, so that the system is reduced to a triangular form whose last equation contains only one unknown.
Back Substitution: starting from the last equation, each of the unknowns
is found.
We demonstrate how the method works in the following example.
Example 6:
Consider four equations with four unknowns. First, we enter the data.
n = 4 (* number of equations *)
A = Table[1 + 10^i*j^2 + j*Floor[2.5^(i + j)], {i, 0, n - 1}, {j, 0, n - 1}]; A // MatrixForm
Remember that we write the vector b in column form, but Mathematica keeps its entries as a list. Next, we build the augmented matrix from A and b:
Aug = Transpose[Append[Transpose[A], b]]; Aug // MatrixForm
Now we perform forward elimination that consists of (n-1) steps. In each step k, the coefficient of the kth unknown will be zeroed from every subsequent equation that follows the kth row. For example, in step 2 (i.e., k = 2), the coefficient of x2 will be zeroed from rows 3..n. With each step that is conducted, a new matrix is generated until the coefficient matrix is transformed to an upper triangular matrix. The following procedure calculates the upper triangular matrix produced for each step k.
Print["Start", Aug // MatrixForm]; Print[" "]
For[k = 1, k <= n - 1, k++,
For[i = k + 1, i <= n, i++,
multiplier = N[Aug[[i, k]]/Aug[[k, k]]];
For[j = k, j <= n + 1, j++,
Aug[[i, j]] = Aug[[i, j]] - multiplier*Aug[[k, j]]]];
Print["Step=", k, " ", Aug // MatrixForm]; Print[" "]]
This is the end of the forward elimination stage. The new upper triangular coefficient matrix A1 and right-hand side array b1 permit solving for the solution vector using backward substitution, which begins with the last equation, as it has only one unknown:
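Here is a minimal backward-substitution sketch (our own, not code reproduced from the text), assuming Aug now holds the upper triangular augmented matrix produced by the loop above:
x = Table[0, {n}];
For[i = n, i >= 1, i--,
 x[[i]] = (Aug[[i, n + 1]] - Sum[Aug[[i, j]]*x[[j]], {j, i + 1, n}])/Aug[[i, i]]];
x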
Are there any pitfalls of the naïve Gauss elimination method?
Yes, there are two. Division by zero: it is possible for division by zero
to occur during the forward elimination steps whenever a zero pivot is encountered.
Round-off error: the naïve Gauss elimination method is prone to round-off errors. This
matters when there are large numbers of equations, as errors propagate. Also,
subtraction of nearly equal numbers may create large errors. ■
Algorithm: Homogeneous Equations
Most of the work required to solve an arbitrary system in reduced row echelon form
arises in solving the related system we get by setting all right-hand
sides to zero.
A linear system is called homogeneous if the right-hand side of each equation is zero. Otherwise, the system is inhomogeneous. Given an inhomogeneous system A x = b, we call A x = 0
its homogeneous version. Both systems have the same coefficient
matrix A, but their augmented matrices differ in the last column.
Observe that a homogeneous system A x = 0
always has at least one solution, namely the trivial one x = 0.
Suppose the coefficient matrix of a system has reduced row echelon form. If column j contains a pivot, we call it a pivot column, and
we call xj a pivot variable. Otherwise we say the column or variable is free.
Each column of the coefficient matrix, and each variable, is one or the
other: pivot or free.
We now give a simple five-step procedure
for finding all solutions to a homogeneous system in reduced row echelon form. After
listing the steps, we illustrate with some typical examples.
1. Identify the free columns (say there are k of them) and delete (or cross out) the rows of zeros.
2. Write down k blank generating vectors h₁, h₂, … , hk in 𝔽n×1. (Write them vertically to make Step 4 a bit easier.)
3. Set all free variables equal to 1 or 0: for each j = 1, 2, … , k, select hj, and set the jth free variable in it equal to 1. Set all other free variables in hj equal to zero.
4. Fill in the pivot variables: for each j = 1, 2, … , k, identify the jth free column in the matrix. Set the pivot variables of hj equal to the opposites of the entries in that column, in order of their appearance, top to bottom.
5. Express the complete solution set as all linear combinations of the hj’s:
\[
{\bf x}_h = {\bf H}\,{\bf s} = s_1 {\bf h}_1 + s_2 {\bf h}_2 + \cdots + s_k {\bf h}_k ,
\]
where H = [h₁, h₂, … , hk] is the matrix having the hi’s as columns, while allowing s = (s₁, s₂, … , sk)
to be any vector in 𝔽k.
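The five steps can be packaged into a short Mathematica function. This is a sketch of our own (the name homogeneousGenerators is not from the text), assuming the input matrix is already in reduced row echelon form:
homogeneousGenerators[R_?MatrixQ] := Module[{n, rows, pivots, free, h},
  n = Length[First[R]];
  rows = Select[R, AnyTrue[#, # =!= 0 &] &];  (* Step 1: drop the zero rows *)
  pivots = Position[#, _?(# =!= 0 &), {1}, 1][[1, 1]] & /@ rows;
  free = Complement[Range[n], pivots];        (* the free columns *)
  Transpose@Table[                            (* Steps 2-4: build each generator *)
    h = ConstantArray[0, n];
    h[[free[[j]]]] = 1;                       (* jth free variable = 1, others 0 *)
    h[[pivots]] = -rows[[All, free[[j]]]];    (* pivot variables = opposites *)
    h,
    {j, Length[free]}]]
Applied to the matrix B of the next example, it reproduces the matrix H found there.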
Example 7:
Let us consider the homogeneous system with coefficient matrix
\[
{\bf B} = \begin{bmatrix} 0&1&2&0&3 \\ 0&0&0&1&4 \\ 0&0&0&0&0 \end{bmatrix} .
\]
The matrix is in reduced row echelon form, so we can apply the algorithm.
Step 1: Identify the free variables and delete the rows of zeros.
Columns 2 and 4 have pivots, so x₁, x₃, and x₅ are free. Three free
variables make k = 3. There is one row of zeros, so we discard it.
Our matrix now looks like this:
\[
\begin{bmatrix} 0&1&2&0&3 \\ 0&0&0&1&4 \end{bmatrix} .
\]
Step 2: Set up k “blank” vertical generating vectors h₁, h₂, h₃ ∈ ℝ⁵. So we get three generators with blank entries, represented here by p’s (for pivot variables) and f’s (for free variables):
\[
{\bf h}_1 = \left[ \begin{array}{c} f \\ p \\ f \\ p \\ f \end{array} \right] , \qquad {\bf h}_2 = \left[ \begin{array}{c} f \\ p \\ f \\ p \\ f \end{array} \right] , \qquad {\bf h}_3 = \left[ \begin{array}{c} f \\ p \\ f \\ p \\ f \end{array} \right] .
\]
Step 3: Fill in the free variables. Taking each generating vector
h₁, h₂, h₃
in turn, set the jth free variable equal to 1 and all other free variables
equal to zero. That is, the first free variable (x₁) in h₁ is a 1, the
second free variable (x₃) in h₂ is a 1, the third free variable (x₅) in h₃
is a 1. All other free variables in all three generators are zeros. The
pivot variables x₂ and x₄ remain undetermined.
This fills in the entries for x₁, x₃, and x₅ in each generator:
\[
{\bf h}_1 = \left[ \begin{array}{c} 1 \\ p \\ 0 \\ p \\ 0 \end{array} \right] , \qquad {\bf h}_2 = \left[ \begin{array}{c} 0 \\ p \\ 1 \\ p \\ 0 \end{array} \right] , \qquad {\bf h}_3 = \left[ \begin{array}{c} 0 \\ p \\ 0 \\ p \\ 1 \end{array} \right] .
\]
Step 4: Set the pivot variables of each h₁, h₂, h₃ equal to the opposites of
the values in the jth free-variable column of the nonzero rows of the
matrix, in order of their appearance, top to bottom.
We start with j = 1: The first free variable is x₁, so we take the
entries in the two remaining rows of column 1. In this example both
those entries are zero. We then use their opposites (still zero) to set
the pivot variables x₂ and x₄ of h₁. We thus get
\[
{\bf h}_1 = \left[ \begin{array}{c} \phantom{-}1 \\ -0 \\ \phantom{-}0 \\ -0 \\ \phantom{-}0 \end{array} \right] .
\]
(The minus signs are irrelevant here since −0 = 0, but they do show
which entries we just filled in.) It’s easy to check that h1 now solves
the homogeneous system—just dot it with each row of the coefficient
matrix: We get zero every time.
Now set j = 2. The second free variable is x₃, so we take the entries
in the two rows of column three: 2 and 0 . We then put minus 2 and
minus 0 in the pivot positions of the second generator to obtain
\[
{\bf h}_2 = \left[ \begin{array}{c} \phantom{-}0 \\ -2 \\ \phantom{-}1 \\ -0 \\ \phantom{-}0 \end{array} \right]
\]
Finally we complete h₃. The third free variable is x₅, so we take the
entries in column 5, namely 3 and 4, flip their signs to get −3 and
−4, and put these in the pivot positions to have
\[
{\bf h}_3 = \left[ \begin{array}{c} 0 \\ -3 \\ 0 \\ -4 \\ 1 \end{array} \right] .
\]
Step 5: Express the homogeneous solution as all linear combinations of the generating vectors:
\[
{\bf x}_h = c_1 {\bf h}_1 + c_2 {\bf h}_2 + c_3 {\bf h}_3 = {\bf H}\,{\bf c}, \qquad c_1 , c_2 , c_3 \in \mathbb{R} .
\]
Here, the full homogeneous solution set is expressed via matrix
\[
{\bf H} = \begin{bmatrix} 1& 0& 0 \\ 0& -2& -3 \\ 0& 1& 0 \\ 0& 0& -4 \\ 0& 0& 1 \end{bmatrix} .
\]
We check with Mathematica:
B = {{0, 1, 2, 0, 3}, {0, 0, 0, 1, 4}, {0, 0, 0, 0, 0}};
H = {{1, 0, 0}, {0, -2, -3}, {0, 1, 0}, {0, 0, -4}, {0, 0, 1}};
B . H
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
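The built-in NullSpace command provides yet another check: NullSpace[B] returns a basis of the same solution set, although its basis vectors may appear in a different order or scaling than h₁, h₂, h₃.
NullSpace[B]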
■
End of Example 7
Algorithm: Inhomogeneous Equations
Finding a particular solution to a system in reduced row echelon form is far simpler than finding the full homogeneous solution. Two quick steps are all it takes (a short Mathematica sketch follows the list):
To produce a particular solution xp of a system in reduced row echelon form, write
down a blank vector xp ∈ 𝔽n×1, and then fill it in by
setting all free variables equal to zero;
setting each pivot variable equal to the rightmost entry in the
row of that pivot.
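These two steps are easy to automate. Here is a sketch of our own (the helper name particularSolution is not from the text), assuming the augmented matrix is already in reduced row echelon form and its last column contains no pivot:
particularSolution[aug_?MatrixQ] := Module[{a, b, n, xp, c},
  a = Most /@ aug; b = Last /@ aug;   (* split off the last column *)
  n = Length[First[a]];
  xp = ConstantArray[0, n];           (* step 1: all free variables = 0 *)
  Do[
   c = Position[a[[r]], _?(# =!= 0 &), {1}, 1];  (* pivot column of row r *)
   If[c =!= {}, xp[[c[[1, 1]]]] = b[[r]]],       (* step 2: pivot variable = rhs *)
   {r, Length[a]}];
  xp]
For the augmented matrix of the next example, particularSolution[{{1, 2, 0, 0, 3}, {0, 0, 0, 1, 4}}] returns {3, 0, 0, 4}.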
Theorem 3:
A pivot in the last column of the RREF of the augmented matrix means the system
has no solution.
We demonstrate with some examples.
Example 8:
Let us find a particular solution of the system with this
augmented matrix:
\[
{\bf B} = \left[ \begin{array}{cccc|c} 1&2&0&0&3 \\ 0&0&0&1&4 \end{array} \right] .
\]
The matrix B does have reduced row echelon form (the procedure doesn’t apply otherwise!), and there are four variables, so we write down a blank vector
xp ∈ 𝔽4×1:
\[
{\bf x}_p^{\mathrm T} = \left[ \underline{ }, \ \underline{ }, \ \underline{ }, \ \underline{ } \right] .
\]
The free variables are x2 and x3 , and the first instruction has us set
those equal to zero:
\[
{\bf x}_p^{\mathrm T} = \left[ \underline{ }, \ 0, \ 0, \ \underline{ } \right] .
\]
We fill in the pivot slots using the last column of the augmented matrix.
We find the x₁ pivot in row 1, where the last entry is 3, so we set
x₁ = 3. We find the x₄ pivot in row 2, where the last entry is 4, so
we set x₄ = 4 , and that’s it—we have a particular solution:
\[
{\bf x}_p^{\mathrm T} = \left[ 3, \ 0, \ 0, \ 4 \right] .
\]
We check the answer with Mathematica:
B = {{1, 2, 0, 0}, {0, 0, 0, 1}};
B . {3, 0, 0, 4}
{3, 4}
■
End of Example 8
Example 9:
Let us consider the system with augmented matrix
\[
{\bf B} = \left[ \begin{array}{cccc|c} 1&0&0&2&3 \\ 0&1&0&4&5 \\ 0&0&1&6&7 \end{array} \right] .
\]
The only free variable is x₄, so we put a zero in that slot. The right-hand
sides of the pivot rows are 3, 5, and 7, so we set the corresponding pivot variables equal to those values, and we’re done:
\[
{\bf x}_p^{\mathrm T} = \left[ 3,\ 5, \ 7,\ 0 \right] .
\]
Again, we check with Mathematica:
B = {{1, 0, 0, 2}, {0, 1, 0, 4}, {0, 0, 1, 6}};
B . {3, 5, 7, 0}
{3, 5, 7}
■
End of Example 9
Determine which of the following matrices are in reduced row echelon form and
which are only in row echelon form.
\[
(a)\ \begin{bmatrix} 1&0&0&0 \\ 0&1&0&3 \\0&0&1&6 \end{bmatrix}, \qquad
(b) \ \begin{bmatrix} 1&0&1&2&0&4 \\ 0&1&-2&2&0&3 \\ 0&0&0&0&1&5
\end{bmatrix}, \qquad (c) \ \begin{bmatrix} 1&0&1&0 \\ 0&1&1&0 \\ 0&0&0&1
\end{bmatrix},
\]
\[
(d)\ \begin{bmatrix} 1&0&0&0 \\ 0&1&1&0 \\ 0&0&0&0 \\ 0&0&0&1 \end{bmatrix},
\qquad (e) \ \begin{bmatrix} 1&2&0&1&4 \\ 0&2&0&3&3 \\ 0&0&0&2&2 \\
0&0&0&0&1 \end{bmatrix} , \qquad (f) \ \begin{bmatrix} 1&1&0&0 \\ 0&1&1&0 \\
0&0&1&1 \end{bmatrix} .
\]
Consider the two 3×4 matrices below:
\[
{\bf B} = \begin{bmatrix} 1&3&-2&2 \\ -1&-2&-1&-1 \\ -1&-5&8&-3 \end{bmatrix},
\quad {\bf C} = \begin{bmatrix} 1&2&1&2 \\ 1&1&4&0 \\ -1&-1&-4&1 \end{bmatrix} .
\]
Row-reduce each matrix and determine that the reduced row-echelon forms of
B and C are identical. From this, argue that B and C
are row-equivalent.
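One way to carry out the check in Mathematica (C is a protected built-in symbol, so we call the second matrix C0):
B = {{1, 3, -2, 2}, {-1, -2, -1, -1}, {-1, -5, 8, -3}};
C0 = {{1, 2, 1, 2}, {1, 1, 4, 0}, {-1, -1, -4, 1}};
RowReduce[B] == RowReduce[C0]
True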
Perform Gauss--Jordan elimination on the following systems of equations
without partial pivoting (that is, without swapping rows).
where x1, x2, and x3
denote prices in dollars of goods 1, 2, and 3, respectively. Find the prices
that solve the system using the Gauss-Jordan elimination method.
If the matrix A represents the augmented matrix of a system of equations of the
form \( ax_{1}+bx_{2}+cx_{3}=d, \) find the values of \( x_{1}, x_{2}, \) and \( x_{3} \).