The numbers \( a_{ij} \) are known as the coefficients of the system. The \( m \times n \) matrix
\( {\bf A} = [\,a_{ij}\,] , \) whose \( (i,\, j) \) entry is the
coefficient \( a_{ij} \) of the system of linear equations, is called the coefficient matrix.
Let \( {\bf x} =\left[ x_1 , x_2 , \ldots , x_n
\right]^{\mathrm T} \) be the column vector of unknowns, where "T" stands for transposition. Then the product
\( {\bf A}\,{\bf x} \) of the \( m \times n \) coefficient matrix A and the
\( n \times 1 \) column vector x is an \( m \times 1 \) matrix (column vector)
whose entries are the left-hand sides of our system of linear equations.
If we define another column vector \( {\bf b} = \left[ b_1 , b_2 , \ldots , b_m \right]^{\mathrm T} \)
whose components are the right-hand sides \( b_{i} ,\) the given system is equivalent to the vector equation
\[
{\bf A}\,{\bf x} = {\bf b} .
\]
Sometimes a system of linear equations is known as a set of simultaneous equations; such terminology emphasizes that a solution is an assignment of values to each of the
\( n \) unknowns such that each and every equation holds with this assignment. It is also referred to simply as a linear system.
Mathematica has a built-in command LinearSolve[A,
b] that solves a linear system of equations, given a
coefficient matrix A and a vector of constants b. We need to learn some more theory
before we can fully understand this command, but we can begin to explore its use.
For now, consider this method experimental and do not let it replace row-reducing augmented matrices:
\[
{\bf A}\, {\bf x} = {\bf b} ,
\]
where A is a given matrix and b is a specified
column vector.
The command LinearSolve[A, b] will provide the vector x of solutions to the linear system
\( {\bf A}\,{\bf x} = {\bf b} \) of equations with
coefficient matrix A and the vector of constants
b.
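As a quick illustration, consider a small invertible matrix (chosen here merely for demonstration):
A0 = {{1, 1}, {1, -1}}; b0 = {2, 0};
LinearSolve[A0, b0]
{1, 1}
Indeed, x = 1, y = 1 satisfies x + y = 2 and x - y = 0.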
The system of algebraic equations Ax = b may have a unique
solution x, may have infinitely many solutions, or may have no
solution at all.
Example 1:
Let us take a look at the system of algebraic equations
\[
x + 2\,y = 3, \qquad 2\,x + 4\,y = 6 .
\tag{1.1}
\]
The two equations define the same line in the x-y plane, meaning that we have infinitely many solutions x = 3 − 2y.
The corresponding singular (its determinant is zero) matrix A and nonhomogeneous term b are
\[
{\bf A} = \begin{bmatrix} 1&2 \\ 2&4 \end{bmatrix} , \qquad {\bf b} = \begin{bmatrix} 3 \\ 6 \end{bmatrix} .
\]
Now we apply the LinearSolve command to this system:
A = {{1, 2}, {2, 4}}; b = {3, 6};
x1 = LinearSolve[A, b]
Out[8]= {3, 0}
So we see that the vector x1 = [ 3, 0 ]T gives a solution to
system (1.1) of equations. Is this solution unique? To answer this
question, we need to find the kernel (or null space) of matrix A:
NullSpace[A]
Out[10]= {{-2, 1}}
Hence, the null space is spanned by the vector [ -2, 1 ]T, and the general solution
to the given system of linear equations Ax = b is
\[
x = 3 - 2\,t , \quad y = t ,
\]
where t is an arbitrary real number. We can also find the general
solution with the following commands:
A = {{1, 2}, {2, 4}}
b = {3, 6}
Solve[A.{x, y} == b]
{{y -> 3/2 - x/2}}
However, if we take another vector b = [−3, 6]T, the new system of equations
\[
x + 2\,y = -3, \qquad 2\,x + 4\,y = 6
\tag{1.2}
\]
defines two parallel lines in the x-y plane that do not intersect, meaning there is no solution. Therefore, system (1.2) has no solution. We check whether Mathematica is capable of reaching this conclusion.
b2 = {-3, 6}
x2 = LinearSolve[A, b2]
Out[10]= LinearSolve::nosol: Linear equation encountered that has no solution.
Therefore, the system has no solution for this vector b2: the matrix rank
MatrixRank[A]
Out[12]= 1
is 1, while the rank of the augmented matrix [ A | b2 ] equals 2; since the two ranks differ, the system is inconsistent (see Theorem 5 below).
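We can confirm this by computing the rank of the augmented matrix directly (Join with level specification 2 appends b2 as an extra column):
MatrixRank[Join[A, Transpose[{b2}], 2]]
2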
■
Example 2:
Let us consider the singular matrix \( {\bf A} = \begin{bmatrix} 2&\phantom{-}1&2 \\ 0&\phantom{-}1&0 \\ 1&-2& 1 \end{bmatrix} .\)
We want to determine when the linear equation \( {\bf A} \, {\bf x} = {\bf b} \) has a solution; namely, we need to find conditions on the vector b so that the equation \( {\bf A} \, {\bf x} = {\bf b} \) has a solution. We use Mathematica to answer this question.
The command MatrixRank[A] provides the rank of a matrix A, and ConjugateTranspose[A] gives its adjoint; we will need the latter command when matrix A has complex entries. Since our matrix is real, Transpose suffices:
A = {{2, 1, 2}, {0, 1, 0}, {1, -2, 1}};
x = NullSpace[Transpose[A]]
Out[6]= {{-1, 5, 2}}
So we find the null space of the transposed matrix to be spanned by the vector
[-1, 5, 2 ]T. The system of linear equations Ax = b has a solution
(in this case, we say that the given system is consistent) if and only if
the vector b is orthogonal to every vector from the kernel of the
transpose (in general, adjoint) matrix. In other words, its dot product with
each such vector must be zero. So we ask Mathematica to do this job:
b = {b1, b2, b3};
Dot[x, b] == 0
Out[8]= {-b1 + 5 b2 + 2 b3} == 0
The shorthand x.b == 0 gives the same result; thus the system is consistent exactly when b1 = 5 b2 + 2 b3.
Of course, we can achieve the same result by solving the system of linear
equations Ax = b directly using the Gaussian elimination method.
Mathematica does not provide this algorithmically fastest way to solve a linear algebraic equation; instead it uses the Gauss--Jordan elimination procedure, which is more computationally demanding and is rarely used in practice for large systems. Mathematica offers a special command for it: RowReduce[A].
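For instance, row-reducing the augmented matrix with a right-hand side chosen to satisfy the condition b1 = 5 b2 + 2 b3 (here b = {7, 1, 1}) yields a consistent reduced form, while a right-hand side violating it (here b = {1, 1, 1}) produces a pivot in the last column:
RowReduce[{{2, 1, 2, 7}, {0, 1, 0, 1}, {1, -2, 1, 1}}]
{{1, 0, 1, 3}, {0, 1, 0, 1}, {0, 0, 0, 0}}
RowReduce[{{2, 1, 2, 1}, {0, 1, 0, 1}, {1, -2, 1, 1}}]
{{1, 0, 1, 0}, {0, 1, 0, 0}, {0, 0, 0, 1}}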
■
A linear system of algebraic equations is said to be consistent if it has at least one solution and inconsistent if it has no solution.
Thus, a consistent linear system has either one solution or infinitely many solutions---there are no other possibilities.
Theorem 1: Let A be an m × n matrix.
(Overdetermined case) If m > n, then the linear system \( {\bf A}\,{\bf x} = {\bf b} \)
is inconsistent for at least one vector b in \( \mathbb{R}^m . \)
(Underdetermined case) If m < n, then for each vector b in \( \mathbb{R}^m \)
the linear system \( {\bf A}\,{\bf x} = {\bf b} \) is either inconsistent or has infinitely many solutions.
Theorem 2: If A is an \( m \times n \) matrix with
\( m < n, \) then \( {\bf A}\,{\bf x} = {\bf 0} \) has
infinitely many solutions.
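To illustrate Theorem 2, take any wide matrix (the 2×3 matrix below is picked arbitrarily); its null space is necessarily nontrivial:
NullSpace[{{1, 2, 3}, {4, 5, 6}}]
{{1, -2, 1}}
so every scalar multiple of [ 1, -2, 1 ]T solves the homogeneous system.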
Theorem 3: A linear system of algebraic equations \( {\bf A}\,{\bf x} = {\bf b} \) is consistent if and only if the vector b is orthogonal
to every solution y of the adjoint homogeneous equation
A*y = 0, where A* is the adjoint matrix to
A. If A is real, then the adjoint matrix is just the transpose, and the statement is equivalent to
\( {\bf A}^{\mathrm T} \,{\bf y} = {\bf 0} \) or \( {\bf y}^{\mathrm T} {\bf A} = {\bf 0} . \)
Theorem 3 provides an algorithm for determining whether a system of equations is consistent.
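Theorem 3 translates into a short Mathematica test; the helper name consistentQ below is our own, not a built-in:
consistentQ[A_, b_] := AllTrue[NullSpace[ConjugateTranspose[A]], #.b == 0 &]
consistentQ[{{1, 2}, {2, 4}}, {3, 6}]   (* True *)
consistentQ[{{1, 2}, {2, 4}}, {-3, 6}]  (* False *)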
The kernel
of a matrix A (considered as a linear map), also known as
the null space, is the set of
vectors v such that Av = 0 (that is, the vectors that A
maps to zero).
The cokernel
of a matrix A is the left null space of A,
which consists of all column vectors y such
that A*y = 0,
or yTA = 0T when A is real, where T
denotes the transpose of a matrix. The left null space of A is
the same as the kernel of A* (or AT
if A is real).
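For a real matrix, the cokernel is therefore computed with a single command; reusing the matrix from Example 2:
NullSpace[Transpose[{{2, 1, 2}, {0, 1, 0}, {1, -2, 1}}]]
{{-1, 5, 2}}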
Example 3:
We consider a singular 3×3 matrix and a 3-column vector b.
Theorem 4: The general solution x of a consistent linear system of algebraic equations
\( {\bf A}\,{\bf x} = {\bf b} \) can be expressed as the sum of a particular solution
x0 of that system and the general solution of the corresponding homogeneous system:
\[
{\bf x} = {\bf x}_0 + c_1 {\bf v}_1 + \cdots + c_k {\bf v}_k .
\]
Here \( \{ {\bf v}_1 , \ldots , {\bf v}_k \} \) is a basis for the kernel
(= null space) of A and \( c_1 , \ldots , c_k \) are arbitrary constants.
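A minimal sketch of Theorem 4 with the matrix from Example 1: LinearSolve supplies a particular solution, NullSpace a kernel basis, and their combination is the general solution.
A = {{1, 2}, {2, 4}}; b = {3, 6};
x0 = LinearSolve[A, b]    (* {3, 0} *)
v = First[NullSpace[A]]   (* {-2, 1} *)
x0 + t*v                  (* {3 - 2 t, t} *)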
Theorem 5: Let A be an \( m \times n \) matrix. A linear system of equations
\( {\bf A}\,{\bf x} = {\bf b} \) is consistent if and only if the rank of the coefficient matrix A is the same as the rank of the augmented matrix \( \left[ {\bf A}\,\vert \, {\bf b} \right] . \) ▣
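Theorem 5 likewise yields a one-line consistency test (the function name is ours, chosen for illustration):
consistentByRankQ[A_, b_] := MatrixRank[A] == MatrixRank[Join[A, Transpose[{b}], 2]]
consistentByRankQ[{{1, 2}, {2, 4}}, {3, 6}]   (* True *)
consistentByRankQ[{{1, 2}, {2, 4}}, {-3, 6}]  (* False *)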
The actual use of the term augmented matrix appears to have been introduced by the American mathematician Maxime Bôcher (1867--1918) in 1907.
as you can easily verify by substituting these values into the equations. Every equation is satisfied for these values of \( x_1 , \ x_2 , \ x_3 , \ x_4 , \ x_5 . \)
However, this is not the only solution to this system of equations; there are many others.
has no solution. There are no numbers we can assign to the unknowns
\( x_1 , \ x_2 , \ x_3 , \ x_4 , \ x_5 \) so that all four equations are satisfied.
We can solve these two systems simultaneously by considering one augmented matrix. An efficient way to do this is to
form the partitioned matrix \( \left[ {\bf A} \, \vert \, {\bf b}_1 \, \vert \, {\bf b}_2 \right] \)
in which the coefficient matrix A is "augmented" by both vectors. Then this augmented matrix is reduced using
Gauss--Jordan elimination to
The sixth column gives the required solution for the former system of equations. The last row in reduced form shows
that the latter system of equations has no solution.
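To see the mechanics on a small case (we reuse the matrices of Example 1 to keep the illustration self-contained):
A = {{1, 2}, {2, 4}}; b1 = {3, 6}; b2 = {-3, 6};
RowReduce[Join[A, Transpose[{b1}], Transpose[{b2}], 2]]
{{1, 2, 3, 0}, {0, 0, 0, 1}}
The third column shows that the first system is consistent (x + 2y = 3), while the pivot in the fourth column shows that the second system has no solution.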
■
Equations in blocks
Suppose that
we have the linear system \( {\bf M}\,{\bf z} = {\bf r} , \) in which the given coefficient matrix M is partitioned
into 2 by 2 blocks: \( {\bf M} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} , \)
where A and D are square invertible matrices.
Suppose that the given right-hand side vector is broken accordingly as r = [ p, q ]T, while the column vector z = [ x, y ]T of unknowns is sought. The system is
equivalent to the pair of vector equations
\[
{\bf A}\,{\bf x} + {\bf B}\,{\bf y} = {\bf p} , \qquad {\bf C}\,{\bf x} + {\bf D}\,{\bf y} = {\bf q} .
\]
Solving the second equation for \( {\bf y} = {\bf D}^{-1} \left( {\bf q} - {\bf C}\,{\bf x} \right) \) and substituting into the first yields
\[
\left( {\bf M} / {\bf D} \right) {\bf x} = {\bf p} - {\bf B}\,{\bf D}^{-1} {\bf q} ,
\]
where \( {\bf M} / {\bf D} = {\bf A} - {\bf B}\,{\bf D}^{-1} {\bf C} \) is the Schur complement.
Similarly, if A is nonsingular, eliminating x and solving for y yields
\[
\left( {\bf M} / {\bf A} \right) {\bf y} = {\bf q} - {\bf C}\,{\bf A}^{-1} {\bf p} .
\]
In the scalar case, Mathematica confirms these formulas (C and D are protected symbols in Mathematica, so the code uses CC and DD):
Solve[{A x + B y == p, CC x + DD y == q}, {x, y}]
{{x -> -((DD p - B q)/(B CC - A DD)),
y -> -((-CC p + A q)/(B CC - A DD))}}
Here \( {\bf M} / {\bf A} = {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} \) is also called a Schur complement. An important use of Schur complements is the partitioned solution of linear systems; in fact, this
is the most important application in the finite element method (FEM for short), in connection with superelement analysis.
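A numerical sketch of the partitioned solve; the 4×4 matrix and its 2×2 blocks below are chosen only for illustration:
M = {{4, 1, 0, 1}, {1, 3, 1, 0}, {0, 1, 2, 1}, {1, 0, 1, 5}};
AA = M[[1 ;; 2, 1 ;; 2]]; BB = M[[1 ;; 2, 3 ;; 4]];
CC = M[[3 ;; 4, 1 ;; 2]]; DD = M[[3 ;; 4, 3 ;; 4]];
r = {1, 2, 3, 4}; p = r[[1 ;; 2]]; q = r[[3 ;; 4]];
x = LinearSolve[AA - BB.Inverse[DD].CC, p - BB.Inverse[DD].q];  (* (M/D) x = p - B D^-1 q *)
y = LinearSolve[DD, q - CC.x];                                  (* recover y from C x + D y = q *)
M.Join[x, y] == r
True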
Example 6:
Let us consider a linear algebraic equation Mz = r with two input vectors r and b. The partition of the given matrix M into submatrices A, B, C, and D can be chosen in four basically different ways. Of course, there are many other choices because the order of equations does not matter, but we stick with these four choices to demonstrate the method (due to Issai Schur). We consider each case separately.
Three-dimensional matrix A
We consider the case when matrix A is a 3×3 matrix.
First, we check that our square diagonal blocks are nonsingular:
A3 = {{2, 5, -2}, {3, 4, -2}, {1, 3, 1}};
Det[A3]
-15
D3 = {{-2, 1}, {3,1}};
Det[D3]
-5
Therefore, it is possible to split the given 5×5 system of equations into two subsystems of sizes 3×3 and 2×2.
The corresponding Schur complements are
Since the vectors (6.2.3), generated by the components p and q of the vector r = [p, q]T, are not orthogonal (their dot products are not zero) to the vectors (6.2.5), we claim that the system Mz = r is inconsistent. On the other hand, the vectors (6.2.4), generated by the components p = [1, 7]T and q = [3, 4, 5]T of the vector b = [p, q]T, are orthogonal (their dot products are zero) to the vectors (6.2.5), and so we claim that the system Mz = b is consistent.
One-dimensional matrix A
Finally, let us consider the case when matrix A is chosen as a 1×1 matrix:
So we conclude that such a separation into 1-dimensional and 4-dimensional systems does not lead to equivalent systems of algebraic equations.
■
R has a dedicated command to solve linear systems of equations
\( {\bf A}\,{\bf x} = {\bf b} , \quad \det{\bf A} \ne 0 . \) This means that you can find a solution only for a square nonsingular matrix A.
Since the code above requires a package, it will not run directly here (copy it into your own R session to run). The following is the output you should expect:
Next, we utilize another package, plotly, for creating interactive web-based graphs via open-source JavaScript:
rgl is another package that is useful for 3D plotting. It is interactive;
however, Mac users must download XQuartz to use the rgl package: https://www.xquartz.org
■
Example:
We plot a torus with the following commands: