Theorem (Kronecker--Capelli):
A system of linear algebraic equations A x = b has a solution if and only if the matrix A has the same
rank as the augmented matrix \( \left[ {\bf A} \,|\,{\bf b} \right] . \) ■
It is obvious that the rank of the augmented matrix \( \left[ {\bf A} \,|\,{\bf b} \right] \)
is greater than or equal to the rank of the matrix A. They are equal if and only if b is a linear
combination of the columns of the matrix A. The last condition is equivalent to the
statement that there exists a vector x such that A x = b. ■
Example 1:
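As a minimal illustration of the theorem (the matrix and the two right-hand sides below are chosen just for this sketch), one can compare the two ranks in Mathematica:
A = {{1, 2}, {2, 4}};
b1 = {3, 6};   (* three times the first column of A, so it lies in the column space *)
b2 = {3, 7};
MatrixRank[A] == MatrixRank[Join[A, Transpose[{b1}], 2]]
Out[1]= True
MatrixRank[A] == MatrixRank[Join[A, Transpose[{b2}], 2]]
Out[2]= False
Since the ranks agree for b1, the system A x = b1 is consistent; for b2 the augmented matrix has larger rank and the system has no solution.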
■
End of Example 1
Theorem (Fredholm matrix Theorem):
A system of linear algebraic equations A x = b has a solution if and only if b is orthogonal to
every solution of the homogeneous equation z A = 0, that is, their inner product is zero:
\( {\bf z} \cdot {\bf b} = 0 . \) In other words, the input vector b must be
orthogonal to every solution of the homogeneous adjoint equation \( {\bf A}^{\ast} {\bf y} = {\bf 0} , \)
which means that \( {\bf y} \perp {\bf b} \quad \Longleftrightarrow \quad {\bf y} \cdot {\bf b} =0 . \)
■
Sufficiency Proof: Let r be the rank of the matrix A. We may assume, without loss of generality,
that the first r rows of A are linearly independent. Then the first r rows
of the augmented matrix \( \left[ {\bf A} \,|\,{\bf b} \right] \) are linearly independent as well. Every row of A with index k > r is
a linear combination of the first r rows, so there exists a nonzero vector z (its entries are the
coefficients of that combination, with -1 in the k-th slot) such that
z A = 0. By the hypothesis of the theorem, \( {\bf z} \cdot {\bf b} = 0 , \) so the k-th entry of b is
the same linear combination of the first r entries of b. This
implies that the k-th row of the augmented matrix \( \left[ {\bf A} \,|\,{\bf b} \right] \)
is a linear combination of its first r rows. Hence the ranks of A and of the augmented matrix
are the same, and by the Kronecker--Capelli theorem the linear system A x = b is consistent.
Necessity Proof: Suppose that the linear system A x = b has a solution x. Then for every
\( {\bf z} \in \mathbb{C}^m \) we have the equality \( {\bf z}\,{\bf A}\,{\bf x} = {\bf z} \cdot {\bf b} . \)
In particular, for every solution z of the homogeneous equation z A = 0 the left-hand side vanishes,
so \( {\bf z} \cdot {\bf b} = 0 . \) ■
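As a quick numerical check of the Fredholm condition (the matrix and vector are chosen only for this sketch), one can compute the left null space of A by applying NullSpace to the transpose and then test the inner products with b:
A = {{1, 2}, {2, 4}};
b = {3, 6};
z = NullSpace[Transpose[A]]   (* all solutions of z.A == 0 *)
Out[1]= {{-2, 1}}
z . b   (* every inner product vanishes, so A.x == b is solvable *)
Out[2]= {0}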
Example 2:
The span of the empty set \( \varnothing \) consists of a single element, the zero vector 0.
Therefore, \( \varnothing \) is linearly independent and it is a basis for the trivial vector space
consisting of the zero vector alone. The dimension of this space is zero.
■
End of Example 2
Example: In \( \mathbb{R}^n , \) the vectors
\( e_1 = [1,0,0,\ldots , 0] , \quad e_2 =[0,1,0,\ldots , 0], \quad \ldots , \quad e_n =[0,0,\ldots , 0,1] \)
form a basis for the n-dimensional real space; it is called the standard basis. The dimension of \( \mathbb{R}^n \) is n.
Example: Let us consider the set of all real \( m \times n \)
matrices, and let \( {\bf M}_{i,j} \) denote the matrix whose only nonzero entry is a 1 in
the i-th row and j-th column. Then the set \( \left\{ {\bf M}_{i,j} \ : \ 1 \le i \le m , \ 1 \le j \le n \right\} \)
is a basis for the vector space of all real \( m \times n \) matrices. Its dimension is mn.
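A short Mathematica sketch of this basis (the sizes m = 2 and n = 3 are chosen just for illustration) builds each matrix \( {\bf M}_{i,j} \) with SparseArray and confirms that there are mn of them:
{m, n} = {2, 3};
basis = Flatten[Table[Normal @ SparseArray[{{i, j} -> 1}, {m, n}], {i, m}, {j, n}], 1];
Length[basis]   (* mn = 6 basis matrices *)
Out[1]= 6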
Example: The set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n \right\} \)
forms a basis for the space of all polynomials of degree at most n. This space has dimension n+1.
■
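With respect to this monomial basis, a polynomial is represented by its vector of coefficients; a short sketch (with n = 3 chosen for illustration) using Mathematica's CoefficientList:
p = 5 - 2 x + 7 x^3;
CoefficientList[p, x]   (* coordinates in the basis {1, x, x^2, x^3} *)
Out[1]= {5, -2, 0, 7}
The length of this list is n + 1 = 4, in agreement with the dimension count.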
A vector space is called finite-dimensional if it has a basis consisting of a finite
number of elements. The number of elements, which is the same for every basis of V, is called
the dimension of V and is denoted by dim(V). A vector space that is not finite-dimensional is called
infinite-dimensional.
The next example demonstrates how Mathematica can determine a basis, or a set of linearly independent
vectors, from a given set. Note that a basis is not unique: even changing the order of the vectors may lead
the software to return a different set of linearly independent vectors.
Example: Suppose we are given four linearly dependent vectors:
MatrixRank[m =
{{1, 2, 0, -3, 1, 0},
{1, 2, 2, -3, 1, 2},
{1, 2, 1, -3, 1, 1},
{3, 6, 1, -9, 4, 3}}]
Out[1]= 3
Then the following script determines a subset of linearly independent vectors (rows of m):
m[[ Flatten[ Position[#, Except[0, _?NumericQ], 1, 1]& /@
Last @ QRDecomposition @ Transpose @ m ] ]]
The script selects the pivot rows revealed by the QR decomposition of the transposed matrix; for m above it returns the first, second, and fourth rows:
{{1, 2, 0, -3, 1, 0}, {1, 2, 2, -3, 1, 2}, {3, 6, 1, -9, 4, 3}}
One can also use the standard Mathematica command RowReduce: the pivot columns of the row-reduced transpose identify a linearly independent subset of the rows, as sketched below.
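A sketch of this alternative approach: row-reduce the transposed matrix, read off the pivot positions, and select the corresponding rows of m:
pivots = Flatten[Position[#, 1, 1, 1] & /@ DeleteCases[RowReduce[Transpose[m]], {0 ..}]]
Out[1]= {1, 2, 4}
m[[pivots]]
Out[2]= {{1, 2, 0, -3, 1, 0}, {1, 2, 2, -3, 1, 2}, {3, 6, 1, -9, 4, 3}}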
■
Fredholm's finite-dimensional alternative
First, we need to recall some definitions.
Let f : X ⇾ Y be a linear mapping of a vector space X into another vector space Y, both over the same field 𝔽.
Its coimage and cokernel are defined as
\[
\mbox{coimage} \left( f \right) = \mbox{coim} \left( f \right) = X / \mbox{ker}(f)
\]
and
\[
\mbox{coker} \left( f \right) = Y / \mbox{im}\left( f \right) ,
\]
respectively.
There exists a chain of linear mappings that "partitions" f:
\[
\mbox{ker} \left( f\right) \stackrel{i}{\longrightarrow} X \stackrel{\sigma}{\longrightarrow} \mbox{coimage} \left( f \right) \stackrel{h}{\longrightarrow} \mbox{im} \left( f \right) \stackrel{j}{\longrightarrow} Y \stackrel{g}{\longrightarrow} \mbox{coker} \left( f \right) ,
\]
where all mappings except h are canonical inclusions and factor mappings, while h
is the only mapping that completes the corresponding commutative diagram.
It is unique because ker(σ) = ker(f), and it is an isomorphism because the inverse mapping also exists and is defined uniquely.
The point of uniting these spaces into pairs (with and without the prefix "co")
is explained in the theory of duality.
For a linear mapping f : X ⇾ Y as above, the difference of dimensions
\[
\mbox{ind}\left( f \right) = \dim\,\mbox{coker} \left( f \right) - \dim\,\mbox{ker} \left( f \right)
\]
is called the index of the operator/transformation f.
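For matrices these dimensions are easy to compute; as a quick check (the matrix below is chosen just for illustration), using NullSpace and MatrixRank:
A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};   (* square, rank 2 *)
dimKer = Length[NullSpace[A]]            (* dim ker = n - rank *)
Out[1]= 1
dimCoker = 3 - MatrixRank[A]             (* dim coker = m - rank *)
Out[2]= 1
dimCoker - dimKer                        (* ind(A) = 0, as expected for a square matrix *)
Out[3]= 0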
In particular, if dim X = dim Y = n < ∞, then for any linear operator f on X we have
ind f = 0, because dim ker f = n − rank f = dim coker f. This implies the so-called Fredholm alternative:
Theorem (Fredholm alternative):
For a linear operator g with ind g = 0,
either the equation g(x) = y is solvable for every y, and then the equation g(x) = 0
has only the zero solution; or
the equation g(x) = y is not solvable for every y, and then the homogeneous equation
g(x) = 0 has nonzero solutions.
More precisely, if ind g = 0, then the dimension of the space of solutions of the
homogeneous equation equals the codimension of the space of right-hand sides
for which the inhomogeneous equation is solvable. ■
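As a numerical illustration of this dimension count (the matrix is chosen just for the sketch): for a singular square matrix, a right-hand side is admissible exactly when it is orthogonal to the left null vectors, and the homogeneous solution space has the same dimension as the codimension of the admissible right-hand sides:
g = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
z = First @ NullSpace[Transpose[g]]   (* left null vector: z.g == 0 *)
Out[1]= {1, -2, 1}
z . {1, 2, 3}   (* zero: g(x) == {1, 2, 3} is solvable *)
Out[2]= 0
z . {1, 0, 0}   (* nonzero: g(x) == {1, 0, 0} has no solution *)
Out[3]= 1
Length[NullSpace[g]] == 3 - MatrixRank[g]   (* dim of homogeneous solutions == codimension of admissible b *)
Out[4]= True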