A basis β for a vector space V is a linearly independent subset of V that generates,
or spans, V. If β is a basis for V, we also say that the elements of β form a basis for
V. This means that every vector in V is a finite linear combination of elements of the basis.
Unimodular Matrices
A unimodular matrix is a square integer matrix M ∈ ℤn×n with determinant +1 or −1.
Every equation M x = b, where M and b both have integer components and M is unimodular, has an integer solution. Under matrix multiplication, the n × n unimodular matrices form a group, called the n × n general linear group over ℤ and denoted GLn ( ℤ ).
In particular, the identity matrix is unimodular. 
Other examples include Pascal matrices and permutation matrices.
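The integrality claim above follows because the inverse of a unimodular matrix, M⁻¹ = adj(M)/det(M), has integer entries when det(M) = ±1. A small Python sketch (the particular matrix and right-hand side are illustrative choices, not from the text):

```python
from fractions import Fraction

# A 2x2 unimodular matrix (integer entries, determinant +1) -- illustrative.
M = [[2, 1],
     [1, 1]]
b = [3, 5]

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # determinant, here +1

# Inverse via the adjugate; with |det| = 1 every entry stays an integer.
inv = [[Fraction(M[1][1], det), Fraction(-M[0][1], det)],
       [Fraction(-M[1][0], det), Fraction(M[0][0], det)]]

# x = M^{-1} b is the (integer) solution of M x = b.
x = [inv[0][0] * b[0] + inv[0][1] * b[1],
     inv[1][0] * b[0] + inv[1][1] * b[1]]
```

Here x works out to the integer vector [−2, 7], and M x reproduces b exactly.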
There are infinitely many 3×3 unimodular matrices containing no 0s or ±1s. One parametric family is
\[
\begin{bmatrix} 8 n \left( n + 1 \right) & 2n +1 & 4n \\ 4n \left( n + 1 \right) & n+1 & 2n +1 \\ 4 n^2 + 4n + 1 & n & 2n -1 \end{bmatrix} , \qquad n \in \mathbb{Z}.
\]
A = {{8 n (n + 1), 2 n + 1, 4 n},
     {4 n (n + 1), n + 1, 2 n + 1},
     {4 n^2 + 4 n + 1, n, 2 n - 1}};
Simplify[Det[A]]
(* 1 *)
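The same check can be done in plain Python with exact integer arithmetic (a sketch, not part of the original text): the determinant of the parametric matrix equals 1 identically in n.

```python
def family(n):
    """The parametric 3x3 matrix from the text, for integer n."""
    return [[8*n*(n + 1),     2*n + 1, 4*n],
            [4*n*(n + 1),     n + 1,   2*n + 1],
            [4*n*n + 4*n + 1, n,       2*n - 1]]

def det3(m):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# The determinant is 1 for every tested integer n.
dets = [det3(family(n)) for n in range(-10, 11)]
```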
Example: The set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n \right\} \)
forms a basis for the vector space of all polynomials of degree at most n, which therefore has dimension n+1.
■
Example: The infinite set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n , \ldots \right\} \)
forms a basis for the vector space of all polynomials.
■
Theorem: Let V be a vector space and
\( \beta = \left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n \right\} \) be a subset of
V. Then β is a basis for V if and only if each vector v in V can be uniquely
expressed as a linear combination of vectors in β, that is, in the form
\( {\bf v} = \alpha_1 {\bf u}_1 + \alpha_2 {\bf u}_2 + \cdots + \alpha_n {\bf u}_n \)
for appropriately chosen scalars \( \alpha_1 , \alpha_2 , \ldots , \alpha_n . \) Therefore,
v determines a unique n-tuple of scalars
\( \left[ \alpha_1 , \alpha_2 , \ldots , \alpha_n \right] \) and, conversely, each
n-tuple of scalars determines a unique vector \( {\bf v} \in V \) by using the entries
of the n-tuple as the coefficients of a linear combination of \( {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n . \)
This fact suggests that V is like the n-dimensional vector space
\( \mathbb{R}^n , \) where n is the number of vectors in the basis for V.
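To make the coordinate correspondence concrete, here is a small Python sketch (the basis and target vector are illustrative choices, not from the text): the unique scalars are recovered by solving the 2×2 linear system, here via Cramer's rule.

```python
from fractions import Fraction

# Illustrative basis of R^2 and a target vector (hypothetical choices).
u1, u2 = (1, 0), (1, 1)
v = (3, 5)

# Solve alpha1*u1 + alpha2*u2 = v by Cramer's rule.
det = u1[0]*u2[1] - u2[0]*u1[1]          # nonzero since {u1, u2} is a basis
alpha1 = Fraction(v[0]*u2[1] - u2[0]*v[1], det)
alpha2 = Fraction(u1[0]*v[1] - v[0]*u1[1], det)
```

For this basis the unique coordinate tuple of v = (3, 5) is (−2, 5), and the linear combination −2·u1 + 5·u2 reproduces v.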
Theorem: Let S be a linearly independent subset of a vector space V,
and let v be an element of V that is not in S. Then
\( S \cup \{ {\bf v} \} \) is linearly dependent if and only if v
belongs to the span of the set S.
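This theorem can be checked computationally: v lies in span(S) exactly when appending v to S does not increase the rank. A Python sketch with exact Gaussian elimination (the vectors are illustrative choices, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of an integer matrix via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r, cols = 0, len(m[0]) if m else 0
    for c in range(cols):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

S = [(1, 0, 2), (0, 1, 1)]   # linearly independent set
v_in = (2, 3, 7)             # = 2*(1,0,2) + 3*(0,1,1), so v_in is in span(S)
v_out = (0, 0, 1)            # not in span(S)

dependent_in = rank(list(S) + [v_in]) == rank(S)    # S ∪ {v_in} is dependent
dependent_out = rank(list(S) + [v_out]) == rank(S)  # S ∪ {v_out} stays independent
```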
Theorem: If a vector space V is generated by a finite set S, then
some subset of S is a basis for V. ■
A vector space is called finite-dimensional if it has a basis consisting of a finite
number of elements. All bases for V contain the same number of elements; this common number is called
the dimension of V and is denoted by dim(V). A vector space that is not finite-dimensional is called
infinite-dimensional.
The next example demonstrates how Mathematica can extract a set of linearly independent
vectors from a given set. Note that a basis is not unique; even changing the order of the input vectors can lead
the software to return a different set of linearly independent vectors.
Example: Suppose we are given four linearly dependent vectors:
MatrixRank[m =
{{1, 2, 0, -3, 1, 0},
{1, 2, 2, -3, 1, 2},
{1, 2, 1, -3, 1, 1},
{3, 6, 1, -9, 4, 3}}]
Out[1]= 3
Then the following script determines a subset of linearly independent vectors:
m[[ Flatten[ Position[#, Except[0, _?NumericQ], 1, 1]& /@
Last @ QRDecomposition @ Transpose @ m ] ]]
The output lists the selected independent vectors as rows; apply Transpose to view them as columns.
{{1, 1, 1, 3}, {0, 1, 0, 1}, {1, 1, 2, 4}}
One can also use the standard Mathematica commands MatrixRank and RowReduce to check linear independence.
■
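For comparison, the same selection can be sketched in plain Python (a hedged translation of the idea behind the Mathematica script, not of the script itself): greedily keep each row that is not a linear combination of the rows already kept, using exact arithmetic to avoid round-off.

```python
from fractions import Fraction

def independent_rows(rows):
    """Return indices of a maximal linearly independent subset of rows,
    kept greedily in the order given (exact rational arithmetic)."""
    basis = []   # reduced rows kept so far, as (pivot_column, row) pairs
    kept = []
    for idx, row in enumerate(rows):
        r = [Fraction(x) for x in row]
        for pivot_col, b in basis:
            if r[pivot_col] != 0:        # eliminate this pivot from r
                factor = r[pivot_col] / b[pivot_col]
                r = [a - factor * c for a, c in zip(r, b)]
        pivots = [j for j, x in enumerate(r) if x != 0]
        if pivots:                       # r contributes a new direction
            basis.append((pivots[0], r))
            kept.append(idx)
    return kept

# The four vectors from the example above.
m = [[1, 2, 0, -3, 1, 0],
     [1, 2, 2, -3, 1, 2],
     [1, 2, 1, -3, 1, 1],
     [3, 6, 1, -9, 4, 3]]

kept = independent_rows(m)
```

Here the first, second, and fourth rows are kept (indices [0, 1, 3]), matching the rank 3 computed by MatrixRank; the third row is a combination of the first two.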