Connections are links between people, objects, images, words, or memories, and they make up a large part of everyday life. Relations between people, or within a community, are an important object of study in many sciences, including economics, psychology, medicine, and the sciences of individual and collective behavior. Decisions arise when an individual selects among a set of alternative actions, weighing which choices to make, the order in which to make them, and the constraints bearing on them. To do this, he or she relates objects, people, pieces of information, and knowledge.

We need a definition of binary relation that will be used not only in this section, but also in many others. Let A and B be non-empty sets. The Cartesian or direct product of the sets A and B, denoted A × B, is defined as the set of ordered pairs (𝑎, b), with 𝑎 an element of A and b an element of B; in symbols

\[ A \times B = \left\{ (a, b)\ : \ a \in A \ \mbox{ and }\ b\in B \right\} . \]
The Cartesian product A × A is also denoted A² and is the set of ordered pairs of elements of A.
A relation, or correspondence, R over a set X can be seen as a set of ordered pairs (x, y) of members of X. The relation R holds between x and y if and only if (x, y) is a member of R ⊆ X × X.
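For finite sets, the Cartesian product can be enumerated directly in Mathematica; a minimal sketch with two small hypothetical sets:
Tuples[{{1, 2, 3}, {x, y}}]
{{1, x}, {1, y}, {2, x}, {2, y}, {3, x}, {3, y}}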

Example 1: Let A be the set of cars sold in 2024 in the New England area of the US. Let us group the elements of A by manufacturer, model, and engine displacement. The following binary relations in A are then defined: two hatchback cars 𝑎 and b are in the relation RH, written 𝑎RHb; two cars c and d having the same engine displacement of 1.8 L are in the relation R1.8, written cR1.8d.
End of Example 1

A binary relation R in a set X can satisfy particular properties.

A binary relation R on a set X is said to be an equivalence relation if it is reflexive, symmetric, and transitive. That is, for all 𝑎, b, and c in X:
  • 𝑎 R 𝑎 (reflexivity).
  • 𝑎 R b if and only if b R 𝑎 (symmetry).
  • If 𝑎 R b and b R c, then 𝑎 R c (transitivity).
X together with the relation R is called a setoid. An equivalence relation is usually denoted by ∼ or ≈, which relates elements as 𝑎 ∼ b or 𝑎 ≈ b.

Example 2: The parallelism relation between lines of a plane is reflexive (each line is parallel to itself), symmetric (if the line r is parallel to the line s, then s is parallel to r), and transitive (if r is parallel to s and if s is parallel to the line t, then r is parallel to t); it is not antisymmetric.

On the other hand, the perpendicularity relation between lines of the plane is not reflexive, it is symmetric, it is not transitive, and it is neither antisymmetric nor negatively transitive (because if r is not perpendicular to s and s is not perpendicular to t, it does not follow that r is not perpendicular to t).

End of Example 2
If R is an equivalence relation in X and x is an element of X, we call the equivalence class (with respect to the relation R) determined by x, or the equivalence class of x, the set of the elements of X that are equivalent to x. The equivalence class of x is denoted by R[x] or just [x]. In symbols: \[ [\,x\,] = \left\{ a \in X \ : \ a \approx x \right\} . \]

Example 3: Let us consider the relation of having the same age rounded down to the nearest whole year (in mathematics and computer science, this rounding is called the floor) in a population. We have thus defined a relation C that is an equivalence relation. Relation C divides the population into groups of people of the same age (rounded down to the nearest integer): the kids who are not yet one year old, those who are one year old (and not yet two), those who are two, and so on, up to individuals of one hundred and fifty years, hopefully. How many equivalence classes are there with respect to C? There are at most one hundred and fifty-one.
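A minimal Mathematica sketch of this partition, using a short hypothetical list of exact ages: Floor computes the age in whole years, and GatherBy collects individuals with the same floor value into one equivalence class.
ages = {0.4, 0.9, 1.5, 2., 2.7, 65.3, 65.9};
GatherBy[ages, Floor]
{{0.4, 0.9}, {1.5}, {2., 2.7}, {65.3, 65.9}}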
End of Example 3

Theorem 1: If b belongs to the equivalence class [𝑎] with respect to the relation ≈, then [𝑎] = [b] (i. e., the equivalence class of 𝑎 is equal to the equivalence class of b).

If we prove that the set [𝑎] is contained in [b] and also that [b] is contained in [𝑎], then we will have proved the theorem. We begin by proving that [𝑎] ⊆ [b], i.e., that if an element belongs to the class [𝑎], then it belongs to [b]. Indeed, if c ∈ [𝑎], then c ≈ 𝑎; since b ∈ [𝑎], we also have 𝑎 ≈ b by symmetry, and therefore, by the transitivity of the relation ≈, we have c ≈ b, i.e., c ∈ [b]. Similarly, we show that if an element belongs to [b], then it is also in [𝑎].
    
Example 4: We consider the set of all integers. Two integers are equivalent if they have the same remainder upon division by 5. With this equivalence relation, we separate the set of integers ℤ into five disjoint classes, called congruence classes modulo 5.
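The five classes can be displayed in Mathematica by gathering integers according to their remainder modulo 5; a sketch for the integers 0 through 14:
GatherBy[Range[0, 14], Mod[#, 5] &]
{{0, 5, 10}, {1, 6, 11}, {2, 7, 12}, {3, 8, 13}, {4, 9, 14}}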
End of Example 4

Theorem 2: If two classes are not equal, then they are disjoint.

We prove that if two classes are not equal, then they do not have elements in common. Since [𝑎] ≠ [b], we may assume that there is an element c of the class [𝑎] that does not belong to the class [b]. If, by contradiction, there were an element d common to the classes [𝑎] and [b], then c ≈ 𝑎 ≈ d ≈ b, and by transitivity c would be equivalent to b, which contradicts the fact that c is not in [b].
    
Example 5: Let X be the set of ordered pairs of integers (𝑎, b) with non-zero b, and define an equivalence relation ∼ on X by (𝑎, b) ∼ (c, d) if and only if 𝑎d = bc. Then the equivalence class of the pair (𝑎, b) can be identified with the rational number 𝑎/b, and this equivalence relation and its equivalence classes can be used to give a formal definition of the set of rational numbers. The same construction can be generalized to the field of fractions of any integral domain.
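A short Mathematica sketch of this relation, with a hypothetical helper equivQ implementing the condition 𝑎d = bc:
equivQ[{a_, b_}, {c_, d_}] := a d == b c
equivQ[{1, 2}, {3, 6}] (* both pairs represent the rational 1/2 *)
True
equivQ[{1, 2}, {2, 3}]
False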
End of Example 5

 

Row Equivalent Matrices


We recall the elementary row operations from Part 1 of this tutorial.
An elementary row operation on a matrix M is any one of the following three ways of modifying M to produce a new equivalent matrix.
  1. Interchanging the positions of two rows. We abbreviate it as Ri ↕ Rj.
  2. Multiplying one row of the matrix by a nonzero scalar. This operation is denoted as Ri ← c·Ri for row i.
  3. Adding a constant multiple of a row to another row. The following notation is used: Ri ← Ri + c·Rj.
Based on this definition, we can define a row-equivalent relation between matrices.
Matrices A and B are row equivalent (abbreviated by \( \displaystyle {\bf A}\,\underset{R}{\sim} \,{\bf B} \) or \( \displaystyle {\bf A}\,\sim_R \,{\bf B} \) ), if B is obtained from A by a finite sequence of elementary row operations.

Theorem 3: A matrix A ∈ 𝔽m×n is row equivalent to matrix B ∈ 𝔽m×n if and only if there exists a non-singular matrix P ∈ 𝔽m×m such that P A = B.

    
Example 6:
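A minimal Mathematica sketch of Theorem 3, using a hypothetical 2 × 2 matrix: row-reducing the block matrix [A | I2] records the elementary row operations, so the right-hand block is a nonsingular P with P A = RowReduce[A].
A = {{1, 2}, {2, 4}};
aug = RowReduce[Join[A, IdentityMatrix[2], 2]]
{{1, 2, 0, 1/2}, {0, 0, 1, -1/2}}
P = aug[[All, 3 ;;]]
{{0, 1/2}, {1, -1/2}}
P . A == RowReduce[A] && Det[P] != 0
True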
End of Example 6
Although elementary row operations preserve the row space of the matrix, we observe that row equivalent matrices typically have different column spaces. They do, however, have identical null spaces.

Lemma 1: Row equivalent matrices have identical null spaces.

Let A belong to 𝔽m×n and let P be a product of m × m elementary matrices over 𝔽. Since P is invertible, A u = 0 if and only if \[ {\bf P}\,{\bf A}\,{\bf u} = {\bf P}\,{\bf 0} = {\bf 0} . \] This is enough to prove the lemma.
   
Example 7:
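A quick numerical check of Lemma 1, sketched with a hypothetical rank-one matrix: A and its reduced row echelon form are row equivalent, and NullSpace returns the same basis for both.
A = {{1, 2, 3}, {2, 4, 6}};
NullSpace[A] == NullSpace[RowReduce[A]]
True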
End of Example 7

 

Column Equivalent Matrices


We recall the elementary column operations from Part 1 of this tutorial.
An elementary column operation on a matrix A ∈ 𝔽m×n is an operation of one of the following three types:
  1. Columns i and j are interchanged (denoted by Ci ↔ Cj).
  2. Multiplying one column of the matrix by a nonzero scalar. This operation is denoted as Ci ← c·Ci for column i.
  3. Adding a constant multiple of a column to another column. The following notation is used: Ci ← Ci + c·Cj.
Based on this definition, we can define a column-equivalent relation between matrices.
Matrices A and B are column equivalent (abbreviated by \( \displaystyle {\bf A}\,\underset{C}{\sim} \,{\bf B} \) or \( \displaystyle {\bf A}\,\sim_C \,{\bf B} \) ), if B is obtained from A by a finite sequence of elementary column operations.

Theorem 4: A matrix A ∈ 𝔽m×n is column equivalent to matrix B ∈ 𝔽m×n if and only if there exists a non-singular matrix Q ∈ 𝔽n×n such that A Q = B.

   
Example 8:
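A small Mathematica sketch of column equivalence with a hypothetical matrix: since column operations on A are row operations on its transpose, a column-equivalent matrix is produced by transposing, row-reducing, and transposing back.
A = {{1, 2}, {1, 2}};
B = Transpose[RowReduce[Transpose[A]]]
{{1, 0}, {1, 0}}
MatrixRank[A] == MatrixRank[B]
True
Both matrices have column space span{(1, 1)}, in agreement with Lemma 2 below.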
End of Example 8
Observe that column equivalent matrices typically have different row spaces. However, they do preserve column spaces, and therefore have the same range. Hence, column equivalent matrices have the same rank (which is the dimension of the range).

Lemma 2: Column equivalent matrices have identical column spaces.

Let A belong to 𝔽m×n and let Q be a product of n × n elementary matrices over 𝔽, so that B = A Q. Since Q is invertible, \[ {\bf B}\,{\bf x} = {\bf A} \left( {\bf Q}\,{\bf x} \right) \qquad\mbox{and}\qquad {\bf A}\,{\bf y} = {\bf B} \left( {\bf Q}^{-1} {\bf y} \right) \] for all column vectors x and y, so every vector in the column space of B lies in the column space of A and vice versa. This is enough to prove the lemma.
   

Equivalent Matrices

In Part 1 of this tutorial, we discussed two particular examples of equivalence relations between matrices generated by elementary row operations and elementary column operations (see section). Now it is time to unite these two kinds of elementary operations.
An m-by-n matrix A is equivalent to another m × n matrix B (abbreviated as A ∼ B) if B is obtained from A by a finite sequence of elementary operations, either row or column.
Note that both row equivalence and column equivalence are special cases of equivalence. Now we come to a property that characterizes the equivalence relation between matrices.

Theorem 5: A matrix A is equivalent to matrix B if and only if there exist two nonsingular square matrices P ∈ 𝔽m×m and Q ∈ 𝔽n×n such that P A = B Q.

This statement follows from Theorems 4 and 5 of the section on row operations. The matrix P in this theorem represents the row operations used, while Q represents the column operations.

Also, we know from the section on changing bases that performing row operations is equivalent to changing the basis used to represent vectors in the image, and performing column operations (which is the same as multiplying by Q−1) is equivalent to changing the basis used to represent vectors in the domain.

Theorem 6: Let V and W be vector spaces with dim(V)= n and dim(W) = m, respectively. Then two m × n matrices A and B are equivalent if and only if they represent the same linear transformation T : VW, but possibly with respect to different ordered bases. In this case, A and B represent exactly the same set of linear transformations in ℒ(V, W).

Suppose first that A and B represent a linear transformation T ∈ ℒ(V, W), that is, \[ {\bf A} = [\![ T ]\!]_{\alpha \to \beta} \qquad \mbox{and} \qquad {\bf B} = [\![ T ]\!]_{\gamma \to \delta} \] for ordered bases α, β, γ, and δ. In Part 3 of this tutorial, we showed that the matrices A and B are then equivalent. Now we prove the converse and assume that A and B are equivalent, say \[ {\bf B} = {\bf P}\,{\bf A}\,{\bf Q}^{-1} , \] where P and Q are invertible. Suppose also that A represents a linear transformation T ∈ ℒ(V, W) for some ordered bases α and β, that is, \[ {\bf A} = [\![ T ]\!]_{\alpha \to \beta} . \] There is a unique ordered basis γ for V for which Q−1 = ⟦id⟧γ→α, and a unique ordered basis δ for W for which P = ⟦id⟧β→δ. Hence, \[ {\bf B} = [\mbox{id}]_{\beta \to \delta} [\![ T ]\!]_{\alpha \to \beta} [\mbox{id}]_{\gamma \to \alpha} = [\![ T ]\!]_{\gamma \to \delta} , \] so B also represents T. By symmetry, we see that A and B represent the same set of linear transformations. This completes the proof.
    
Example 9:
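A minimal illustration of Theorem 6, sketched with a hypothetical projection: T(x, y) = (x, 0) is represented by A in the standard bases; switching the domain basis to γ = [(1, 0), (2, 1)] changes the representing matrix to B = A Q, where the columns of Q are the γ vectors.
A = {{1, 0}, {0, 0}}; (* T in the standard bases *)
Q = Transpose[{{1, 0}, {2, 1}}]; (* columns = new domain basis *)
B = A . Q
{{1, 2}, {0, 0}}
So A and B represent the same transformation T with respect to different ordered bases; this is exactly the pair of matrices discussed below.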
End of Example 9

Left-multiplication by the reduction (elementary) matrices making up P performs row operations. Right-multiplication by the reduction matrices making up Q performs column operations. Hence, matrix equivalence is a generalization of row equivalence — two matrices are row equivalent if one can be converted to the other by a sequence of row reduction steps, while two matrices are matrix equivalent if one can be converted to the other by a sequence of row reduction steps followed by a sequence of column reduction steps. Consequently, if matrices are row equivalent then they are also matrix equivalent since we can take Q to be the identity matrix. The converse, however, does not hold: two matrices can be matrix equivalent but not row equivalent. For instance, matrices

\[ {\bf A} = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \qquad \mbox{and} \qquad {\bf B} = \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix} \]
are equivalent because the second matrix is reduced to the first by the column operation C2 ← C2 − 2·C1. Moreover, the column space of matrix A is the same as the column space of matrix B. Also, rank(A) = rank(B). However, they are not row equivalent because they have different reduced row echelon forms (both are already in reduced form).
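Mathematica confirms the different reduced row echelon forms and the equal ranks:
A = {{1, 0}, {0, 0}}; B = {{1, 2}, {0, 0}};
{RowReduce[A] == RowReduce[B], MatrixRank[A] == MatrixRank[B]}
{False, True}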

On the other hand, matrices

\[ {\bf C} = \begin{bmatrix} 3&1 \\ 3&1 \end{bmatrix} \qquad \mbox{and} \qquad {\bf D} = \begin{bmatrix} 3&1 \\ 0&0 \end{bmatrix} \]
are row equivalent because it takes only one row operation, R2 ← R2 − R1, to reduce C to D, and rank(C) = rank(D). Row equivalent matrices have the same rank, but typically they have different column spaces. In our case, the column space of matrix C is span{(1, 1)}, while the column space of matrix D is span{(1, 0)}.
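The same check in Mathematica; the symbols C and D are protected (built-in), so the hypothetical names mC and mD are used instead:
mC = {{3, 1}, {3, 1}}; mD = {{3, 1}, {0, 0}};
RowReduce[mC] == RowReduce[mD]
True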

Theorem 7: Every m × n matrix A is equivalent to a unique matrix of the form \[ {\bf A} \,\sim\, \begin{bmatrix} {\bf I}_r & {\bf 0} \\ {\bf 0} & {\bf 0} \end{bmatrix} , \] where r is the rank of the matrix A, that is, the number of nonzero rows in its reduced row echelon form.

Let A belong to 𝔽m×n. Take an ordered basis for 𝔽n, α = [v1, v2, … , vn], whose last n − r terms [vr+1, vr+2, … , vn] constitute a basis for the null space of A (which we denote by ker(A)). The images A v1, A v2, … , A vr of the first r elements of α form a basis for the column space of A; extend them to an ordered basis β = [A v1, A v2, … , A vr, wr+1, … , wm] for 𝔽m. The matrix representation ⟦TA⟧α→β then has the form given in the statement of the theorem. Since A is the matrix representation of TA with respect to the natural bases of 𝔽n and 𝔽m, A and ⟦TA⟧α→β are equivalent.
   
Example 10:
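A sketch of Theorem 7 in Mathematica with a hypothetical rank-one matrix: first row-reduce, then column-reduce by row-reducing the transpose; the result is the block form with Ir in the upper-left corner.
A = {{2, 4}, {1, 2}, {3, 6}};
MatrixRank[A]
1
Transpose[RowReduce[Transpose[RowReduce[A]]]]
{{1, 0}, {0, 0}, {0, 0}}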
End of Example 10
Matrices A and B in 𝔽m×n are rank equivalent if rank A = rank B.
By Theorem 7, two m × n matrices are equivalent exactly when they have the same rank, so we can stop saying matrices are equivalent and instead say they are rank equivalent. Two matrices may have the same rank, but it does not mean they are row equivalent. For a simple counterexample, consider A = [1 0] and B = [0 1]. Both of these matrices have rank 1, but they are not row equivalent because they are already in reduced row echelon form and these forms differ.


Theorem ??: Let A ∈ 𝔽m×n. If a sequence of elementary row operations on the first m rows and elementary column operations on the first n columns applied to \[ \begin{bmatrix} {\bf A} & {\bf I}_m \\ {\bf I}_n & {\bf 0}_{n\times m} \end{bmatrix} \qquad \mbox{yields} \qquad \begin{bmatrix} {\bf B} & {\bf P} \\ {\bf Q} & {\bf 0} \end{bmatrix} , \] then P A Q = B.

Example 11: Let us consider two matrices \[ {\bf A} = \begin{bmatrix} \phantom{-}2&-3&\phantom{-}6&\phantom{-}2&\phantom{-}5 \\ -2&\phantom{-}3&-3&-3&-4 \\ \phantom{-}4&-6&\phantom{-}9&\phantom{-}5&\phantom{-}9 \\ -2&\phantom{-}3&\phantom{-}3&-4&\phantom{-}1 \end{bmatrix} , \quad {\bf B} = \begin{bmatrix} \phantom{-}22& -16& -2& \phantom{-}1& \phantom{-}3 \\ -69& \phantom{-}77& \phantom{-}1& -4& -19 \\ -228& 238& \phantom{-}6& -13& -57 \\ \phantom{-}47& -61& \phantom{-}1& \phantom{-}3& \phantom{-}16 \end{bmatrix} . \]
A = {{2, -3, 6, 2, 5}, {-2, 3, -3, -3, -4}, {4, -6, 9, 5, 9}, {-2, 3, 3, -4, 1}}
B = {{22, -16, -2, 1, 3}, {-69, 77, 1, -4, -19}, {-228, 238, 6, -13, -57}, {47, -61, 1, 3, 16}}
Matrices A and B are equivalent because \[ {\bf B} = {\bf P}\,{\bf A}\,{\bf Q} , \] where \[ {\bf P} = \begin{bmatrix} \phantom{-}1& \phantom{-}1& \phantom{-}0& -2 \\ -2& \phantom{-}3& \phantom{-}1& \phantom{-}0 \\ -2& \phantom{-}0& -3& \phantom{-}4 \\ -1& -2& \phantom{-}1& \phantom{-}2 \end{bmatrix} , \quad {\bf Q} = \begin{bmatrix} 1& 0& \phantom{-}0& \phantom{-}5& 1 \\ 0& \phantom{-}1& \phantom{-}0& \phantom{-}2& 0 \\ 0& \phantom{-}0& \phantom{-}1& \phantom{-}0& 0 \\ 3& -2& \phantom{-}0& -1& 0 \\ 3& -4& -1& 0& 1 \end{bmatrix} . \]
P = {{1, 1, 0, -2}, {-2, 3, 1, 0}, {-2, 0, -3, 4}, {-1, -2, 1, 2}}
Q = {{1, 0, 0, 5, 1}, {0, 1, 0, 2, 0}, {0, 0, 1, 0, 0}, {3, -2, 0, -1, 0}, {3, -4, -1, 0, 1}}
P . A . Q
{{22, -16, -2, 1, 3}, {-69, 77, 1, -4, -19}, {-228, 238, 6, -13, -57}, {47, -61, 1, 3, 16}}
Mathematica confirms that matrices P and Q are nonsingular.
Det[P]
4
Det[Q]
3
End of Example 11

Example 12: We consider a 3×4 matrix \[ {\bf A} = \begin{bmatrix} 1&1&0&-2 \\ 2&0&2&2 \\ 4&1&3&1 \end{bmatrix} . \]
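One way to complete this example in Mathematica (a sketch): the matrix has rank 2, so by Theorem 7 it is equivalent to the 3 × 4 block matrix with I2 in the upper-left corner; the column reduction is again performed on the transpose.
A = {{1, 1, 0, -2}, {2, 0, 2, 2}, {4, 1, 3, 1}};
MatrixRank[A]
2
RowReduce[A]
{{1, 0, 1, 1}, {0, 1, -1, -3}, {0, 0, 0, 0}}
Transpose[RowReduce[Transpose[RowReduce[A]]]]
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 0, 0}}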
End of Example 12

 

Matrices as Operators


Let T : VW be a linear transformation of a finite-dimensional vector space V into another finite-dimensional vector space W, both over the same field 𝔽. Let α be an ordered basis of V, with dim V = n, and let β be an ordered basis of W, with dim W = m. Then this transformation defines the matrix A = ⟦Tα→β ∈ 𝔽m×n. In particular, if V = 𝔽n and W = 𝔽m, each with its standard basis, then matrix A upon multiplication from the left maps 𝔽n×1 into 𝔽m×1.

An alternative point of view is also valid. Every m-by-n matrix A ∈ 𝔽m×n can be considered as an operator A : 𝔽n×1 ⇾ 𝔽m×1 acting on column vectors upon multiplication from the left, y = A x ∈ 𝔽m×1, where x ∈ 𝔽n×1. This operator is naturally extended to the Cartesian products, which we denote as TA : 𝔽n ⇾ 𝔽m. The parallel lives of matrices and linear transformations extend to the equivalence relations we have defined on 𝔽m×n.

Linear transformations L₁ and L₂ in ℒ(V, W) are rank equivalent when rank(L₁) = rank(L₂). In other words, their matrix representations in any bases have the same rank.

Lemma 3: If V ≅ 𝔽n and W ≅ 𝔽m, then L₁ and L₂ in ℒ(V, W) are rank equivalent if and only if their matrix representations in 𝔽m×n are rank equivalent.

   
Example 13:
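A small check of Lemma 3, sketched with hypothetical matrices: two different transformations from 𝔽³ to 𝔽², whose standard-basis representations both have rank 2, are rank equivalent.
L1 = {{1, 0, 1}, {0, 1, 1}};
L2 = {{2, 4, 6}, {1, 3, 5}};
MatrixRank[L1] == MatrixRank[L2]
True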
End of Example 13

Similar Matrices

The material in this section so far demonstrates one of the worst uses of the word “equivalent” in all of mathematics. For the particular case of square matrices, another label is used.
Square matrices A and B in 𝔽n×n are similar provided there is an invertible matrix P ∈ 𝔽n×n such that B P = P A. Similarity of matrices is usually abbreviated as A ∼S B or just A ∼ B. The equivalence classes associated with similarity are called similarity classes.
Similarity is more selective than rank equivalence, in the following sense. While similar matrices are rank equivalent, rank equivalent matrices are not necessarily similar. This stands to reason if we think about the roles of P and Q when we consider rank equivalent matrices A and B in 𝔽n×n, with B Q = P A. The only requirement for P and Q is that they be invertible and have the correct dimensions. When square matrices A and B are similar, we still have B Q = P A, for invertible P and Q, but now Q = P. This makes similarity more restrictive.

Theorem 8: Let V be a vector space of dimension n. Then two n × n matrices A and B are similar if and only if they represent the same linear operator T : VV, but possibly with respect to different ordered bases. In this case, A and B represent exactly the same set of linear operators in ℒ(V).

   
If A and B represent a linear operator T ∈ ℒ(V), that is, if \[ {\bf A} = [\![ T ]\!]_{\alpha} \qquad \mbox{and} \qquad {\bf B} = [\![ T ]\!]_{\beta} , \] for ordered bases α and β, then A and B are similar. Now suppose that A and B are similar, say \[ {\bf B}\,{\bf P} = {\bf P}\,{\bf A} . \] Suppose also that A represents a linear operator T ∈ ℒ(V) for some ordered basis α, that is, \[ {\bf A} = [\![ T ]\!]_{\alpha} . \] There is a unique ordered basis β for V for which P = ⟦id⟧α→β. Hence, \[ {\bf B} = [\![\mbox{id}]\!]_{\alpha \to \beta} [\![ T ]\!]_{\alpha} [\![\mbox{id}]\!]_{\alpha \to \beta}^{-1} = [\![ T ]\!]_{\beta} , \] so B also represents T. By symmetry, we see that A and B represent the same set of linear operators. This completes the proof.
   
Example 14: We consider two equivalent matrices \[ {\bf A} = \begin{bmatrix} 1&2 \\ 1&2 \end{bmatrix} \qquad \mbox{and} \qquad {\bf B} = \begin{bmatrix} 1&2 \\ 2&4 \end{bmatrix} . \] Mathematica confirms that both matrices have the same rank:
A = {{1, 2}, {1, 2}}; B = {{1, 2}, {2, 4}};
MatrixRank[A] == MatrixRank[B]
True
First, we use elementary row operations, encoded by the elementary matrices \[ {\bf E}_1 = \begin{bmatrix} 1&0 \\ -1&1 \end{bmatrix} , \qquad {\bf F}_1 = \begin{bmatrix} 1&0 \\ -2&1 \end{bmatrix} , \] to reduce both matrices to row echelon form. Upon multiplication by these matrices, we get \[ {\bf E}_1 {\bf A} = \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix} , \qquad {\bf F}_1 {\bf B} = \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix} . \]
E1 = {{1, 0}, {-1, 1}};
E1 . A
{{1, 2}, {0, 0}}
and
F1 = {{1, 0}, {-2, 1}}; F1 . B
{{1, 2}, {0, 0}}
Choosing the matrix \[ {\bf E}_2 = \begin{bmatrix} 1&-2 \\ 0&1 \end{bmatrix} , \] we obtain \[ {\bf E}_1 {\bf A} \,{\bf E}_2 = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} \qquad\mbox{and} \qquad {\bf F}_1 {\bf B} \,{\bf E}_2 = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} . \] So these two 2×2 matrices are rank equivalent. However, these matrices are not similar. Assume, on the contrary, that there exists an invertible matrix \[ {\bf P} = \begin{bmatrix} a&b \\ c&d \end{bmatrix} \] such that \[ {\bf P}\,{\bf A} = {\bf B}\,{\bf P}. \] This matrix equation is equivalent to the system of four equations \[ \begin{split} a+b &= a + 2 c , \\ 2 a + 2 b &= b + 2 d , \\ c+d &= 2 a + 4 c , \\ 2 c + 2 d &= 2 b + 4 d , \end{split} \qquad \Longrightarrow \qquad \begin{split} b &= 2 c , \\ a + c &= d , \\ d &= 2 a + 3 c , \\ 0 &= c + d . \end{split} \]
P = {{a, b}, {c, d}};
P . A
{{a + b, 2 a + 2 b}, {c + d, 2 c + 2 d}}
B . P
{{a + 2 c, b + 2 d}, {2 a + 4 c, 2 b + 4 d}}
We solve these equations with Mathematica
Solve[{ a + c == d, d == 2 a + 3 c, 0 == c + d}, {a, c, d}]
{{c -> -(a/2), d -> a/2}}
Therefore, the corresponding matrix P depends on one parameter: \[ {\bf P} = \frac{a}{2} \begin{bmatrix} 2&-2 \\ -1&1 \end{bmatrix} , \] which is a singular matrix for every value of a. Hence, no invertible P with P A = B P exists, and the matrices A and B are not similar.
End of Example 14

  1. Let R₁ = {(x, y) ∈ ℝ × ℝ : |x − y| < 1} and x ∼₁ y when (x, y) ∈ R₁. Is this an equivalence relation, and if not, what fails?
  2. Show that matrices \( \displaystyle {\bf A} = \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix} \quad \) and \( \displaystyle \quad {\bf B} = \begin{bmatrix} 1&1 \\ 3&3 \end{bmatrix} \quad \) are equivalent but not similar.
  3. Let A and B belong to 𝔽n×n. If A is similar to B, show that A² ∼S B².
  4. Let A and C belong to 𝔽n×n. If A is invertible, show that A CS C A.
