
This section shows how to take a sequence of transformation matrices and combine (or concatenate) them into one single transformation matrix. This new matrix represents the cumulative result of applying all of the original transformations in order. It’s actually quite easy. The transformation that results from applying the transformation with matrix A followed by the transformation with matrix B has matrix B A if you apply them to column vectors, or A B when matrices act on row vectors. That is, matrix multiplication is how we compose transformations represented as matrices.

One very common example of this is in rendering. Imagine an object at an arbitrary position and orientation in the world, which we wish to render with a camera at any position and orientation. To do this, we must take the vertices of the object (assuming we are rendering some sort of triangle mesh) and transform them from object space into world space. This transformation is known as the model transform, which we denote Mobj→world. From there, we transform the world-space vertices into camera space with the view transform, denoted Mworld→cam. Since computer graphics conventionally works with row vectors, the corresponding math is the following:

\begin{align*} {\bf p}_{world} &= \mathbf{p}_{obj} {\bf M}_{obj \to world} , \\ {\bf p}_{cam} &= \mathbf{p}_{world} {\bf M}_{world \to cam} \\ &= \mathbf{p}_{obj} {\bf M}_{obj \to world} {\bf M}_{world \to cam} . \end{align*}
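To make the concatenation concrete, here is a minimal numerical sketch (not from the text) using NumPy, assuming 4×4 homogeneous matrices acting on row vectors; the helper names translation and rotation_z and the particular matrices are illustrative choices only:

```python
import numpy as np

# 4x4 homogeneous transforms acting on ROW vectors, so a point p is transformed
# as p @ M and composition reads left to right.
def translation(tx, ty, tz):
    M = np.eye(4)
    M[3, :3] = [tx, ty, tz]          # translation lives in the last row for row vectors
    return M

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:2, :2] = [[c, s], [-s, c]]    # transpose of the usual column-vector form
    return M

M_obj_to_world = rotation_z(0.3) @ translation(5.0, 0.0, 0.0)   # model transform
M_world_to_cam = translation(0.0, -2.0, -10.0)                  # view transform

p_obj = np.array([1.0, 2.0, 3.0, 1.0])                          # homogeneous object-space point

# Transform step by step ...
p_world = p_obj @ M_obj_to_world
p_cam   = p_world @ M_world_to_cam

# ... or concatenate once and reuse the combined matrix for every vertex.
M_obj_to_cam = M_obj_to_world @ M_world_to_cam
assert np.allclose(p_cam, p_obj @ M_obj_to_cam)
```

Concatenating once and reusing M_obj_to_cam for every vertex of the mesh is exactly the saving described above; the equality in the assertion holds by associativity of matrix multiplication.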

Compositions

Let U, V, and W be vector spaces over the same field 𝔽, and let T : U ⇾ V and S : V ⇾ W be linear transformations. The composition of S and T is the transformation S∘T : U ⇾ W given by \[ \left( S \circ T \right) ({\bf u}) = S\left( T({\bf u}) \right) . \]
   
Example 1: Let us consider a linear endomorphism \[ T \left( \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right) = \begin{bmatrix} 2\,x_1 + x_2 \\ x_1 -2\, x_2 \end{bmatrix} . \] This linear transformation T : ℝ2×1 ⇾ ℝ2×1 can be expressed as a matrix/vector multiplication: \[ T_A ({\bf x}) = {\bf A}\,{\bf x} , \] where \[ {\bf x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} , \qquad {\bf A} = \begin{bmatrix} 2&\phantom{-}1 \\ 1&-2 \end{bmatrix} \]    ■
End of Example 1
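As a quick numerical companion to Example 1 (a NumPy sketch, not part of the text), the map T is multiplication by the matrix A, and composing T with itself corresponds to the matrix product of A with itself:

```python
import numpy as np

# T(x) = A x on column vectors; composing T with itself corresponds to A @ A.
A = np.array([[2.0,  1.0],
              [1.0, -2.0]])

def T(x):
    return A @ x

x = np.array([3.0, -1.0])
print(T(x))                                # [2*3 + (-1), 3 - 2*(-1)] = [5, 5]
assert np.allclose(T(T(x)), (A @ A) @ x)   # composition = product of matrices
```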
Theorem 1: The composition of two linear transformations is a linear mapping.
Let T : U ⇾ V and S : V ⇾ W be linear transformations. We will show that S∘T is linear. For all vectors u₁ and u₂ of U and scalars α and β, we have: \begin{align*} (S \circ T)\left(\alpha\,{\bf u}_1 + \beta\,{\bf u}_2 \right) &= S\left( T\left( \alpha\, {\bf u}_1 + \beta\,{\bf u}_2 \right) \right) \\ &= S\left( \alpha\,T( {\bf u}_1 ) + \beta\, T({\bf u}_2 ) \right) \\ &= \alpha\,S(T({\bf u}_1)) + \beta\, S(T({\bf u}_2)) \\ &= \alpha\,(S \circ T)({\bf u}_1) + \beta\, (S \circ T)({\bf u}_2) . \end{align*}
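A numerical spot-check of Theorem 1 (a sketch, not a proof; the random matrices and dimensions are arbitrary choices assumed here for illustration):

```python
import numpy as np

# With T and S given by matrices, S∘T sends a*u1 + b*u2 to a*(S∘T)(u1) + b*(S∘T)(u2).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))     # matrix of T : R^2 -> R^3
B = rng.standard_normal((4, 3))     # matrix of S : R^3 -> R^4

T = lambda u: A @ u
S = lambda v: B @ v
ST = lambda u: S(T(u))              # the composition S∘T

u1, u2 = rng.standard_normal(2), rng.standard_normal(2)
a, b = 2.5, -1.3
assert np.allclose(ST(a * u1 + b * u2), a * ST(u1) + b * ST(u2))
```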
   
Example 2:    ■
End of Example 2
Theorem 2: Composition of linear transformations is associative. In other words, for linear transformations T, S, and R \[ \left( R \circ S \right) \circ T = R \circ \left( S \circ T \right) . \]
For all u in U, we have: \begin{align*} ((R \circ S) \circ T)({\bf u}) &= (R \circ S)(T({\bf u})) = R(S(T({\bf u}))) \\ &= R((S \circ T)({\bf u})) = (R \circ (S \circ T))({\bf u}) . \end{align*}
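In coordinates, Theorem 2 is just associativity of matrix multiplication. A short NumPy sketch (the matrices are illustrative random choices):

```python
import numpy as np

# For matrices acting on column vectors, (R∘S)∘T and R∘(S∘T) give the same result.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))     # T : R^2 -> R^3
B = rng.standard_normal((4, 3))     # S : R^3 -> R^4
C = rng.standard_normal((2, 4))     # R : R^4 -> R^2

u = rng.standard_normal(2)
left  = (C @ B) @ (A @ u)           # ((R∘S)∘T)(u)
right = C @ ((B @ A) @ u)           # (R∘(S∘T))(u)
assert np.allclose(left, right)
```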
   
Example 3:    ■
End of Example 3
Theorem 3: Let U, V, and W be finite dimensional vector spaces over the same field 𝔽 with ordered bases α, β, and γ, respectively. Let T : U ⇾ V and S : V ⇾ W be two linear maps with corresponding matrices ⟦T⟧α→β and ⟦S⟧β→γ. Then the matrix of their composition equals the product of the corresponding matrices, \[ [\![ S\circ T ]\!]_{\alpha \to\gamma} = [\![ S ]\!]_{\beta \to\gamma} \, [\![ T ]\!]_{\alpha \to\beta} . \]
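A small NumPy sketch of Theorem 3 under the column-vector convention, with the standard bases playing the roles of α, β, γ (the dimensions and matrices are assumed for illustration):

```python
import numpy as np

# The matrix of S∘T is the product of the matrix of S with the matrix of T.
rng = np.random.default_rng(2)
M_T = rng.standard_normal((3, 2))   # [[T]]_{alpha -> beta},  T : R^2 -> R^3
M_S = rng.standard_normal((4, 3))   # [[S]]_{beta -> gamma},  S : R^3 -> R^4

u = rng.standard_normal(2)
assert np.allclose(M_S @ (M_T @ u), (M_S @ M_T) @ u)   # [[S∘T]] = [[S]] [[T]]
```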
   
Example 4:    ■
End of Example 4
Theorem 4: Let f : U ⇾ V and g : U ⇾ W be two linear transformations. For a linear mapping h : V ⇾ W such that g = h∘f to exist, it is necessary and sufficient that ker(f) ⊆ ker(g). If this condition is satisfied and image(f) = V, then h is unique.
If h exists, then g = h∘f implies that g(x) = h(f(x)) = h(0) = 0 whenever f(x) = 0. Therefore, ker(f) ⊆ ker(g).

Conversely, let ker(f) ⊆ ker(g). We first construct h on the subspace image(f) ⊆ V. The only possibility is to set h(y) = g(x) if y = f(x). It is necessary to verify that h is determined uniquely and linearly on image(f). The first property follows from the fact that if y = f(x₁) = f(x₂), then x₁ − x₂ ∈ ker(f) ⊆ ker(g), hence g(x₁) = g(x₂). The second property follows automatically from the linearity of f and g.

Now it is sufficient to extend the mapping h from the subspace image(f) ⊆ V to the entire space V, for example, by selecting a basis in image(f), extending it to a basis of V, and setting h equal to zero on the additional basis vectors.
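A concrete instance of Theorem 4 as a NumPy sketch; the particular maps below are illustrative choices, not from the text: f drops the third coordinate, g sums the first two, so ker(f) = span(e₃) ⊆ ker(g), and h(y₁, y₂) = y₁ + y₂ satisfies g = h∘f.

```python
import numpy as np

F = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])     # f : R^3 -> R^2  (projection onto first two coordinates)
G = np.array([[1.0, 1.0, 0.0]])     # g : R^3 -> R^1  (sum of first two coordinates)
H = np.array([[1.0, 1.0]])          # h : R^2 -> R^1

assert np.allclose(H @ F, G)        # g = h∘f as matrices

# If the kernel condition failed (say g(x) = x3), no such h could exist,
# because h∘f must vanish wherever f does.
```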

   
Example 5:    ■
End of Example 5

 

More about Inverting


Lemma 1: If T : U ⇾ V and S : V ⇾ W are two linear maps, then: \[ S \circ T \, : \ U ⇾ V ⇾ W \mbox{ injective} \qquad \Longrightarrow \qquad T \mbox{ injective} . \]
   

Example 6: Take W = U in Lemma 1 and assume that S∘T is the identity map on U: \[ S \circ T = \mbox{id}_U \qquad \Longrightarrow \qquad T \mbox{ is injective and } S \mbox{ is surjective}. \] In general, nothing more can be said: S and T are only partly inverse (meaning either a left inverse or a right inverse) of each other.

Let us consider two mappings T, S : ℤ ⇾ ℤ = {0, ±1, ±2, … } defined as \[ T(n) = 2n , \qquad S(2n) = S(2n+1) = n . \] In other words, \[ S(m) = \left\lfloor \frac{m}{2} \right\rfloor , \] where ⌊ ⌋ is the floor operation. Since (S∘T)(n) = S(T(n)) = S(2n) = n, we have \[ S \circ T = \mbox{id}_{\mathbb{Z}} . \] However, neither T nor S is bijective, so neither is invertible. The left inverse S of T is not a right inverse of T.    ■

End of Example 6
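Example 6 can be checked directly on a range of integers; here is a small Python sketch (the tested range is an arbitrary choice):

```python
# S∘T is the identity on the integers, but T∘S is not, so S is only a
# left inverse of T (and T only a right inverse of S).
def T(n):            # doubling map, injective but not surjective
    return 2 * n

def S(m):            # floor-halving map, surjective but not injective
    return m // 2    # Python's // is the floor division used in the text

assert all(S(T(n)) == n for n in range(-50, 50))        # S∘T = id
assert any(T(S(m)) != m for m in range(-50, 50))        # T∘S ≠ id (fails on odd m)
```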
Lemma 2: If T : U ⇾ V and S : V ⇾ W are two linear maps, then: \[ S \circ T \, : \ U ⇾ V ⇾ W \mbox{ surjective} \qquad \Longrightarrow \qquad S \mbox{ surjective} . \]
   
Example 7:    ■
End of Example 7

If linear transformations T and S are invertible, then their composition is also invertible and

\[ \left( S \circ T \right)^{-1} = T^{-1} \circ S^{-1} . \]
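A quick NumPy sketch of this formula for invertible matrices (random 4×4 matrices, which are invertible for generic choices, are assumed here for illustration):

```python
import numpy as np

# (S∘T)^{-1} = T^{-1} ∘ S^{-1} in matrix form: inv(B A) = inv(A) inv(B).
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))     # matrix of T
B = rng.standard_normal((4, 4))     # matrix of S

inv_of_composition = np.linalg.inv(B @ A)                        # (S∘T)^{-1}
composition_of_inverses = np.linalg.inv(A) @ np.linalg.inv(B)    # T^{-1} S^{-1}
assert np.allclose(inv_of_composition, composition_of_inverses)
```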
Let us show how one may establish the main invertibility result without appealing to rank theory.
Theorem 5: Let T : VV be a linear endomorphism in a finite dimensional vector space V. If T is left invertible, then T is invertible.
Let n = dim(V), so that the space of linear maps V ⇾ V has dimension N = n² (this follows from the representation of any endomorphism by an n × n matrix). The N + 1 linear maps \[ T , \ T^2 , \ T^3 , \ \ldots , \ T^{N+1} \] cannot be linearly independent. Let \[ \sum_{i=1}^{N+1} c_i T^i = 0 \] be a nontrivial linear dependence relation. There is a smallest index j with \[ c_j T^j + c_{j+1} T^{j+1} + \cdots + c_{N+1} T^{N+1} = 0 , \qquad c_j \ne 0 . \] Dividing by cj, we get the simpler relation \[ T^j \left( I + b_{j+1} T + \cdots \right) = 0 , \qquad b_k = c_k / c_j . \] Applying the left inverse of T (j times), we deduce \[ I + b_{j+1} T + b_{j+2} T^2 + \cdots = 0 . \] Hence, \[ I = T \left( - b_{j+1} I - b_{j+2} T - \cdots \right) , \] showing that T has a right inverse and is therefore invertible.
   
Example 8: Let A be a nilpotent matrix of size n × n, satisfying A^k = 0 for some integer k ⩾ 1. Then I_n ± A are invertible and, for example, \[ \left( {\bf I} - {\bf A} \right)^{-1} = {\bf I} + {\bf A} + {\bf A}^2 + \cdots + {\bf A}^{k-1} , \] where I is the identity matrix. Indeed, the product of this finite sum with I_n − A is \begin{align*} \left( {\bf I} - {\bf A} \right) \left( {\bf I} + {\bf A} + {\bf A}^2 + \cdots + {\bf A}^{k-1} \right) &= {\bf I} + {\bf A} + {\bf A}^2 + \cdots + {\bf A}^{k-1} \\ & \quad - {\bf A} \left( {\bf I} + {\bf A} + {\bf A}^2 + \cdots + {\bf A}^{k-1} \right) \\ &= {\bf I}_n - {\bf A}^k \\ &= {\bf I}_n . \end{align*}    ■
End of Example 8
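A NumPy sketch of Example 8 for one concrete nilpotent matrix (the particular strictly upper triangular matrix below is an illustrative choice):

```python
import numpy as np

# For a nilpotent A with A^k = 0, the finite geometric series I + A + ... + A^{k-1}
# inverts I - A.
A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])     # strictly upper triangular, so A^3 = 0 (k = 3)
I = np.eye(3)

series = I + A + A @ A              # I + A + A^2
assert np.allclose((I - A) @ series, I)
assert np.allclose(series, np.linalg.inv(I - A))
```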
   
  1. Let T : ℝ³ ⇾ ℝ³ be given by left multiplication by the matrix \[ {\bf A} = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix} \] in the canonical basis e₁, e₂, e₃. What is the matrix of T in the basis e₃, e₂, e₁?

 
