In the previous subsection, we saw that any m-by-n matrix A defines a linear transformation
from the n-dimensional column vector space 𝔽n×1 to the m-dimensional column space 𝔽m×1. In this subsection, we show that, conversely, any linear transformation T : V ≌ 𝔽n×1 ⇾ W ≌ 𝔽m×1 can be interpreted as multiplication by a matrix from the left.
A linear transformation T : 𝔽n×1 ⇾ 𝔽m×1 is represented by a matrix A when T can be computed using multiplication by matrix A:
\[
T({\bf x}) = \mathbf{A}\,\mathbf{x} , \qquad \forall \mathbf{x} \in \mathbb{F}^{n\times 1} .
\]
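As a minimal illustration of this definition, multiplication by a fixed matrix realizes such a transformation in NumPy (the matrix A below is hypothetical, chosen only for the example):

```python
import numpy as np

# A hypothetical 2-by-3 matrix A defines a map T : R^3 -> R^2 by T(x) = A x.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])

def T(x):
    """Apply the matrix transformation T(x) = A x."""
    return A @ x

x = np.array([1.0, 1.0, 1.0])
print(T(x))  # each entry is the dot product of a row of A with x
```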
Matrices of Linear Transformations
First, though, we show how to find the matrix that represents a given linear map between any two finite-dimensional vector spaces.
For any vector x ∈ V ≌ 𝔽n with ordered basis α = [e₁, e₂, … , en] of V and ordered basis β = [ε₁, ε₂, … , εm] of W, linearity gives
\[
T({\bf x}) = T\left( \sum_{i=1}^n x_i {\bf e}_i \right) = \sum_{i=1}^n x_i \, T({\bf e}_i) = \sum_{i=1}^n x_i \sum_{j=1}^m a_{i,j} \varepsilon_j = \sum_{j=1}^m \left( \sum_{i=1}^n a_{i,j} x_i \right) \varepsilon_j ,
\]
where the coefficients 𝑎i,j of the expansions T(ei) = 𝑎i,1ε₁ + ⋯ + 𝑎i,mεm constitute the transformation matrix ⟦T⟧. This matrix can be used to define the matrix multiplication operator (3) upon writing the coordinates of the vectors x and T(x) in column form. This reveals a powerful fact:
Observation:
If we know where a linear transformation T : V ≌ 𝔽n ⇾ W ≌ 𝔽m sends the ordered basis vectors, we can find its effect on any input vector x ∈ V.
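A small numerical sketch of this observation (the images of the basis vectors below are hypothetical, chosen only for the example): knowing T(e₁) and T(e₂) is enough to evaluate T on any input, and the resulting map coincides with multiplication by the matrix whose columns are those images.

```python
import numpy as np

# Hypothetical images of the standard basis vectors under some T : R^2 -> R^3.
T_e1 = np.array([1.0, 0.0, 1.0])   # T(e1)
T_e2 = np.array([-1.0, 1.0, 1.0])  # T(e2)

def T(x):
    # Linearity: T(x1*e1 + x2*e2) = x1*T(e1) + x2*T(e2).
    return x[0] * T_e1 + x[1] * T_e2

# The same map as multiplication by the matrix with columns T(e1), T(e2):
A = np.column_stack([T_e1, T_e2])
x = np.array([2.0, 3.0])
assert np.allclose(T(x), A @ x)
```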
Example 8:
Let us consider the linear transformation T : ℝ³ ⇾ ℝ² defined by T(x₁, x₂, x₃) = (x₁ + x₂, x₂ − x₃). Setting T(x) = 0 gives x₁ + x₂ = 0 and x₂ − x₃ = 0, whose general solution is
\[
x_1 = -t , \qquad x_2 = t , \qquad x_3 = t , \qquad t \in \mathbb{R} .
\]
Hence ker(T) is the one-dimensional subspace of ℝ³ spanned by the vector (−1, 1, 1).
Let S be the subspace of ℝ³ spanned by i and k. If x ∈ S, then x must be of the form (𝑎, 0, b), and hence T(x) = (𝑎, −b). Clearly, T(S) = ℝ². Since the image of the subspace S is all of ℝ², it follows that the entire range of T must be ℝ².
■
End of Example 8
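The facts in Example 8 determine T uniquely: T(𝑎, 0, b) = (𝑎, −b) forces T(e₁) = (1, 0) and T(e₃) = (0, −1), while the kernel condition forces T(e₂) = T(e₁) − T(e₃) = (1, 1). A quick numerical sanity check of the resulting matrix (this explicit matrix is reconstructed from those constraints, not quoted verbatim from the example):

```python
import numpy as np

# Matrix of T consistent with T(a, 0, b) = (a, -b) and
# ker(T) spanned by (-1, 1, 1):
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, -1.0]])

# Kernel check: the spanning vector of ker(T) maps to zero.
assert np.allclose(A @ np.array([-1.0, 1.0, 1.0]), 0.0)

# Image of S = span{i, k}: vectors (a, 0, b) map to (a, -b).
a, b = 2.0, 5.0
assert np.allclose(A @ np.array([a, 0.0, b]), [a, -b])
```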
Let T : V ⇾ W be a linear transformation where V ≌ 𝔽n and W ≌ 𝔽m. Let α = [e₁, e₂, … , en] be an ordered basis (not necessarily standard) in V and β = [ε₁, ε₂, … , εm] be an ordered basis in W. The matrix representation of T with respect to α and β is
\[
[\![ T ]\!]_{\alpha \to \beta} = \begin{bmatrix} \left[ T(\mathbf{e}_1) \right]_{\beta} & \left[ T(\mathbf{e}_2) \right]_{\beta} & \cdots & \left[ T(\mathbf{e}_n) \right]_{\beta} \end{bmatrix} ,
\]
where [T(ei)]β is the coordinate vector, written as a column, for the expansion
\[
\left[ T(\mathbf{e}_i) \right]_{\beta} = a_{i,1} \varepsilon_1 + a_{i,2} \varepsilon_2 + \cdots +a_{i,m} \varepsilon_m .
\]
When V = W and α = β, we write the transformation matrix as
\[
[\![ T ]\!]_{\alpha} = \begin{bmatrix} \left[ T(\mathbf{e}_1) \right]_{\alpha} & \left[ T(\mathbf{e}_2) \right]_{\alpha} & \cdots & \left[ T(\mathbf{e}_n) \right]_{\alpha} \end{bmatrix} .
\]
In the case of the standard basis in 𝔽n, the subscript α is dropped.
Example 9:
Let T : ℝ² ⇾ ℝ³ be the linear transformation defined by
\[
T \left( {\bf x} \right) = T \left( x_1 , x_2 \right) = \left( x_1 - x_2 , x_2 , x_1 + x_2 \right) .
\tag{9.1}
\]
Find the matrix representation of T with respect to the ordered bases α = [u₁ , u₂] and β = [b₁ , b₂, b₃], where
\[
{\bf u}_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix} , \qquad {\bf u}_2 = \begin{pmatrix} -1 \\ \phantom{-}3 \end{pmatrix}
\]
and
\[
{\bf b}_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} , \qquad {\bf b}_2 = \begin{pmatrix} -1 \\ \phantom{-}0 \\ \phantom{-}1 \end{pmatrix} , \qquad {\bf b}_3 = \begin{pmatrix} \phantom{-}0 \\ -1 \\ \phantom{-}1 \end{pmatrix} .
\]
Such a representation is unique; this can be proved by showing that for any other matrix B representing the transformation T, it follows that A = B.
Example 10:
The transformation T from \( \mathbb{R}^4 \) to \( \mathbb{R}^3 \)
defined by the equations
Although the image under the transformation TA of any vector x in ℝ4 could be computed directly from the system of equations (10.1), it is preferable to use the matrix (10.2). Remember that you need to interpret vectors as column vectors and to translate the final answer into a 3-tuple.
For example, if
Theorem 4:
Every linear transformation from 𝔽n to 𝔽m
is a matrix transformation, and conversely, every matrix transformation from
𝔽n×1 to 𝔽m×1 is a linear transformation.
From Theorem 3, we get the first part of the theorem for free:
for any linear transformation T : 𝔽n ⇾ 𝔽m, there exists a matrix A ∈ 𝔽m×n such that T(x) = A x for all x ∈ 𝔽n.
For the second part, suppose that T(x) = A x.
Then we have:
T(x + y) = A(x + y) = A x + A y = T(x) + T(y),
T(αx) = A(αx) = α A x = α T(x).
These two properties are exactly the definition of a linear transformation, so every matrix transformation is a linear transformation.
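The two identities in this proof are easy to spot-check numerically (the matrix, vectors, and scalar below are arbitrary illustrations, generated at random):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))                 # an arbitrary 3-by-4 matrix
x, y = rng.standard_normal(4), rng.standard_normal(4)
alpha = 2.5

T = lambda v: A @ v  # the matrix transformation T(v) = A v

# Additivity and homogeneity, as in the proof of Theorem 4:
assert np.allclose(T(x + y), T(x) + T(y))
assert np.allclose(T(alpha * x), alpha * T(x))
```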
Example 11:
The set ℂ of all complex numbers, regarded as a vector space over the field of complex numbers, has complex dimension 1 because its basis consists of the single element \( \{ 1 \} . \)
On the other hand, ℂ over the field of real numbers is a vector space of real dimension 2 because its basis consists of two elements \( \{ 1, {\bf j} \} . \)
Let us consider the transformation:
\[
T \left( {\bf z}_1 , {\bf z}_2 \right) = \left( {\bf z}_1 +{\bf j}\, {\bf z}_2 , - {\bf j}\,{\bf z}_1 + {\bf z}_2 \right) ,
\tag{11.1}
\]
where j is the imaginary unit, so j² = −1. The matrix corresponding to transformation (11.1) is
\[
\left[ T \right] = \begin{bmatrix} 1 & {\bf j} \\ -{\bf j} & 1 \end{bmatrix} .
\]
■
End of Example 11
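A quick check of Example 11 in Python, which supports complex arithmetic natively: applying the matrix [T] to a column vector reproduces formula (11.1) (the test vector below is an arbitrary illustration).

```python
import numpy as np

j = 1j  # imaginary unit
M = np.array([[1, j],
              [-j, 1]])  # matrix of transformation (11.1)

def T(z1, z2):
    # T(z1, z2) = (z1 + j*z2, -j*z1 + z2)
    return np.array([z1 + j * z2, -j * z1 + z2])

z = np.array([2 + 1j, 3 - 2j])
assert np.allclose(T(z[0], z[1]), M @ z)
```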
Theorem 5:
If TA : 𝔽n ⇾ 𝔽m and TB : 𝔽n ⇾ 𝔽m and
TA(v) = TB(v) for
every vector v ∈ 𝔽n, then A = B.
To say that \( T_{\bf A} \left( {\bf v} \right) = T_{\bf B} \left( {\bf v} \right) \) for every
vector in \( \mathbb{F}^n \) is the same as saying that
\[
{\bf A}\,{\bf v} = {\bf B}\,{\bf v}
\]
for every vector v in \( \mathbb{F}^n . \) This will be true, in particular, if v
is any of the standard basis vectors \( {\bf e}_1 , {\bf e}_2 , \ldots , {\bf e}_n \) for
\( \mathbb{F}^n ; \) that is,
\[
{\bf A}\,{\bf e}_j = {\bf B}\,{\bf e}_j , \qquad j = 1, 2, \ldots , n .
\]
Since every entry of ej is 0 except for the j-th, which is 1, it follows
that Aej is the j-th column of A and
Bej is the j-th column of B. Thus,
\( {\bf A}\,{\bf e}_j = {\bf B}\,{\bf e}_j \) implies that corresponding columns of A
and B are the same, and hence A = B.
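The key step of the proof, that A ej picks out the j-th column of A, is easy to confirm numerically (the matrix below is an arbitrary illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

for jcol in range(A.shape[1]):
    e = np.zeros(A.shape[1])
    e[jcol] = 1.0                      # standard basis vector e_j
    # A e_j is exactly the j-th column of A:
    assert np.allclose(A @ e, A[:, jcol])
```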
Example 12:
We consider the Chebyshev differential operator of the third kind:
\[
L_n \left[ x, \texttt{D} \right] = \left( 1- x^2 \right) \texttt{D}^2 - \left( 2x-1 \right) \texttt{D} + n \left( n+1 \right) \texttt{I} , \qquad \texttt{D} = \frac{\text d}{{\text d}x} ,
\]
where I is the identity operator. We consider the particular case n = 3 and apply the Chebyshev operator L₃ to polynomials from the space ℝ≤3[x] of polynomials of degree at most 3. This vector space has the standard basis β = [1, x, x², x³]. Applying L₃ to the members of the basis β yields
\begin{align*}
L_3 \left[ x, \texttt{D} \right] 1 &= 12 ,
\\
L_3 \left[ x, \texttt{D} \right] x &= 1 - 2x + 12x = 1 + 10\,x ,
\\
L_3 \left[ x, \texttt{D} \right] x^2 &= 2 + 2 x + 6 x^2 ,
\\
L_3 \left[ x, \texttt{D} \right] x^3 &= 6 x + 3 x^2 .
\end{align*}
These computations can be verified with a computer algebra system such as Mathematica.
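For instance, here is a sketch of the same check in Python with SymPy (in place of Mathematica):

```python
import sympy as sp

x = sp.symbols('x')

def L3(p):
    # Chebyshev operator of the third kind with n = 3:
    # (1 - x^2) D^2 - (2x - 1) D + 12 I
    p = sp.sympify(p)
    return sp.expand((1 - x**2) * sp.diff(p, x, 2)
                     - (2*x - 1) * sp.diff(p, x) + 12 * p)

for p in (1, x, x**2, x**3):
    print(L3(p))  # images of the basis polynomials 1, x, x^2, x^3
```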
Theorems 4 and 5 together tell us that there is a one-to-one correspondence
between m-by-n matrices and matrix transformations from
𝔽n ≌ 𝔽n×1 to 𝔽m ≌ 𝔽m×1: every
m × n matrix A generates exactly one matrix transformation
(multiplication by A) from 𝔽n to 𝔽m, and every matrix transformation from 𝔽n×1 to
𝔽m×1 arises from exactly one m × n
matrix. We call that matrix the standard matrix for the transformation; it is given by the formula:
\[
[\![T]\!] = \left[ T \left( {\bf e}_1 \right) \,|\, T \left( {\bf e}_2 \right) \,|\, \cdots \,|\, T \left( {\bf e}_n \right) \right] ,
\]
where e1, e2, … , en is the list of standard basis vectors for 𝔽n.
This suggests the following procedure for finding standard matrices.
Algorithm for finding the standard matrix of a linear transformation T : V ≌ 𝔽n ⇾ W ≌ 𝔽m.
Step 1: Find the images T(ei) of the standard basis vectors \( {\bf e}_1 , {\bf e}_2 , \ldots , {\bf e}_n \) for 𝔽n.
Step 2: Construct the matrix ⟦T⟧ that has the images obtained in Step 1 as its successive columns. This matrix is the standard matrix for the transformation.
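The two steps above can be sketched in NumPy, assuming T is given as a function on column vectors (the function `standard_matrix` and the closing example reusing formula (9.1) are illustrations, not part of the text):

```python
import numpy as np

def standard_matrix(T, n):
    """Build the standard matrix of a linear map T : R^n -> R^m.

    Step 1: apply T to each standard basis vector e_i;
    Step 2: use the images as successive columns.
    """
    eye = np.eye(n)
    return np.column_stack([T(eye[:, i]) for i in range(n)])

# Illustration with T(x1, x2) = (x1 - x2, x2, x1 + x2) from formula (9.1):
T = lambda x: np.array([x[0] - x[1], x[1], x[0] + x[1]])
A = standard_matrix(T, 2)
assert np.allclose(A, [[1, -1], [0, 1], [1, 1]])
```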
Example 13: Find the standard matrix A for the linear transformation T : ℝ³ ⇾ ℝ² defined by: