The Wolfram Mathematica notebook containing the code that produces all the Mathematica output on this web page may be downloaded at this link.
Caution: This notebook evaluates cell by cell, sequentially, from top to bottom. However, because variable names are re-used in later evaluations, earlier code may no longer render properly once subsequent code has been evaluated. To fix this, return to the first Clear[ ] expression above the expression that stopped working and re-evaluate from that point forward.
$Post :=
If[MatrixQ[#1],
MatrixForm[#1], #1] & (* outputs matrices in MatrixForm *)
Remove[ "Global`*"] // Quiet (* remove all variables *)
This section is devoted to illustrations of linear transformations on the plane using square matrices. This may help you to develop a geometric understanding of matrices and their relationship to coordinate space transformations in general.
Every square matrix can be regarded (and indeed is) as an operator that acts on column vectors from the left, with output also written as a column vector. We write these matrices in brackets. The same square matrix can also be regarded as a matrix-multiplication operator acting on row vectors from the right. We enclose such matrices in parentheses.
Then each linear transformation ℝⁿ ⇾ ℝⁿ is
associated with a square n×n matrix A, and vice versa. When ℝⁿ is realized as the space of either column vectors, ℝn×1, or row vectors, ℝ1×n, a linear transformation on it is generated by matrix multiplication either from the left, A x, or from the right, u A. Here x is an arbitrary column vector, x ∈ ℝn×1, and u is a row vector, u ∈ ℝ1×n. The relationship between these two forms of the matrix is called transposition (the topic of Part 2).
Plane linear transformations can be classified as reflections (or mirrorings), contractions/expansions, shears, rotations, and projections (the topic of another section). The
following subsections give the appropriate terminology for such transformations. The difference between the matrix multiplication A x or u A and the corresponding linear transformation TA : ℝ² ⇾ ℝ² is merely a matter of notation. Therefore, we can classify matrices instead of linear maps. A common approach is to synthesize an arbitrary matrix from a limited set of
primitive transformations.
Linear transformations can be decomposed into products (when we speak about matrices) or compositions (when we consider mappings) of simple operations. This decomposition into primitive or elementary matrices is not unique, which allows the same transformation to be presented as a variety of simple compositions.
Any 2D linear transformation can be decomposed into the product
of a rotation, a scale (or line reflection), and a rotation,
A = R₁SR₂.
Any 2D congruence (a transformation preserving size and shape) can be decomposed into the product of at most three line reflections.
We mention the following properties of matrix transformations:
linearity,
closed under composition,
associativity,
not commutative,
applied to column vectors by multiplication from the left,
applied to row vectors by multiplication from the right.
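The left-action on column vectors, the right-action on row vectors, and the transposition that links them can be sketched in a few lines of Python (the notebook itself uses Mathematica; the matrix A and vector x here are made-up illustrations, not data from the page):

```python
# A hypothetical 2x2 matrix and a vector, for illustration only.
A = [[1.0, 2.0],
     [3.0, 4.0]]

def mat_vec(A, x):
    """Apply A to a column vector x from the left: y = A x."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def vec_mat(u, A):
    """Apply A to a row vector u from the right: w = u A."""
    return [u[0]*A[0][0] + u[1]*A[1][0],
            u[0]*A[0][1] + u[1]*A[1][1]]

def transpose(A):
    return [[A[0][0], A[1][0]],
            [A[0][1], A[1][1]]]

x = [1.0, 1.0]
left = mat_vec(A, x)                  # A x   (column-vector action)
right = vec_mat(x, A)                 # x A   (row-vector action)
# The two actions are linked by transposition: x A == A^T x.
also = mat_vec(transpose(A), x)
```

Note that the two actions generally disagree (A x ≠ x A for non-symmetric A), which is exactly why the bracket/parenthesis distinction above matters.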
Plane Transformations
We demonstrate linear transformations acting on a house placed at the origin, i.e., with a corner at {0,0} in the two-dimensional Euclidean plane. Matrix algebra is used to move this house, with each transformation keeping the same common corner at the origin. The first figure is not a transformation but the baseline starting point that will be transformed as we proceed. The object of choice is a parallelogram, since transforming its vertex vectors is all that is required to make the changes we desire.
The base of the house (ignoring the roof and front door), when in the northeast quadrant, is composed of the four corners determined by the basis vectors i = (1, 0) and j = (0, 1), i.e., by the columns of the identity matrix.
Notice that when only the square base in a standard solid color is shown, you cannot tell whether the house is upright, flipped on its side, or upside down. Thus,
below we add a roof and a door to help orient the viewer.
Uniform scale
A diagonal matrix has non-zero entries only on its main diagonal. If these entries are equal, this type of matrix
defines a uniform scaling transformation:
\[
\begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} = a\,{\bf I} ,
\]
where 𝑎 is a positive number. There are two kinds of uniform scaling transformations.
Dilation (or expansion) is a transformation accomplished by matrix multiplication similar to the following:
\[
\begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} , \qquad \mbox{with} \quad a > 1.
\]
We can make our house 50% bigger. Note some transforms are matrix multiplications, not dot products.
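Making the house 50% bigger means multiplying every corner by the matrix 1.5 I. A minimal Python sketch (the unit-square corner list is an assumption standing in for the notebook's house data):

```python
# Uniform scaling of a unit-square "house base" by a = 1.5.
a = 1.5
S = [[a, 0.0],
     [0.0, a]]

corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def apply(M, p):
    """Multiply a 2x2 matrix M by the column vector p."""
    x, y = p
    return (M[0][0]*x + M[0][1]*y, M[1][0]*x + M[1][1]*y)

scaled = [apply(S, p) for p in corners]
# The origin corner stays fixed; the far corner (1, 1) moves to (1.5, 1.5).
```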
Let n be the unit vector in the direction of the scale and k the scale factor. A vector v transformed by this scale operation can be decomposed as
\[
{\bf v} = {\bf v}_{\perp} + {\bf v}_{\parallel} ,
\]
where v∥ = (v • n) n is the component parallel to n and v⊥ = v − v∥ is perpendicular to it. Any vector perpendicular to n is not affected by the scale operation, so v⊥ is unchanged. Since v∥ is parallel to the direction of scale,
\[
{\bf v}'_{\parallel} = k\,{\bf v}_{\parallel} .
\]
Reconstructing the solution from the observations above,
\[
{\bf v}' = {\bf v}_{\perp} + k\,{\bf v}_{\parallel} = {\bf v} + (k-1) \left( {\bf v} \bullet {\bf n} \right) {\bf n} .
\]
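The formula v' = v + (k − 1)(v • n) n for scaling along a direction n can be checked directly in Python (a sketch; the sample vectors are arbitrary):

```python
def scale_along(v, n, k):
    """Scale v by factor k along the unit direction n, leaving the
    perpendicular component untouched: v' = v + (k-1)*(v . n)*n."""
    dot = v[0]*n[0] + v[1]*n[1]            # v . n
    return (v[0] + (k - 1.0)*dot*n[0],
            v[1] + (k - 1.0)*dot*n[1])

# Scale by k = 2 along the x-axis, n = (1, 0): only the x-component doubles.
w = scale_along((3.0, 4.0), (1.0, 0.0), 2.0)

# A vector perpendicular to n is unaffected, as claimed above.
u = scale_along((0.0, 5.0), (1.0, 0.0), 2.0)
```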
Shearing is a transformation that skews the coordinate space, stretching or shrinking it. Angles are not preserved;
however, surprisingly, areas
are. Shearing is usually achieved by adding a multiple of one coordinate to the other.
We abbreviate shear matrices with the Cyrillic letter "Ш," which is pronounced "sha" in English, or like "ch" in French. Shear 2-by-2 matrices can operate either on column vectors from the left or on row vectors from the right. We distinguish these operators by writing them in either brackets or parentheses:
\begin{equation} \label{EqShear.1}
{\bf Ш}_{x} (\alpha ) = \begin{bmatrix} 1 & \alpha \\ 0 & 1 \end{bmatrix} \quad \Longrightarrow \quad {\bf Ш}_{x} (\alpha ) \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x + \alpha \,y \\ y \end{bmatrix} .
\end{equation}
A horizontal shear is observed when the vector (1, 0) pointing in the horizontal direction is fixed, while the vector (0, 1) pointing in the vertical direction is taken to the vector (𝑎, 1), where 𝑎 is some real number. Notice that if 𝑎 > 0, then our horizontal shear pushes the top of the blue parallelogram to the right. If 𝑎 < 0, then we push the top of the blue parallelogram to the left. It is easy to see from the definition of this horizontal shear that the corresponding matrix is either \eqref{EqShear.1} or \eqref{EqShear.2}. Shearing towards the y-axis is given by
\[
{\bf Ш}_{y} (\alpha ) = \begin{bmatrix} 1 & 0 \\ \alpha & 1 \end{bmatrix} .
\]
The code for the roof does not play well with the shear transform, so we dispense with the roof below, leaving only a door to remind us we started with a house. There are a number of ramifications to this worth mentioning before continuing. First, the roof is a triangle. Matrix algebra is about matrices, which are by their nature rectangles, not triangles. The reason the door survives in the illustration is that it is a rectangle, like the base. Second, when the base of the house is a rectangle the roof "follows" the base in the code, which is not true when the base is a non-rectangular parallelogram. This respects the reality that all rectangles are parallelograms but the converse is not true. Third, the house is a metaphor, an abstraction to advance the pedagogy in connection with linear algebra. We must be careful taking pedagogy into the real world. Put a roof on a base that is a tilted parallelogram and watch the house fall down to prove the importance of having the load orthogonal to its support in the real world where gravity matters.
A vertical shear fixes (0, 1) and pushes the vertical component of the vector (1, 0) up 𝑎 units, to the vector (1, 𝑎). So the matrix for a vertical shear is
\[
\begin{bmatrix} 1 & 0 \\ a & 1 \end{bmatrix} .
\]
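Both shears, and the claim that they fix one basis vector while sliding the other, can be verified in a few lines of Python (illustrative only; the page's own code is Mathematica):

```python
def shear_x(alpha, p):
    """Horizontal shear: (x, y) -> (x + alpha*y, y)."""
    x, y = p
    return (x + alpha*y, y)

def shear_y(alpha, p):
    """Vertical shear: (x, y) -> (x, y + alpha*x)."""
    x, y = p
    return (x, y + alpha*x)

a = 2.0
fixed = shear_x(a, (1.0, 0.0))   # (1, 0) is fixed by a horizontal shear
moved = shear_x(a, (0.0, 1.0))   # (0, 1) goes to (a, 1)
lifted = shear_y(a, (1.0, 0.0))  # (1, 0) goes to (1, a) under a vertical shear
# Areas are preserved: det [[1, a], [0, 1]] = 1 for every a.
```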
In three dimensions, a reflection across a plane through the origin maps v ↦ v − 2 (n • v) n, where n ∈ ℝ³ is a unit vector normal to the plane of reflection and n • v denotes the dot product of the two vectors.
Let a reflection about a line L through the origin that makes an angle θ with the abscissa (x-axis) be denoted by Ref(θ). The corresponding reflection matrix is
\[
{\bf Ref} (\theta ) = \begin{bmatrix} \cos 2\theta & \phantom{-}\sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{bmatrix} .
\]
Initially, the house is in the northeast quadrant of the Cartesian plane. Anything multiplied by the identity matrix is unchanged by that operation. So, beginning with the IdentityMatrix (the original house) and proceeding clockwise, we name the four transformation matrices, the first of which leaves the house where it is, while the others move it, in three operations, from quadrant to quadrant.
Now we consider a linear transformation that reflects vectors across a line L that makes an angle θ with the x-axis (the abscissa). The matrix that corresponds to such a transformation is
\[
{\bf Ref} (\theta ) = \begin{bmatrix} \cos 2\theta & \phantom{-}\sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{bmatrix} .
\]
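Assuming the standard formula Ref(θ) = [[cos 2θ, sin 2θ], [sin 2θ, −cos 2θ]], a quick Python check confirms that a line reflection is an involution, Ref(θ)² = I:

```python
import math

def reflection(theta):
    """Matrix of reflection across the line through the origin making
    angle theta with the x-axis: [[cos 2t, sin 2t], [sin 2t, -cos 2t]]."""
    c, s = math.cos(2*theta), math.sin(2*theta)
    return [[c, s], [s, -c]]

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = reflection(math.pi/3)
# Reflecting twice returns every vector to itself: R @ R == I.
I2 = mat_mul(R, R)
```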
Example 2:
A picture in the plane can be stored in the computer as a set of vertices. The vertices can then be plotted and connected by lines to produce the picture. If there are n vertices, they are stored in a 2 × n matrix. The x-coordinates of the vertices are stored in the first
row and the y-coordinates in the second. Each successive pair of points is connected by
a straight line.
For example, to generate a rhombus with vertices (0, 0), (1, 1), (0, 2), and (−1, 1), we store
the pairs as columns of a matrix:
\[
{\bf T} = \begin{bmatrix} 0 & 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & \phantom{-}1 & 0 \end{bmatrix} .
\]
An additional copy of the vertex (0, 0) is stored in the last column of T so that the
previous point (−1, 1) will be connected back to (0, 0) [see Figure 4.2.3(a)].
Leon, page 205, Figure 4.2.3
We can transform a figure by changing the positions of the vertices and then
redrawing the figure. If the transformation is linear, it can be carried out as a matrix multiplication. Viewing a succession of such drawings will produce the effect of animation.
The four primary geometric transformations that are used in computer graphics are
as follows:
Dilations and contractions. A linear operator of the form
\[
T({\bf x}) = c\,{\bf x}
\]
is a dilation if c > 1 and a contraction if 0 < c < 1. The operator T is represented by the matrix cI, where I is the 2 × 2 identity matrix. A dilation increases
the size of the figure by a factor c > 1, and a contraction shrinks the figure by a
factor 0 < c < 1. Figure 4.2.3(b) shows a dilation by a factor of 1.5 of the rhombus stored in the matrix T.
Reflections about an axis. If Tx is a transformation that reflects a vector x
about the x-axis, then Tx is a linear operator and hence it can be represented
by a 2 × 2 matrix A. Since
\[
T_x ({\bf e}_1 ) = {\bf e}_1 \qquad \mbox{and} \qquad T_x ({\bf e}_2 ) = -{\bf e}_2 ,
\]
it follows that
\[
{\bf A} = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} .
\]
Similarly, if Ty is the linear operator that reflects a vector about the y-axis, then
Ty is represented by the matrix
\[
\left[ T_y \right] = \begin{bmatrix} -1 & 0 \\ \phantom{-}0 & 1 \end{bmatrix} .
\]
Figure 4.2.3(c) shows the image of the rhombus after a reflection about the
y-axis.
Rotations. Let T be a transformation that rotates a vector about the origin by
an angle θ in the counterclockwise direction. We saw earlier that T is a
linear operator and that T(x) = A x, where
\[
{\bf A} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \phantom{-}\cos\theta \end{bmatrix} .
\]
Figure 4.2.3(d) shows the result of rotating the figure stored in T by 60° in the
counterclockwise direction.
Translations. A translation by a vector a is a transformation of the form
\[
T({\bf x}) = {\bf x} + {\bf a} .
\]
If a ≠ 0, then T is not a linear transformation and hence T cannot be represented by a 2 × 2 matrix. However, in computer graphics it is desirable to do
all transformations as matrix multiplications. The way around the problem is to
introduce a new system of coordinates called homogeneous coordinates.
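The homogeneous-coordinates trick appends a third coordinate equal to 1, so a translation becomes a 3 × 3 matrix product. A minimal Python sketch of this standard construction (the point and offset values are arbitrary):

```python
def translation(ax, ay):
    """3x3 homogeneous-coordinates matrix for T(x) = x + a."""
    return [[1.0, 0.0, ax],
            [0.0, 1.0, ay],
            [0.0, 0.0, 1.0]]

def apply_h(M, p):
    """Apply a 3x3 homogeneous matrix to the 2D point p = (x, y),
    lifted to (x, y, 1), then divide by the last coordinate."""
    x, y = p
    h = [M[i][0]*x + M[i][1]*y + M[i][2] for i in range(3)]
    return (h[0]/h[2], h[1]/h[2])

# Translate (1, 1) by a = (2, -1): a matrix product, not a vector sum.
q = apply_h(translation(2.0, -1.0), (1.0, 1.0))
```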
End of Example 2
Theorem 1:
If E is an elementary matrix, then TE : ℝ2×1 ⇾ ℝ2×1 is one of the following:
An expansion along a coordinate axis.
A compression along a coordinate axis.
A shear along a coordinate axis.
A reflection about the line y = x.
A reflection about a coordinate axis.
A compression or expansion along a coordinate axis followed by reflection about a coordinate axis.
Because a 2 × 2 elementary matrix results from performing a single elementary row operation on the 2 × 2 identity matrix, such a matrix must have one of the following forms:
\[
\begin{bmatrix} a & 0 \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & a \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ a & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} .
\]
The third and fourth matrices represent shears along coordinate axes, and the first two represent compressions or expansions along coordinate axes, depending on whether 0 < 𝑎 < 1 or 𝑎 > 1. If 𝑎 < 0 and we set 𝑎 = −k, where k > 0, then these matrices can be written as
\[
\begin{bmatrix} a & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -k & 0 \\ \phantom{-}0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ \phantom{-}0 & 1 \end{bmatrix} \cdot \begin{bmatrix} k & 0 \\ 0 & 1 \end{bmatrix}
\tag{P.1}
\]
and
\[
\begin{bmatrix} 1 & 0 \\ 0 & a \end{bmatrix} = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -k \end{bmatrix} = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & k \end{bmatrix} .
\tag{P.2}
\]
Since k > 0, the product in Eq.(P.1) represents a compression or expansion along the abscissa followed by a reflection about the ordinate, and Eq.(P.2) represents a compression or expansion along the ordinate followed by a reflection about the abscissa. In the case where 𝑎 = −1, transformations (P.1) and (P.2) are simply reflections about the ordinate and abscissa, respectively.
The last matrix represents a reflection about y = x.
Example 3:
Let us consider the matrix
\[
{\bf A} = \begin{bmatrix} 0 & 1 \\ \frac{3}{2} & 1 \end{bmatrix} .
\]
This matrix can be reduced to the identity matrix as follows:
\[
\begin{bmatrix} 0 & 1 \\ \frac{3}{2} & 1 \end{bmatrix} \, \rightarrow \, \begin{bmatrix} \frac{3}{2} & 1 \\ 0 & 1 \end{bmatrix} \, \rightarrow \, \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix} \, \rightarrow \, \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} .
\]
First, we interchange the first and second rows. Then we multiply every entry in the first row by 2/3 (i.e., divide by 3/2). Finally, we add −⅔ times the second row to the first row.
These three successive row operations can be performed by multiplying A on the left successively by
\[
{\bf E}_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \qquad {\bf E}_2 = \begin{bmatrix} \frac{2}{3} & 0 \\ 0 & 1 \end{bmatrix} , \qquad {\bf E}_3 = \begin{bmatrix} 1 & -\frac{2}{3} \\ 0 & \phantom{-}1 \end{bmatrix} .
\]
Inverting these matrices, we get
\[
{\bf A} = {\bf E}_1^{-1} {\bf E}_2^{-1} {\bf E}_3^{-1} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \cdot \begin{bmatrix} \frac{3}{2} & 0 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix} .
\]
Reading from right to left, we can see that the geometric effect of multiplying by A is equivalent to successively
shearing by a factor of ⅔ in the x-direction;
expanding by a factor of 3/2 in the x-direction;
reflecting about the line y = x.
The figure below illustrates the matrix decomposition.
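The decomposition of Example 3 can be verified numerically; here is a Python check (using exact fractions rather than the page's Mathematica) that the product of the inverted elementary matrices reproduces A:

```python
from fractions import Fraction as F

def mat_mul(A, B):
    """Product of two 2x2 matrices with exact rational entries."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Inverses of the elementary matrices E1, E2, E3 from Example 3.
E1_inv = [[F(0), F(1)], [F(1), F(0)]]        # reflection about y = x
E2_inv = [[F(3, 2), F(0)], [F(0), F(1)]]     # expansion by 3/2 along x
E3_inv = [[F(1), F(2, 3)], [F(0), F(1)]]     # shear by 2/3 along x

# A = E1^{-1} E2^{-1} E3^{-1} should equal [[0, 1], [3/2, 1]].
A = mat_mul(E1_inv, mat_mul(E2_inv, E3_inv))
```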
End of Example 3
Example 4:
Using Eq.\eqref{EqPlane.1} with θ = π/3, we get
\[
{\bf A} = \frac{1}{2} \begin{bmatrix} -1 & \sqrt{3} \\ \sqrt{3} & 1 \end{bmatrix} .
\]
The matrix A can be reduced to the identity matrix as follows:
\[
{\bf A} = \frac{1}{2} \begin{bmatrix} -1 & \sqrt{3} \\ \sqrt{3} & 1 \end{bmatrix} \,\rightarrow \, \begin{bmatrix} 1 & -\sqrt{3} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{bmatrix} \,\rightarrow \, \begin{bmatrix} 1 & -\sqrt{3} \\ 0 & 2 \end{bmatrix} \,\rightarrow \, \begin{bmatrix} 1 & -\sqrt{3} \\ 0 & 1 \end{bmatrix} \,\rightarrow \, \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} .
\]
First, we multiply the first row by −2; then we add −√3/2 times the first row to the second row; next we multiply the second row by ½; finally, we add √3 times the second row to the first row.
End of Example 4
Example 5:
In order to rotate by angle θ = 5π/4, we apply formula \eqref{EqPlane.2}:
\[
{\bf A} = \begin{bmatrix} - \frac{1}{\sqrt{2}} & \phantom{-}\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} -1 & \phantom{-}1 \\ -1 & -1 \end{bmatrix} .
\]
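The rotation matrix for θ = 5π/4 can be checked numerically in Python (a sketch; the general counterclockwise rotation matrix [[cos θ, −sin θ], [sin θ, cos θ]] is the standard one):

```python
import math

def rotation(theta):
    """Counterclockwise rotation matrix [[cos, -sin], [sin, cos]]."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

A = rotation(5*math.pi/4)
# cos(5pi/4) = sin(5pi/4) = -1/sqrt(2), so every entry of A has
# magnitude 1/sqrt(2): A = (1/sqrt 2) * [[-1, 1], [-1, -1]].
r = 1/math.sqrt(2)
```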
End of Example 5
Orthogonal projection
Suppose that a vector u ∈ ℝ² is given. It generates the straight line L = span(u) formed by all scalar multiples of u. Then an arbitrary vector v ∈ ℝ² can be uniquely decomposed into the sum of two vectors:
\[
{\bf v} = {\bf v}_{\parallel} + {\bf v}_{\perp} , \qquad {\bf v}_{\parallel} = \frac{{\bf v} \bullet {\bf u}}{\| {\bf u} \|^2}\, {\bf u} .
\]
Here v • u = v₁u₁ + v₂u₂ is the dot product and \( \displaystyle \| {\bf u} \|^2 = u_1^2 + u_2^2 \) is the square of the Euclidean norm of the vector u. The vector v∥ is called the projection of v onto the line L and is denoted Pu(v).
Note that any vector lying in the line (in ℝ²) or plane (in ℝ³) perpendicular to u is annihilated by the projection: its image on L is the zero vector.
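The projection formula Pu(v) = ((v • u)/‖u‖²) u translates directly into Python (illustrative values; the page's own code is Mathematica):

```python
def project(v, u):
    """Orthogonal projection of v onto the line spanned by u:
    P_u(v) = ((v . u) / ||u||^2) * u."""
    dot = v[0]*u[0] + v[1]*u[1]
    norm2 = u[0]*u[0] + u[1]*u[1]
    t = dot / norm2
    return (t*u[0], t*u[1])

# Projecting (3, 4) onto the x-axis keeps (3, 0); the perpendicular
# component (0, 4) is annihilated.
p = project((3.0, 4.0), (1.0, 0.0))

# A vector perpendicular to u projects to the zero vector.
z = project((0.0, 4.0), (1.0, 0.0))
```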
Find the standard matrix for a linear transformation T: ℝ²
↦ ℝ² that first reflects points through the horizontal
x1-axis and then reflects points through the line
x1 = x2.
Find the standard matrix for a linear transformation T: ℝ²
↦ ℝ² that first rotates points through -3π/4 radian
(clockwise) and then reflects points through the vertical x2-axis.
Find the standard matrix for a linear transformation T: ℝ²
↦ ℝ² that maps i=(1,0) into 2i-3j but leaves
the vector j=(0,1) unchanged.
Find the standard matrix for a linear transformation T: ℝ²
↦ ℝ² that rotates points (about the origin) through
3π/2 radians (counterclockwise).
In ℝ², clearly R(θ+φ) = R(θ) R(φ). By writing out these matrices and performing matrix multiplication, derive the laws for the sine and cosine of the sum of two angles.
If you need the formulas for sin(θ + π/2) and cos(θ + π/2) and don't remember them, what is a simple way to find them?
Find all 2 × 2 rotation matrices that are also diagonal.
In ℝ², if the list of vertices of a square starts with (0, 0) and (𝑎, b) going counterclockwise, what are the remaining two vertices? (Hint: The vertex opposite (𝑎, b) can be obtained by rotating (𝑎, b) by 90° about the origin.)
Find the standard matrix for a linear transformation T: ℝ²
↦ ℝ² that rotates points (about the origin) through -π/4
radians (clockwise).
If \( {\bf A} = \begin{bmatrix} 1&2 \\ -1&-2
\end{bmatrix} , \) find two matrices B ≠ C such that
AB = AC.
Suppose that the numbers 𝑎 and b in the matrix \( \displaystyle \quad \begin{bmatrix} \phantom{-}a&b \\ -b&a \end{bmatrix} \quad \) are not both zero. Find the entries of the rotation matrix that takes (1, 0) to a unit vector in the direction of (𝑎, b). (You don't need to express the angle of the rotation.)
Show that the matrix \( \displaystyle \quad {\bf A} = \begin{bmatrix} \phantom{-}a&b \\ -b&a \end{bmatrix} \quad \) is equal to a rotation matrix times a scalar matrix rI with r > 0. (Hence, x ← x A preserves shapes and orientation while expanding or contracting sizes uniformly.)
Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
Dunn, F. and Parberry, I. (2002). 3D math primer for graphics and game development. Plano, Tex.: Wordware Pub.
Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1991), Computer Graphics: Principles and Practice (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-12110-7