
This section is divided into a number of subsections, links to which are:

2D Rotations

3D Rotations

Euler Rotation Theorem

3D Rotation vs Shearing

Quaternions

Compositions

Cayley Representation

 

Every linear transformation T : 𝔽n ⇾ 𝔽n can be uniquely associated (relative to the standard basis) with a square matrix ⟦T⟧ ∈ 𝔽n×n, and vice versa. This one-to-one correspondence between linear transformations and square matrices allows us to identify every transformation with a matrix that acts on vectors by multiplication. Therefore, the analysis of matrices is equivalent to the study of linear transformations. Among all matrices/transformations there is a very important class consisting of the invertible matrices.

The general linear group associated with 𝔽n is the set of invertible matrices in 𝔽n×n under matrix multiplication. This group is usually denoted as GLn(𝔽) or just GLn.

In what follows, we consider rotations as elements of an orthogonal group over the field ℝ of real numbers. Matrices that correspond to positive (= counterclockwise) rotations provide a valuable example of a special class of matrices. The most general three-dimensional rotation matrix represents a counterclockwise rotation by an angle θ about a fixed axis that lies along the unit vector n, usually denoted by \( \displaystyle \hat{\bf n} = (n_x , n_y , n_z ). \)

The orthogonal group associated with 𝔽n is \[ O_n \left( \mathbb{F} \right) = \left\{ \mathbf{A} \in \mbox{GL}_n (\mathbb{F}) \ : \ \mathbf{A}^{\mathrm T} = \mathbf{A}^{-1} \right\} . \] The subgroup SO(n) consisting of orthogonal matrices with determinant +1 is called the special orthogonal group, and each of its elements is a special orthogonal matrix.
   
Example 1: Let us consider an arbitrary 2-by-2 matrix from O2(ℝ) \[ \mathbf{A} = \begin{bmatrix} a&b \\ c&d \end{bmatrix} , \] where 𝑎, b, c, and d are some real numbers. Then we have \[ \mathbf{A}\,\mathbf{A}^{\mathrm T} = \begin{bmatrix} a^2 + b^2 & ac + bd \\ ac + bd & c^2 + d^2 \end{bmatrix} \]
Clear[A, a, b, c, d]; A = {{a, b}, {c, d}};
A . A\[Transpose]
\( \displaystyle \quad \begin{pmatrix} a^2 + b^2 & a\, c + b\, d \\ a\, c + b\, d & c^2 + d^2\end{pmatrix} \)
Since the product of these two matrices must equal the identity matrix, we obtain the conditions: \begin{align*} a^2 + b^2 &= 1 , \\ c^2 + d^2 &= 1, \\ ac + bd &= 0. \end{align*} Since the first two equations say that the points (𝑎, b) and (c, d) lie on the unit circle, we can represent them trigonometrically: \[ a = \pm \cos\varphi , \quad b = \pm \sin\varphi , \quad c = \pm \cos \psi , \quad d= \pm\sin\psi . \] Here is how Mathematica approaches this
(* Define the angles *) \[Psi] = Symbol["\[Psi]"]; \[CapitalPsi] = Symbol["\[CapitalPsi]"];
(* Express a, b, c, and d in terms of trigonometric functions *) a = Cos[\[Psi]]; b = Sin[\[Psi]]; c = Cos[\[CapitalPsi]]; d = Sin[\[CapitalPsi]]; (* Verify the equations *) eq1 = a^2 + b^2 == 1
Cos[\[Psi]]^2 + Sin[\[Psi]]^2 == 1
eq2 = c^2 + d^2 == 1
Cos[\[CapitalPsi]]^2 + Sin[\[CapitalPsi]]^2 == 1
PowerExpand[Sqrt[eq1[[1, 1]]]]
TrueQ[% == a]
Cos[\[Psi]]
True
Piecewise[{{Cos[\[CapitalPsi]], Cos[\[CapitalPsi]] >= 0}, {-Cos[\[CapitalPsi]], Cos[\[CapitalPsi]] < 0}}]
\( \displaystyle \quad \left\{ \begin{array}{ll} \cos [\Psi ] & \cos [\Psi ] \ge 0 \\ - \cos [\Psi ] & \cos [\Psi ] < 0 \\ 0 & \mbox{True}\end{array} \right. \)
Effectively, there are two possibilities for the sign choices in the latter in order to satisfy 𝑎c + bd = 0. One is \[ \cos\varphi \,\cos\psi + \sin\varphi \,\sin\psi = \cos \left( \varphi - \psi \right) = 0 . \] The second is \[ \cos\varphi \,\cos\psi - \sin\varphi \,\sin\psi = \cos \left( \varphi + \psi \right) = 0 . \] In the first case, we can take φ − ψ = π/2 and use trigonometric identities to find that cos ψ = sin φ and sin ψ = − cos φ. We then have \[ \mathbf{A} = \begin{bmatrix} \cos\varphi & \sin\varphi \\ \sin\varphi & -\cos\varphi \end{bmatrix} , \qquad \det\mathbf{A} = -1. \]
A = {{Cos[\[Psi]], Sin[\[Psi]]}, {Sin[\[Psi]], -Cos[\[Psi]]}};
Det[A]
% // FullSimplify
Cos[\[Psi]]^2 - Sin[\[Psi]]^2
-1
The same approach applies in the second case. We can take φ + ψ = π/2, so that cos ψ = sin φ and sin ψ = cos φ; choosing the signs c = −cos ψ = −sin φ and d = sin ψ = cos φ then satisfies 𝑎c + bd = 0. This gives us \[ \mathbf{A} = \begin{bmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{bmatrix} , \qquad \det\mathbf{A} = 1. \]
A = {{Cos[\[Psi]], Sin[\[Psi]]}, {-Sin[\[Psi]], Cos[\[Psi]]}}
Det[A]
% // FullSimplify
Cos[\[Psi]]^2 + Sin[\[Psi]]^2
1
Some examples of matrices in O₂(ℝ), aside from the identity matrix I₂, are \[ \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} , \qquad \begin{bmatrix} 0&1 \\ -1& 0 \end{bmatrix} , \qquad \begin{bmatrix} 1&0 \\ 0& -1 \end{bmatrix} , \qquad \begin{bmatrix} -1&0 \\ 0& 1 \end{bmatrix} . \] Later, we will see that each element in O₂(ℝ) is either a rotation or an orthogonal reflection.    ■
End of Example 1
   An n-by-n special orthogonal matrix is considered a rotation matrix because it has orthonormal columns. The requirement that its determinant equal +1 can be interpreted as a higher-dimensional version of the right-hand rule. As a linear transformation, every special orthogonal matrix acts as a rotation in the counterclockwise direction. Moreover, rotation operators form the background of the Wigner–Eckart theorem in representation theory and quantum mechanics.

Rotations

An in-depth study of rotations could become a lifelong project, so we restrict ourselves to rotations in ℝ² and ℝ³. Unlike the two-dimensional case, rotations in ℝ³ do not commute. For instance, Ry(45°)∘Rx(90°) ≠ Rx(90°)∘Ry(45°), where Rx(θ) is the rotation around the x-axis by θ in ℝ³. (As usual, B∘A means the composition of action A followed by action B.) By a rotation, we always mean a linear transformation in either ℝ² or ℝ³.
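This non-commutativity is easy to verify numerically. The following sketch (written in Python for illustration; the worked examples in this section use Mathematica) builds the two basic rotation matrices and compares the two orders of composition:

```python
import math

def rot_x(theta):
    # rotation by theta about the x-axis
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(theta):
    # rotation by theta about the y-axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Ry(45°) ∘ Rx(90°) versus Rx(90°) ∘ Ry(45°)
AB = matmul(rot_y(math.pi / 4), rot_x(math.pi / 2))
BA = matmul(rot_x(math.pi / 2), rot_y(math.pi / 4))
print(any(abs(AB[i][j] - BA[i][j]) > 1e-9
          for i in range(3) for j in range(3)))  # True: the compositions differ
```

Swapping the order changes where the rotated axes land, which is exactly the failure of commutativity noted above.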

Any 3D isometry with a fixed point O has an invariant plane and an invariant axis, which intersect at a right angle at O. It is customary to define a rotation to be positive if it occurs in the counterclockwise direction. An orientation-preserving isometry with fixed point O is a 3D rotation through a certain angle around its invariant axis in its invariant plane.

The previous example shows that rotations on ℝ² are commutative. We treat the identity matrix as a rotation through 0 radians. With that, we can say that the rotations on ℝ² are the mappings in O₂(ℝ) with determinant 1. Since det(A B) = detA detB for matrices A and B when the product A B is defined, and since det(A−1) = 1/det(A) when det(A) ≠ 0, we see that the rotations on ℝ² form a subgroup of O₂(ℝ). That subgroup is denoted SO₂(ℝ) and is called the special orthogonal group on ℝ².

In general, the orthogonal group in dimension n has two connected components. The one that contains the identity element is a normal subgroup, called the special orthogonal group, and denoted SO(n). It consists of all orthogonal matrices of determinant 1. This group is also called the rotation group, generalizing the fact that in dimensions 2 and 3, its elements are the usual rotations around a point (in dimension 2) or a line (in dimension 3). In low dimensions, these groups have been widely studied. The other component consists of all orthogonal matrices of determinant −1. This component does not form a group, as the product of any two of its elements is of determinant 1, and therefore not an element of the component.
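The closing remark can be checked directly: multiplying two determinant −1 elements lands back in the identity component. Here is a minimal Python sketch (illustrative; the angle φ is an arbitrary choice) multiplying two planar reflections:

```python
import math

# two orthogonal matrices of determinant -1 (reflections in the plane)
F1 = [[1.0, 0.0], [0.0, -1.0]]            # reflection across the x-axis
phi = math.pi / 3
F2 = [[math.cos(phi), math.sin(phi)],
      [math.sin(phi), -math.cos(phi)]]    # reflection across a tilted line

prod = [[sum(F2[i][k] * F1[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
det = prod[0][0] * prod[1][1] - prod[0][1] * prod[1][0]
print(abs(det - 1.0) < 1e-9)  # True: the product has determinant +1, so it is a rotation
```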

Theorem 1: A transformation is orthogonal if and only if it preserves length and angle.
Let us first show that an orthogonal transformation preserves lengths and angles. If A is an orthogonal matrix, then its inverse is AT, so ATA = AAT = I, the identity matrix. Now, using the properties of the transpose together with ATA = I, we get ∥A x∥² = Ax • Ax = ATAx • x = Ix • x = x • x = ∥x∥² for all vectors x. Note that the action of the identity matrix I on a vector x is equivalent to multiplying the vector by 1.

TrueQ[Simplify[Transpose[A] . A == IdentityMatrix[Length[A]]]]
True

Let α be the angle between x and y and let β denote the angle between Ax and Ay. Using Ax • Ay = ATAx • y = x • y, we get |Ax| |Ay| cos(β) = Ax • Ay = x • y = |x| |y| cos(α). Because |Ax| = |x| and |Ay| = |y|, this means cos(α) = cos(β). As we have defined the angle between two vectors to be a number in [0, π] and cosine is monotonic on this interval, it follows that α = β.
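Both invariances admit a quick numerical sanity check, sketched here in Python (the angle θ and the vectors x and y are arbitrary choices):

```python
import math

theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]   # an orthogonal (rotation) matrix

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x, y = [3.0, -1.0], [0.5, 2.0]
Ax, Ay = apply(A, x), apply(A, y)
print(abs(dot(Ax, Ax) - dot(x, x)) < 1e-9)  # True: lengths are preserved
print(abs(dot(Ax, Ay) - dot(x, y)) < 1e-9)  # True: dot products (hence angles) are preserved
```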

For the converse: if A preserves angles and lengths, then v1 = Ae₁, … , vn = Aen form an orthonormal basis. Looking at B = ATA, whose (i, j) entry is vi • vj, this shows the off-diagonal entries of B are 0 and the diagonal entries of B are 1. Hence B = I and the matrix A is orthogonal.

   
Example 2: Here is an orthogonal matrix which is neither a rotation nor a reflection. It is an example of a partitioned matrix, a matrix made of matrices. This is a nice way to generate larger matrices with desired properties. The matrix \[ {\bf A} = \begin{bmatrix} \cos (1) & -\sin (1) &0&0 \\ \sin (1) & \cos (1) &0&0 \\ 0&0&\cos (3) & \sin (3) \\ 0&0&\sin (3) &-\cos (3) \end{bmatrix} \] produces a rotation in the xy-plane and a reflection in the zw-plane. It is not a reflection because A² is not the identity. Nor is it a rotation, because its determinant equals −1 rather than +1.

Here is how Mathematica constructs a larger matrix from smaller "block" matrices

blk1 = {{Cos[1], -Sin[1]}, {Sin[1], Cos[1]}}; blk2 = Array[0 &, {2, 2}]; blk3 = blk2; blk4 = {{Cos[3], Sin[3]}, {Sin[3], -Cos[3]}}; ArrayFlatten[{{blk1, blk2}, {blk3, blk4}}]
\( \displaystyle \quad \begin{pmatrix} \cos [1] & -\sin [1] &0&0 \\ \sin [1] & \cos [1] & 0&0 \\ 0&0& \cos [3] & \sin [3] \\ 0&0&\sin [3] & -\cos [3] \end{pmatrix} \)
   ■
End of Example 2
Theorem 2: The composition of two orthogonal transformations is orthogonal, and so is the inverse of an orthogonal transformation.
The properties of the transpose give (AB)T AB = BT AT AB = BT B = I so that AB is orthogonal if A and B are. The statement about the inverse follows from \[ \left( {\bf A}^{-1} \right)^{\mathrm T} {\bf A}^{-1} = \left( {\bf A}^{\mathrm T} \right)^{-1} {\bf A}^{-1} = \left( {\bf A}\, {\bf A}^{\mathrm T} \right)^{-1} = {\bf I} . \]
   
Example 3:
Clear[A, B, a, b, c, d, e, f, g, h]; orthogonalityConditions = {a^2 + c^2 == 1, ab + cd == 0, b^2 + d^2 == 1}; eqns = Reduce[{a^2 + c^2 == 1, ab + cd == 0, b^2 + d^2 == 1}]; TableForm[%[[#]] & /@ Range[3]]
ab == -cd
\( \displaystyle \quad a = - \sqrt{1 - c^2} \quad |\, | \ a = \sqrt{1-c^2} \)
\( \displaystyle \quad b= - \sqrt{1 - d^2} \quad |\, | \ b = \sqrt{1-d^2} \)
A = {{a, b}, {c, d}}; orthogonalityConditions = {a^2 + c^2 == 1, ab + cd == 0, b^2 + d^2 == 1}; AOrthogonalCheck = Transpose[A] . A == IdentityMatrix[2]; Simplify[AOrthogonalCheck, orthogonalityConditions]
{{1, a b + c d}, {a b + c d, 1}} == {{1, 0}, {0, 1}}
Note from equations above that ab + cd == 0
Simplify[eqns[[1]]]
ab + cd == 0

 

Numeric Example


We consider two rotation matrices that perform rotations by the angles π/4 and π/3.
Clear[A, B, \[Theta]1, \[Theta]2]; \[Theta]1 = Pi/4;(* 45 degrees *) \[Theta]2 = Pi/3;(* 60 degrees *) A = {{Cos[\[Theta]1], -Sin[\[Theta]1]}, {Sin[\[Theta]1], Cos[\[Theta]1]}}
\( \displaystyle \quad \begin{pmatrix} \frac{1}{\sqrt{2}} & - \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} \)
B = {{Cos[\[Theta]2], -Sin[\[Theta]2]}, {Sin[\[Theta]2], Cos[\[Theta]2]}}
\( \displaystyle \quad \begin{pmatrix} \frac{1}{2} & - \frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix} \)
Transpose[A] . A
\( \displaystyle \quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \)
Transpose[B] . B
\( \displaystyle \quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \)
Now we verify that the product of these two matrices is also orthogonal.
Transpose[A . B] . (A . B) // FullSimplify
\( \displaystyle \quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \)
   ■
End of Example 3

Rotation Matrices

The effect of rotating a vector by θ is the same as the effect obtained by rotation of the frame of reference. When point O on the axis of rotation is chosen, it can be used to define a frame (e₁, e₂, e₃). Let (u₁, u₂, u₃) be the image of the frame under a transformation (rotation in our case). We write every vector ui, i = 1, 2, 3, in coordinates of the original frame:

\[ {\bf u}_1 = \begin{bmatrix} r_{1,1} \\ r_{2,1} \\ r_{3,1} \end{bmatrix} = \begin{bmatrix} {\bf u}_1 \bullet {\bf e}_1 \\ {\bf u}_1 \bullet {\bf e}_2 \\ {\bf u}_1 \bullet {\bf e}_3 \end{bmatrix} , \quad {\bf u}_2 = \begin{bmatrix} r_{1,2} \\ r_{2,2} \\ r_{3,2} \end{bmatrix} = \begin{bmatrix} {\bf u}_2 \bullet {\bf e}_1 \\ {\bf u}_2 \bullet {\bf e}_2 \\ {\bf u}_2 \bullet {\bf e}_3 \end{bmatrix} , \]
and
\[ {\bf u}_3 = \begin{bmatrix} r_{1,3} \\ r_{2,3} \\ r_{3,3} \end{bmatrix} = \begin{bmatrix} {\bf u}_3 \bullet {\bf e}_1 \\ {\bf u}_3 \bullet {\bf e}_2 \\ {\bf u}_3 \bullet {\bf e}_3 \end{bmatrix} . \]
This allows us to construct the rotation matrix
\[ {\bf R} (\theta ) = [ r_{i,j} ] = \begin{bmatrix} {\bf u}_1 & {\bf u}_2 & {\bf u}_3 \end{bmatrix} . \]
So rotating a point is implemented by ordinary matrix multiplication. A rotation matrix is an array of nine numbers. These are subject to the six norm and orthogonality constraints, so only three degrees of freedom are left: if three of the numbers are given, the other six can be computed from these equations. In numerical optimization problems, the redundancy of rotation matrices is inconvenient, and a minimal representation of rotation is often preferable.
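The count of six constraints can be made concrete. The following Python sketch (illustrative; the angle is an arbitrary choice) checks the three unit-norm and three orthogonality conditions on the columns of a sample rotation matrix:

```python
import math

theta = math.pi / 6
# a sample rotation matrix: counterclockwise rotation by theta about the z-axis
R = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0, 0.0, 1.0]]

def col(j):
    return [R[i][j] for i in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# six constraints on the nine entries:
norms = [dot(col(j), col(j)) for j in range(3)]                      # three unit-norm conditions
orthos = [dot(col(i), col(j)) for i, j in [(0, 1), (0, 2), (1, 2)]]  # three orthogonality conditions
print(all(abs(n - 1) < 1e-12 for n in norms) and
      all(abs(o) < 1e-12 for o in orthos))  # True: all six constraints hold
```

Nine entries minus these six constraints leaves the three degrees of freedom mentioned above.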

Mathematica has its own command for rotation matrices. For instance,

RotationMatrix[\[Theta], {0, 0, 1}]
\( \displaystyle \quad \begin{pmatrix} \cos [\theta ] & -\sin [\theta ] & 0 \\ \sin [\theta ] & \cos [\theta ] & 0 \\ 0&0&1 \end{pmatrix} \)

The simplest such representation is based on Euler’s theorem, stating that every rotation can be described by an axis of rotation and an angle around it. A compact representation of axis and angle is a three-dimensional rotation vector whose direction is the axis and whose magnitude is the angle in radians. The axis is oriented so that the acute-angle rotation is counterclockwise around it. As a consequence, the angle of rotation is always nonnegative, and at most π.
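A standard way to convert an axis-angle pair back into a matrix is Rodrigues' formula, R = I + sin(θ) K + (1 − cos(θ)) K², where K is the cross-product matrix of the unit axis. Here is a Python sketch (illustrative; the function name rodrigues is our own):

```python
import math

def rodrigues(n, theta):
    """Rotation matrix for a counterclockwise rotation by theta about unit axis n,
    via Rodrigues' formula R = I + sin(theta) K + (1 - cos(theta)) K^2."""
    nx, ny, nz = n
    K = [[0, -nz, ny], [nz, 0, -nx], [-ny, nx, 0]]  # cross-product matrix of n
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    s, c1 = math.sin(theta), 1 - math.cos(theta)
    return [[I[i][j] + s * K[i][j] + c1 * K2[i][j] for j in range(3)]
            for i in range(3)]

# rotation by 90 degrees about the z-axis sends e1 to e2
R = rodrigues((0, 0, 1), math.pi / 2)
print(abs(R[0][0]) < 1e-9 and abs(R[1][0] - 1.0) < 1e-9)  # True
```

For n̂ = (0, 0, 1) this reproduces the z-axis rotation matrix returned by Mathematica's RotationMatrix above.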

Translation and rotation are the only rigid body transformations. Recall that a rigid body transformation is one that changes the location and orientation of an object, but not its shape. All angles, lengths, areas, and volumes are preserved by rigid body transformations. All rigid body transformations are orthogonal, angle-preserving, invertible, and affine.

Theorem 3: If M is a 3-by-3 real orthogonal matrix with determinant +1, then there is an orthonormal basis of ℝ³ such that M takes the form \[ \begin{bmatrix} 1 & 0 \\ 0 & {\bf R}(\theta ) \end{bmatrix} , \] where R(θ) is a rotation matrix in ℝ².
In dimensions higher than three, no direct analogue of Theorem 3 is valid. In three dimensions, rotations and special orthogonal matrices are the same.

The characteristic polynomial χM(λ) = det(λI − M) has at least one real root because it is a polynomial of degree three. Since the matrix M is orthogonal with determinant +1, all of its eigenvalues lie on the unit circle in the complex plane ℂ. This follows from the property that orthogonal matrices preserve lengths of vectors. At least one of the eigenvalues must be λ = 1: complex eigenvalues come in conjugate pairs whose product is +1, so the remaining real eigenvalue(s) must multiply to det(M) = +1, which for a matrix of odd dimension forces an eigenvalue equal to +1. The other two eigenvalues can be either real or complex, but always of magnitude 1. The eigenvector corresponding to λ = 1 defines the axis of rotation.

Suppose that the quadratic equation \[ \chi_M (\lambda )/(\lambda -1) = 0 \] has two real roots of magnitude 1. Then the eigenvalues of the matrix M are either (1, 1, 1) or (1, −1, −1). If the equation above instead has complex roots, a value z and its complex conjugate z*, then \[ \chi_M (\lambda ) = \det (\lambda {\bf I} - {\bf M} ) = \left(\lambda -1 \right) \cdot \left( \lambda - z \right) \cdot \left( \lambda - z^{\ast} \right) . \] To the pair of complex eigenvalues corresponds the eigenspace perpendicular to the axis of rotation; the matrix M acts on this eigenspace as a plane rotation.
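The claim that λ = 1 is always a root can be verified numerically: det(λI − M) vanishes at λ = 1 for a special orthogonal M. A Python sketch with a sample rotation about the z-axis (the angle is an arbitrary choice):

```python
import math

theta = 1.2
# a sample special orthogonal matrix: rotation by theta about the z-axis
M = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0, 0.0, 1.0]]

def det3(A):
    # determinant of a 3-by-3 matrix by cofactor expansion along the first row
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def char_poly(lam):
    # evaluates the characteristic polynomial det(lambda*I - M)
    A = [[lam * (i == j) - M[i][j] for j in range(3)] for i in range(3)]
    return det3(A)

print(abs(char_poly(1.0)) < 1e-12)  # True: lambda = 1 is an eigenvalue, as claimed
```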

From Euler's rotation theorem (see the Euler Rotation Theorem subsection), it follows that every spatial rotation in ℝ³ has a rotation axis, which we identify with a unit vector along the line of rotation \( \displaystyle \quad \hat{\bf n} = (n_x , n_y , n_z ). \quad \) Let O be the origin in ordinary Euclidean space ℝ³ and let θ denote the angle of rotation. We denote by rot(n, θ) the corresponding rotation around the axis identified with the unit vector n through the angle θ. This notation satisfies the following properties:

  • rot(−n, −θ) = rot(n, θ).
  • rot(n, θ + 2kπ) = rot(n, θ) for any integer k.
  • the angle θ can be restricted to θ ∈ [0, π]; this representation may be discontinuous at the endpoints,
  • when θ = 0, the rotation is the identity and the axis n is undetermined.

Fixed Points of Rotation Matrices

For a rotation matrix (or operator) R, the fixed-point equation R v = v is actually an eigenvector equation, meaning that λ = 1 is an eigenvalue of the rotation matrix. Since det(R) = 1, the rotation matrix in ℝ³ has two other complex-conjugate eigenvalues exp(±jϕ), where j is the imaginary unit in the complex plane ℂ, so j² = −1. The eigenvector of unit length corresponding to λ = 1, designated n, defines the axis of rotation. Every vector on this axis is a fixed point of the operator R, so such vectors are left unchanged by the displacement. The eigenvectors corresponding to the complex-conjugate pair span the plane normal to the axis of rotation.
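In practice the axis can be read off from the antisymmetric part of R, since R − Rᵀ has entries equal to 2 sin(θ) times the components of n̂. A Python sketch (illustrative; the sample axis and angle are arbitrary choices):

```python
import math

# a sample rotation: 40 degrees about the unit axis (1, 2, 2)/3
theta = math.radians(40)
c, s = math.cos(theta), math.sin(theta)
nx, ny, nz = 1/3, 2/3, 2/3
# standard axis-angle rotation matrix
R = [[c + nx*nx*(1 - c),     nx*ny*(1 - c) - nz*s,  nx*nz*(1 - c) + ny*s],
     [ny*nx*(1 - c) + nz*s,  c + ny*ny*(1 - c),     ny*nz*(1 - c) - nx*s],
     [nz*nx*(1 - c) - ny*s,  nz*ny*(1 - c) + nx*s,  c + nz*nz*(1 - c)]]

# R - R^T is antisymmetric with entries 2 sin(theta) * n; read the axis off it
v = (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])
axis = tuple(x / (2 * s) for x in v)
print(all(abs(a - b) < 1e-12 for a, b in zip(axis, (nx, ny, nz))))  # True
```

This recovery fails only when sin(θ) = 0, consistent with the axis being undetermined at θ = 0.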

There exists a basis in ℝ³ so that a rotation matrix has the form

\[ {\bf R} (\theta ) = \begin{bmatrix} 1 & 0 \\ 0 & {\bf R}_{2\times 2} \end{bmatrix} , \]
where R2×2 is a rotation matrix in ℝ².

By Euler’s theorem, any rotation matrix R is fully defined by the pair (ϕ, n), where ϕ ∈ [0, 2 π] is the angle of rotation about the axis with respect to the initial configuration. Since 2 scalar parameters are needed to represent a unit vector in ℝ³, Euler’s theorem implies that a generic rotation tensor can be described with no fewer than 3 scalar parameters (1 for the rotation angle and 2 for the unit vector along the axis), i.e., the dimension of the manifold of the rotation group SO(3). Any description of rotation employing a set made by 3 scalar parameters is termed minimal, while it is referred to as m-redundant when the set employs (3 + m) parameters. In the latter case, m scalar constraints must be enforced to ensure that the parameter set indeed corresponds to an element of SO(3).

In kinematics, Mozzi-Chasles’ fundamental theorem on rigid motion states: “any rigid motion may be represented by a planar rotation about a suitable axis, followed by a uniform translation along that same axis”. The proof that a spatial displacement can be decomposed into a rotation and slide around and along a line is attributed to the astronomer and mathematician Giulio Mozzi (1763), in fact the screw axis is traditionally called asse di Mozzi in Italy. However, most textbooks refer to a subsequent similar work by Michel Chasles dating from 1830.

Mozzi-Chasles’ theorem entails that a generic rigid displacement can be described with no fewer than 6 scalar parameters (3 for the rotation, 2 for the position of unit vector a, and 1 for the scalar translation), i.e., the dimension of the manifold of the Euclidean group SE(3). As for rotation, any description of motion employing 6 parameters is termed minimal, while it is referred to as redundant when employing more than 6 parameters.

Orientation in 3D

Although orientation is closely related to direction, it is not exactly the same. The fundamental difference between direction and orientation is seen conceptually by the fact that we can parameterize a direction in 3D with just two numbers (the spherical coordinate angles), whereas an orientation requires a minimum of three numbers. For example, a vector has a direction, but not an orientation---it can be twisted without changing its length and direction.

Twisting vector.

Typically, the orientation is given relative to a frame of reference, usually specified by a Cartesian coordinate system. Two objects sharing the same direction are said to be codirectional (as in parallel lines). Two directions are said to be opposite if they are the additive inverse of one another, as in an arbitrary unit vector and its multiplication by −1. Two directions are obtuse if they form an obtuse angle (greater than a right angle) or, equivalently, if their scalar product or scalar projection is negative.

Orientation cannot be described in absolute terms---you need to choose a frame first. Then an orientation is given by a rotation from some known reference orientation (often called the “identity” or “home” orientation). An example of an orientation is, “Standing upright and facing east.” The amount of rotation is known as an angular displacement. In other words, describing an orientation is mathematically equivalent to describing an angular displacement. You might also hear the word “attitude” used to refer to the orientation of an object, especially if that object is an aircraft.

  1. Show that GLn(𝔽) is not a vector space.
  2. Let \[ \mathbf{A} = \begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix} , \quad \mathbf{v} = \begin{bmatrix} 1 + \cos\theta \\ \sin\theta \end{bmatrix} , \quad \mathbf{x} = \begin{bmatrix} \sin\theta \\ 1 - \cos\theta \end{bmatrix} . \] Show that A v is a scalar multiple of v and that A x is a scalar multiple of x.
  3. Which of the following matrices is orthogonal? \[ ({\bf a})\quad \begin{bmatrix} 1&1&1&1 \\ 1&-1&1&-1 \\ 1&1&-1&-1 \\ 1&-1&-1&1 \end{bmatrix} , \qquad ({\bf b})\quad \begin{bmatrix} 0&0&1&0 \\ 0&-1&0&0 \\ -1&0&0&0 \\ 0&0&0&1 \end{bmatrix} , \] \[ ({\bf c})\quad \begin{bmatrix} 1&0&0 \\ 0&-1&1 \\ 0&0&-1 \end{bmatrix} , \qquad ({\bf d})\quad \begin{bmatrix} \cos (3) & -\sin (3) &0&0 \\ \sin (3) & \cos (3) &0&0 \\ 0&0& \cos (1) &\sin (1) \\ 0&0&\sin (1) & -\cos (1) \end{bmatrix} , \]
  4. For any real numbers p, q, r, and s, matrices of the form \( \displaystyle \quad \begin{bmatrix} p&-q&-r&-s \\ q&p&s&-r \\ r&-s&p&q \\ s&r&-q&p \end{bmatrix} . \quad \) are called quaternions. Find a basis for the set of quaternion matrices.
