Tensors in ℝ³

Mathematicians initially used calculus to study physical problems in terms of particular coordinate systems, and only later did they extend these local descriptions to global, coordinate-independent (i.e., geometric) ones. The main tool in this approach was the notion of a vector, because vectors can be defined independently of any coordinate system. Tensors arose as an extension of vectors, demanded by physics, to describe multilinear relationships between sets of algebraic objects associated with a vector space. While a vector specifies a magnitude and a particular direction at some point, a tensor gives a magnitude for each direction from this point. In this sense, scalar and vector fields are special cases of tensor fields.

A classic example of tensors comes from the description of pressure and tension at each point of a deformed elastic medium (see the next section). For the study of such deformations, a total of nine components is required at each point.

In this section, we consider the three-dimensional real space ℝ³, where every point is specified by three coordinates with respect to some coordinate system (not necessarily orthogonal), so its position vector is a linear combination

\[ {\bf x} = a_1 x^1 + a_2 x^2 + a_3 x^3 = \sum_{i=1}^3 a_i x^i , \]
for which we also use the Einstein summation convention, 𝐱 = aᵢxⁱ. The repeated index of summation is often referred to as a dummy index. Suppose the three variables x¹, x², x³ are transformed into a new set ξ¹, ξ², ξ³ through the following linear transformation:
\begin{align*} \xi^1 &= C_1^1 x^1 + C_2^1 x^2 + C_3^1 x^3 , \\ \xi^2 &= C_1^2 x^1 + C_2^2 x^2 + C_3^2 x^3 , \\ \xi^3 &= C_1^3 x^1 + C_2^3 x^2 + C_3^3 x^3 , \end{align*}
where the Cⁱⱼ are constants. This system of equations can be written in matrix/vector form:
\[ \begin{pmatrix} \xi^1 \\ \xi^2 \\ \xi^3 \end{pmatrix} = \begin{bmatrix} C_1^1 & C_2^1 & C_3^1 \\ C_1^2 & C_2^2 & C_3^2 \\ C_1^3 & C_2^3 & C_3^3 \end{bmatrix} \begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix} . \]
Using the summation convention, these equations collapse into a single formula:
\begin{equation} \label{EqTensor.1} \xi^i = C^i_j x^j , \qquad i,j = 1,2,3 . \end{equation}
The matrix [Cⁱⱼ] is called the matrix of the transformation \eqref{EqTensor.1}. Its determinant det [Cⁱⱼ] = |Cⁱⱼ| is assumed to be nonzero. We have
\begin{equation} \label{EqTensor.2} C_j^i = \frac{\partial \xi^i}{\partial x^j} , \qquad i,j = 1,2,3. \end{equation}
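To make this concrete, here is a minimal numerical sketch in Python (the particular matrix C below is an arbitrary invertible example chosen for illustration, not taken from the text):

```python
import numpy as np

# Hypothetical invertible transformation matrix [C^i_j] (illustration only).
C = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

x = np.array([1.0, 2.0, 3.0])    # old coordinates x^1, x^2, x^3

xi = C @ x                       # xi^i = C^i_j x^j  (Einstein summation)
print(xi)                        # new coordinates xi^1, xi^2, xi^3

# The transformation is admissible only when det[C^i_j] is nonzero.
print(np.linalg.det(C))          # 5.0 for this example
```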
Since the matrix [Cⁱⱼ] is invertible, the transformation \eqref{EqTensor.1} can be reversed:
\begin{equation} \label{EqTensor.3} \begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix} = \begin{bmatrix} c_1^1 & c_2^1 & c_3^1 \\ c_1^2 & c_2^2 & c_3^2 \\ c_1^3 & c_2^3 & c_3^3 \end{bmatrix} \begin{pmatrix} \xi^1 \\ \xi^2 \\ \xi^3 \end{pmatrix} , \end{equation}
where the matrix [cⁱⱼ] is the inverse of [Cⁱⱼ]. Therefore,
\begin{equation} \label{EqTensor.4} C^r_m c^m_s = \delta^r_s . \end{equation}
Transformation \eqref{EqTensor.3} can be written as
\begin{equation} \label{EqTensor.5} x^i = c^i_j \xi^j . \end{equation}
Then we have
\begin{equation} \label{EqTensor.6} c^i_j = \frac{\partial x^i}{\partial \xi^j} , \end{equation}
and
\begin{equation} \label{EqTensor.7} \det \left[ C^i_j \right] \cdot \det \left[ c^j_i \right] = 1. \end{equation}
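These two identities are easy to verify numerically; the following sketch reuses the hypothetical matrix C from above:

```python
import numpy as np

C = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

c = np.linalg.inv(C)                         # the inverse matrix [c^i_j]

# Eq. (4): C^r_m c^m_s = delta^r_s, i.e., the product is the identity.
print(np.allclose(C @ c, np.eye(3)))         # True

# Eq. (7): det[C^i_j] * det[c^j_i] = 1.
print(np.linalg.det(C) * np.linalg.det(c))   # 1.0 (up to rounding)
```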
Recall that the inverse of a matrix A is expressed through its cofactor matrix Δ:
\[ {\bf A}^{-1} = \frac{1}{\det{\bf A}} \left( \Delta \right)^{\mathrm T} , \]
where "T" stands for transposition. These seven relations, \eqref{EqTensor.1}–\eqref{EqTensor.7}, summarize the mathematics of linear transformations.

 

Orthogonal transformations


The group of invertible transformations (or matrices), called the general linear group GLₙ, has a very important subgroup Oₙ of orthogonal transformations or matrices: those matrices Q satisfying QᵀQ = I, or equivalently Q⁻¹ = Qᵀ, so that det Q = ±1.
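As a quick illustration (a rotation about the x³-axis, a standard element of O₃), one can verify the defining property numerically:

```python
import numpy as np

theta = np.pi / 6                # arbitrary rotation angle for illustration

# Rotation about the x^3-axis: an orthogonal matrix in O_3 (in fact SO_3).
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

print(np.allclose(Q.T @ Q, np.eye(3)))   # True: Q^T Q = I, so Q^{-1} = Q^T
print(np.linalg.det(Q))                  # +1 (orientation-preserving)
```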

Tensors of first order

Let us consider three functions
\[ \alpha^1 \left( x^1 , x^2 , x^3 \right) , \quad \alpha^2 \left( x^1 , x^2 , x^3 \right) , \quad \alpha^3 \left( x^1 , x^2 , x^3 \right) , \]
which we consider as the components of a 3-vector. This vector is called a contravariant vector, or a contravariant tensor of first order, if and only if its components transform according to the linear law
\[ a^r = \frac{\partial \xi^r}{\partial x^k}\, \alpha^k = C_k^r \alpha^k , \]
when the independent variables are transformed according to Eq.\eqref{EqTensor.1}, ξⁱ = Cⁱⱼxʲ. In matrix form, this can be written as
\[ \begin{pmatrix} a^1 \\ a^2 \\ a^3 \end{pmatrix} = \begin{bmatrix} C_1^1 & C_2^1 & C_3^1 \\ C_1^2 & C_2^2 & C_3^2 \\ C_1^3 & C_2^3 & C_3^3 \end{bmatrix} \begin{pmatrix} \alpha^1 \\ \alpha^2 \\ \alpha^3 \end{pmatrix} . \]
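For a linear change of coordinates the Jacobian ∂ξʳ/∂xᵏ is just the constant matrix C, so the contravariant law reduces to a matrix-vector product; a minimal sketch (with the same hypothetical C as before):

```python
import numpy as np

# For xi^i = C^i_j x^j the Jacobian d(xi^r)/d(x^k) is the constant matrix C.
C = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

alpha = np.array([1.0, -1.0, 2.0])   # components alpha^k in x-coordinates

a = C @ alpha                        # a^r = C^r_k alpha^k in xi-coordinates
print(a)                             # [1. 5. 3.]
```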

Contravariant vectors describe quantities whose distance units appear in the numerator (such as velocity), while covariant ones are those in which the distance units appear in the denominator (such as the gradient of a function).
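For comparison (a standard fact, stated here without derivation), covariant components βₖ transform with the inverse matrix:
\[ b_r = \frac{\partial x^k}{\partial \xi^r}\, \beta_k = c^k_r \beta_k . \]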
