which we prefer to write in the succinct form a • x = b, where a = (𝑎₁, 𝑎₂, … , 𝑎n) and x = (x₁, x₂, … , xn) are numerical vectors from 𝔽n, and b ∈ 𝔽 is a scalar.
The dot product or scalar product of two vectors of the same size
\( {\bf x} = \left[ x_1 , x_2 , \ldots , x_n \right] \) and
\( {\bf y} = \left[ y_1 , y_2 , \ldots , y_n
\right] \) (regardless of whether they are columns or rows or n-tuples) is the number,
denoted by x • y,
\begin{equation} \label{EqDot.1}
{\bf x} \bullet {\bf y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n .
\end{equation}
The dot product is not defined for vectors of different dimensions. This definition is valid not only for n-tuples (elements of 𝔽n), but also for column vectors and row vectors.
Mathematica does not distinguish rows from columns, so the dot product can be computed with either of two equivalent Mathematica commands:
a = {1, 2, 3}; b = {3, 2, 1};
Dot[a, b]
a . b
The dot product was first introduced by the American physicist and mathematician Josiah Willard Gibbs (1839--1903) in the 1880s.
Many years before Gibbs's definition, the ancient Greeks knew the geometric fact that the sum of the products of the corresponding entries of two sequences of numbers equals the product of their magnitudes and the cosine of the angle between them. This observation leads to a metric (length, or distance) on the Cartesian product ℝ³, turning it into the Euclidean space.
Originally, it was the three-dimensional physical space, but in modern mathematics there are Euclidean spaces of any positive integer dimension n, which are called Euclidean n-spaces.
At the beginning of the twentieth century, it was discovered that the dot product is needed for the definition of dual spaces (see the corresponding section in Part 3). The left-hand side of Eq.(1) then defines a linear functional on any n-dimensional vector space, independently of the field used (ℂ or ℝ).
One of the multipliers in Eq.(1), say x, is then called a vector, while its counterpart y is known as a covector. This treatment of the dot product breaks the symmetry between its two arguments; in many practical problems the vectors x and y are indeed different objects, even though they sometimes look the same.
In geometry, to distinguish these two partners in Eq. (1), the vector x is called a contravariant vector, and the covector y is referred to as a covariant vector. To tell the partners apart, it is common to use superscripts for the coordinates of a contravariant vector, x = [x¹, x², x³], and subscripts for those of a covariant vector, y = [y₁, y₂, y₃]. In physics, covariant vectors are also called bra-vectors, while contravariant vectors are known as ket-vectors.
However, a vector space, by definition, carries no metric, although a metric is a very desirable property. It turns out that the scalar product can be used to define the length of vectors and the distance between them, turning ℝn into a metric space, known as the Euclidean space.
In 1912, the Hungarian mathematician Frigyes Riesz established an isomorphism between a Euclidean space and its dual space. His result (which is also valid for some infinite-dimensional spaces) restores equal rights between vectors and covectors, but under a new marriage certificate---known as the inner product, which is our next topic to discuss.
The following basic properties of the dot product are important. They are all easily proven from the definition above. In these properties, u, v, and w are n-dimensional vectors, and λ is a number (scalar):
u • v = v • u (commutative law);
(u + v) • w = u • w + v • w (distributive law);
(λ u) • v = λ (u • v).
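These laws are easy to spot-check numerically. Here is a minimal Python sketch (plain lists, no external libraries; the sample vectors are chosen arbitrarily):

```python
# Spot-check the dot product laws from Eq. (1) on sample vectors.
def dot(x, y):
    """Dot product of two equal-length numeric vectors."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same dimension")
    return sum(xi * yi for xi, yi in zip(x, y))

u, v, w = [1, 2, 3], [3, 2, 1], [-1, 0, 4]
lam = 5

assert dot(u, v) == dot(v, u)                    # commutative law
assert dot([a + b for a, b in zip(u, v)], w) == dot(u, w) + dot(v, w)  # distributive law
assert dot([lam * a for a in u], v) == lam * dot(u, v)  # scalars factor out
print(dot(u, v))  # 1*3 + 2*2 + 3*1 = 10
```

The same checks match the Mathematica computations shown above.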
Example 1:
■
End of Example 1
Euclidean Space
The invention of Cartesian coordinates in 1637 by René Descartes (Latinized name: Cartesius) revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra.
Example 2:
■
End of Example 2
Geometric Properties of the Dot Product
Geometrical analysis yields further interesting properties of the dot product
operation that can then be used in nongeometric applications. This takes a
little work.
Consider a fixed two-dimensional coordinate system with origin at point O. Let P = (px, py) and Q = (qx, qy) be two arbitrary points in the plane ℝ². When the Euclidean norm ‖·‖₂ is employed, we can define the distance from the origin to any point of the plane. For example, the distance from the origin to the point P is \( \| OP \|_2 = \sqrt{p_x^2 + p_y^2} . \)
The fundamental significance of the dot product is that it is linear in each argument. In particular, the function f(v) = u • v is a linear functional for any fixed vector u. The following famous theorem (proved independently by Frigyes Riesz and Maurice René Fréchet in 1907) establishes the converse: any linear functional T(v) corresponds to the dot product with some fixed weight vector u.
Riesz representation theorem:
Let f be a linear form on an n-dimensional vector space V. Then there exists a unique vector u such that f(v) = u • v for all v ∈ V.
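On ℝⁿ the representing vector is easy to construct: its coordinates are the values of f on the standard basis vectors. A small Python sketch (the functional f below is a hypothetical example chosen for illustration):

```python
# Riesz representation on R^n: a linear functional f equals
# dot-product-with-u, where u_i = f(e_i) on the standard basis.
def riesz_vector(f, n):
    """Recover the representing vector u of a linear functional f on R^n."""
    basis = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    return [f(e) for e in basis]

# A sample linear functional (hypothetical coefficients).
f = lambda v: 2 * v[0] - v[1] + 4 * v[2]

u = riesz_vector(f, 3)               # recovers [2, -1, 4]
v = [5, 7, -3]
assert f(v) == sum(ui * vi for ui, vi in zip(u, v))  # f(v) = u . v
```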
Application of the Dot Product: Weighted Sum
The dot product is very important in physics. Let us consider an example. In
classical mechanics it is true that the ‘work’ that is done when an object is moved
equals the dot product of the force acting on the object and the displacement vector:
\[
W = \mathbf{F} \bullet \mathbf{x} .
\]
The work W must of course be independent of the coordinate system in which the vectors F and x are expressed. The dot product as we know it from Eq.(1) does not have this property. In general, transforming coordinates with a matrix A, we have \( \left( A{\bf x} \right) \bullet \left( A{\bf y} \right) = {\bf x}^{\mathrm T} A^{\mathrm T} A \, {\bf y} \ne {\bf x} \bullet {\bf y} . \)
Only if A⁻¹ equals Aᵀ (i.e., if we are dealing with orthonormal transformations, so that AᵀA = I) will the dot product not change.
It appears as if the dot product only describes the
physics correctly in a special kind of coordinate system: a system which according to
our human perception is ‘rectangular’, and has physical units, i.e. a distance of 1 in
coordinate x
means indeed 1 meter in x-direction. An orthonormal transformation
produces again such a rectangular ‘physical’ coordinate system. If one has so far
always employed such special coordinates anyway, this dot product has always
worked properly.
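The invariance under orthonormal transformations, and its failure for other transformations, can be illustrated with a plane rotation (a minimal Python sketch; the vectors and angle are arbitrary):

```python
import math

# A rotation matrix A satisfies A^T A = I, so (Ax) . (Ay) = x . y.
def rotate(v, theta):
    """Rotate a 2D vector by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

x, y = [3.0, 1.0], [-2.0, 5.0]
theta = 0.7  # any angle
assert math.isclose(dot(rotate(x, theta), rotate(y, theta)), dot(x, y))

# A non-orthonormal scaling, by contrast, changes the dot product:
sx = [2 * c for c in x]
assert dot(sx, y) != dot(x, y)
```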
It is not always guaranteed that one can use such special coordinate systems (polar coordinates are an example in which the local orthonormal basis of vectors is not the coordinate basis). However, the pairing between a vector x and a covector y is invariant
under all transformations because this pairing defines a functional generated by the covector y; the given dot product is just one representation of this linear functional in particular coordinates. Making a linear transformation with matrix A, under which x ↦ Ax while the covector transforms as y ↦ (Aᵀ)⁻¹y, we get \( \left( \left( A^{\mathrm T} \right)^{-1} {\bf y} \right) \bullet \left( A {\bf x} \right) = {\bf y} \bullet {\bf x} . \)
The prime example of the dot operation is work, defined as the scalar product of force and displacement, \( W = |{\bf F}| \, |{\bf s}| \cos\theta . \) The presence of cos(θ) ensures the requirement that the work done by a force perpendicular to the displacement is zero.
The dot product is clearly commutative, 𝑎 • b = b • 𝑎. Moreover, it distributes over vector addition: \( {\bf a} \bullet \left( {\bf b} + {\bf c} \right) = {\bf a} \bullet {\bf b} + {\bf a} \bullet {\bf c} . \)
One can use the distributive property of the dot product to show that if (ax, ay, az) and (bx, by, bz) represent the components of a and b along the axes x, y, and z, then \( {\bf a} \bullet {\bf b} = a_x b_x + a_y b_y + a_z b_z . \)
Noting that |b| cos θ is simply the projection of b along a, we conclude that to find the projection of a vector b along another vector a, we take the dot product of b with \( \hat{\bf e}_a , \)
the unit vector along a.
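This projection recipe can be sketched in Python (the sample vectors are hypothetical; a points along the x-axis, so the projection of b along a is just the x-component of b):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def scalar_projection(b, a):
    """Length of the projection of b along a: b . e_a with e_a = a/|a|."""
    norm_a = math.sqrt(dot(a, a))
    return dot(b, a) / norm_a

a = [3.0, 0.0, 0.0]          # points along the x-axis
b = [2.0, 5.0, -1.0]
assert math.isclose(scalar_projection(b, a), 2.0)  # x-component of b
```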
The dot product of any two vectors of the same dimension can be computed with the Dot command, given as Dot[vector1, vector2], or with a period “.”:
{1,2,3}.{2,4,6}
28
Dot[{1,2,3},{3,2,1} ]
10
With the Euclidean norm ‖·‖₂, the dot product formula \( \cos\theta = \frac{{\bf x} \bullet {\bf y}}{\| {\bf x} \|_2 \, \| {\bf y} \|_2} \)
defines θ, the angle between two vectors.
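A short Python sketch of this angle computation (the clamp guards against floating-point rounding pushing the cosine slightly outside [-1, 1]):

```python
import math

def angle(x, y):
    """Angle (radians) between two vectors via cos(theta) = (x . y)/(|x| |y|)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(a * a for a in y))
    ratio = max(-1.0, min(1.0, dot / (nx * ny)))  # clamp rounding noise
    return math.acos(ratio)

assert math.isclose(angle([1, 0], [0, 1]), math.pi / 2)        # perpendicular
assert math.isclose(angle([1, 1], [2, 2]), 0.0, abs_tol=1e-7)  # parallel
```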
■
An inner product of two vectors of the same size, usually denoted by \( \left\langle {\bf x} , {\bf y} \right\rangle ,\) is a generalization of the dot product if it satisfies the following properties:
\( \left\langle {\bf u} , {\bf v} \right\rangle = \left\langle {\bf v} , {\bf u} \right\rangle \) (for complex spaces, \( \left\langle {\bf u} , {\bf v} \right\rangle = \overline{\left\langle {\bf v} , {\bf u} \right\rangle} \));
\( \left\langle {\bf u} + {\bf v} , {\bf w} \right\rangle = \left\langle {\bf u} , {\bf w} \right\rangle + \left\langle {\bf v} , {\bf w} \right\rangle ; \)
\( \left\langle \lambda {\bf u} , {\bf v} \right\rangle = \lambda \left\langle {\bf u} , {\bf v} \right\rangle \) for every scalar λ;
\( \left\langle {\bf v} , {\bf v} \right\rangle \ge 0 , \) with equality if and only if
\( {\bf v} = {\bf 0} . \)
The fourth condition in the list above is known as the positive-definite condition. A vector space together with an inner product is called an inner product space. Every inner product space is a metric space; the norm and metric are given by \( \| {\bf v} \| = \sqrt{\left\langle {\bf v} , {\bf v} \right\rangle} , \quad d \left( {\bf u} , {\bf v} \right) = \| {\bf u} - {\bf v} \| . \)
The nonzero vectors u and v of the same size are orthogonal (or perpendicular) when their inner product is zero:
\( \left\langle {\bf u} , {\bf v} \right\rangle = 0 . \) We abbreviate it as \( {\bf u} \perp {\bf v} . \)
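With the ordinary dot product as the inner product, orthogonality is a one-line check (sample vectors chosen for illustration):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

u, v = [1, 2, -1], [3, -1, 1]
assert dot(u, v) == 0   # 3 - 2 - 1 = 0, so u is perpendicular to v
```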
If A is an n × n positive definite matrix and u and v are n-vectors, then we can define the weighted Euclidean inner product \( \left\langle {\bf u} , {\bf v} \right\rangle = {\bf v}^{\mathrm T} A \, {\bf u} . \)
In particular, if w1, w2, ... , wn are positive real numbers,
which are called weights, and if u = ( u1, u2, ... , un) and
v = ( v1, v2, ... , vn) are vectors in ℝn, then the formula \( \left\langle {\bf u} , {\bf v} \right\rangle = w_1 u_1 v_1 + w_2 u_2 v_2 + \cdots + w_n u_n v_n \)
defines an inner product on \( \mathbb{R}^n , \) that is called the weighted Euclidean inner product with weights
w1, w2, ... , wn.
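A minimal Python sketch of the weighted Euclidean inner product (the weights below are hypothetical positive numbers; with all weights equal to 1 it reduces to the ordinary dot product):

```python
def weighted_inner(u, v, w):
    """Weighted Euclidean inner product sum_k w_k u_k v_k, w_k > 0."""
    if any(wi <= 0 for wi in w):
        raise ValueError("all weights must be positive")
    return sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))

u, v = [1, 2, 3], [4, 5, 6]
w = [2, 1, 3]                                # hypothetical positive weights
assert weighted_inner(u, v, w) == 8 + 10 + 54     # = 72
assert weighted_inner(u, v, [1, 1, 1]) == 32      # ordinary dot product
```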
Example 4:
The Euclidean inner product and the weighted Euclidean inner product (when \( \left\langle {\bf u} , {\bf v} \right\rangle = \sum_{k=1}^n a_k u_k v_k , \)
for some positive numbers \( a_k , \ (k=1,2,\ldots , n) \)) are special cases of a general class
of inner products on \( \mathbb{R}^n \) called matrix inner products. Let A be an
invertible n-by-n matrix. Then the formula \( \left\langle {\bf u} , {\bf v} \right\rangle = \left( A {\bf u} \right) \bullet \left( A {\bf v} \right) \) defines an inner product.
In Mathematica, the outer product has a special command:
Outer[Times, {1, 2, 3, 4}, {a, b, c}]
Out[1]= {{a, b, c}, {2 a, 2 b, 2 c}, {3 a, 3 b, 3 c}, {4 a, 4 b, 4 c}}
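The same outer product can be formed in Python with a nested comprehension (a sketch; numeric stand-ins replace Mathematica's symbols a, b, c):

```python
# Outer product of a 4-vector and a 3-vector: the 4-by-3 table x_i * y_j.
xs = [1, 2, 3, 4]
ys = [10, 20, 30]   # numeric stand-ins for the symbols a, b, c
outer = [[x * y for y in ys] for x in xs]

assert outer[0] == [10, 20, 30]     # matches the row {a, b, c}
assert outer[3] == [40, 80, 120]    # matches the row {4 a, 4 b, 4 c}
```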
Applications in Physics
Vector and scalar products are intimately
associated with a variety of physical concepts. For example, the work done
by a force applied at a point is defined as the product of the displacement
and the component of the force in the direction of displacement (i.e., the projection of the force onto the direction of the displacement). Thus the
component of the force perpendicular to the displacement "does no work." If
F is the force and s the displacement, then the work W is by definition equal to
\[
W = F_{\parallel} s = F\,s\,\cos\left( {\bf F}, {\bf s} \right) = {\bf F} \bullet {\bf s} .
\]
Suppose the force makes an obtuse angle with the displacement, so that the
force is "resistive." Then the work is regarded as negative, in keeping with
the formula above.
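A short Python sketch of this work computation (the force and displacement vectors are hypothetical examples):

```python
def work(F, s):
    """Work done by force F over displacement s: W = F . s."""
    return sum(f * d for f, d in zip(F, s))

# Force partly opposing the displacement (obtuse angle): negative work.
F = [-3.0, 1.0]
s = [2.0, 0.0]
assert work(F, s) == -6.0
# A force perpendicular to the displacement does no work:
assert work([0.0, 5.0], s) == 0.0
```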
What is the angle between the vectors i + j and i + 3j?
What is the area of the quadrilateral with vertices at (1, 1), (4, 2), (3, 7) and (2, 3)?
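Both exercises can be checked numerically; the sketch below uses the dot-product angle formula for the first and the shoelace formula for the second (treat it as a verification aid, not a worked solution):

```python
import math

# Angle between i + j and i + 3j via cos(theta) = (x . y)/(|x| |y|).
x, y = [1, 1], [1, 3]
cosang = (x[0] * y[0] + x[1] * y[1]) / (math.hypot(*x) * math.hypot(*y))
theta = math.degrees(math.acos(cosang))   # about 26.565 degrees

# Area of the quadrilateral (1,1), (4,2), (3,7), (2,3) by the shoelace formula.
pts = [(1, 1), (4, 2), (3, 7), (2, 3)]
area = abs(sum(px * qy - qx * py
               for (px, py), (qx, qy) in zip(pts, pts[1:] + pts[:1]))) / 2

assert math.isclose(theta, 26.565051177, abs_tol=1e-6)
assert area == 7.0
```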