The dot product of two vectors of the same size
\( {\bf x} = \left[ x_1 , x_2 , \ldots , x_n \right] \) and
\( {\bf y} = \left[ y_1 , y_2 , \ldots , y_n
\right] \) (regardless of whether they are columns or rows or n-tuples) is the number,
denoted by x • y,
\[
{\bf x} \bullet {\bf y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n ,
\]
when entries are real, or
\[
{\bf x} \bullet {\bf y} = \overline{x_1} y_1 + \overline{x_2} y_2 + \cdots + \overline{x_n} y_n ,
\]
when entries are complex.
Here \( \overline{x} = \overline{a + {\bf j}\, b} =
a - {\bf j}\,b = x^{\ast} \) is the complex conjugate of the complex number
x = a + jb.
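As a quick numeric check of the complex case (sketched in Python rather than Mathematica, purely for illustration), conjugating the entries of the first vector makes x • x real and nonnegative:

```python
def cdot(x, y):
    """Complex dot product: sum of conjugate(x_k) * y_k."""
    return sum(xk.conjugate() * yk for xk, yk in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]

print(cdot(x, y))   # (-1-2j)
print(cdot(x, x))   # (15+0j): |1+2j|^2 + |3-1j|^2 = 5 + 10
```

Without the conjugation, x • x could come out complex, and it could not serve as the squared length of x.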
The dot product is also known as the scalar product and it is a particular case of the inner product. Therefore, it is sometimes denoted as 〈x , y〉 in order to unify it with the inner product. Note that Mathematica does not distinguish rows from columns.
We indicate the dot product by a large solid dot •, as above; this is not standard mathematical notation, but it is common in physics and engineering. The dot product is not defined for vectors of different dimensions.
The dot product was first introduced by the American physicist and mathematician Josiah Willard Gibbs (1839--1903) in the 1880s.
The following basic properties of the dot product are important. They are all
easily proven from the above definition. In the following properties, u, v, and w are n-dimensional vectors, and λ is a number (scalar):
\( {\bf u} \bullet {\bf v} = {\bf v} \bullet {\bf u} ; \)
\( {\bf u} \bullet \left( {\bf v} + {\bf w} \right) = {\bf u} \bullet {\bf v} + {\bf u} \bullet {\bf w} ; \)
\( \left( \lambda\, {\bf u} \right) \bullet {\bf v} = \lambda \left( {\bf u} \bullet {\bf v} \right) ; \)
\( {\bf u} \bullet {\bf u} \ge 0 , \) with equality if and only if \( {\bf u} = {\bf 0} . \)
Another obvious but important property is that the dot product of a vector u = [u1, u2, … , un]
with the i-th unit vector ei is equal to the i-th coordinate of u: \( {\bf u} \bullet {\bf e}_i = u_i . \)
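These facts are easy to verify numerically; a small sketch in Python (rather than Mathematica, purely for illustration):

```python
def dot(u, v):
    """Real dot product of two same-length vectors."""
    return sum(a * b for a, b in zip(u, v))

u = [7, -2, 5]
v = [1, 4, -3]

# commutativity
print(dot(u, v) == dot(v, u))   # True

# dotting with the 2nd standard unit vector extracts the 2nd coordinate
e2 = [0, 1, 0]
print(dot(u, e2))               # -2
```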
Geometric Properties of the Dot Product
Geometrical analysis yields further interesting properties of the dot product
operation that can then be used in nongeometric applications. This takes a
little work.
Consider a fixed two-dimensional coordinate system with origin at point O. Let P = (px, py) and Q = (qx, qy) be two arbitrary points in the plane ℝ². When the Euclidean norm ‖·‖2 is employed, we can define the distance from the origin to any point in the plane. For example, the distance from the origin to point P is
\[
d(O, P) = \sqrt{p_x^2 + p_y^2} .
\]
The fundamental significance of the dot product is that it is a linear transformation of vectors. This means that the function f(v) = u • v is a linear functional for any fixed vector u. The following famous theorem (proved independently by Frigyes Riesz and Maurice René Fréchet in 1907) establishes the converse: any linear functional f(v)
corresponds to the dot product with a weight vector u.
Riesz representation theorem:
Let f be a linear form on an n-dimensional vector space V equipped with the dot product. Then there exists a unique vector u such that f(v) = u • v for all v ∈ V.
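In \( \mathbb{R}^n \) the representing vector can be constructed directly: its coordinates are the values of f on the standard unit vectors. A Python sketch with a made-up functional (names are ours, for illustration only):

```python
def f(v):
    # a hypothetical linear functional on R^3, for illustration only
    return 2 * v[0] - v[1] + 4 * v[2]

# the Riesz vector: u_i = f(e_i)
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u = [f(e) for e in basis]   # [2, -1, 4]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = [3, 5, -1]
print(f(v) == dot(u, v))    # True: f(v) = u . v for every v
```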
Application of the Dot Product: Weighted Sum
We can use the dot product to find the angle between two vectors. From the definition of the dot product, we get
\[
\cos\theta = \frac{{\bf u} \bullet {\bf v}}{\| {\bf u} \|_2 \, \| {\bf v} \|_2} .
\]
The prime example of the dot operation is work, which is defined as the scalar product of force and
displacement. The presence of cos(θ) ensures that the work
done by a force perpendicular to the displacement is zero.
The dot product is clearly commutative, \( {\bf a} \bullet {\bf b} = {\bf b} \bullet {\bf a} . \) Moreover, it distributes over vector addition:
\[
{\bf a} \bullet \left( {\bf b} + {\bf c} \right) = {\bf a} \bullet {\bf b} + {\bf a} \bullet {\bf c} .
\]
One can use the distributive property of the dot product to show that
if (ax, ay, az) and (bx, by, bz) represent the components of a and b along the
axes x, y, and z, then
\[
{\bf a} \bullet {\bf b} = a_x b_x + a_y b_y + a_z b_z .
\]
Noting that \( |{\bf b}| \cos \theta \) is simply the projection of b along a, we conclude that to find the projection of a vector b along another vector a, take the dot product of b with \( \hat{\bf e}_a , \)
the unit vector along a.
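This recipe can be sketched in Python (function and variable names are ours, for illustration):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def proj_along(b, a):
    """Scalar projection of b along a: the dot product of b with the unit vector e_a."""
    e_a = [x / math.sqrt(dot(a, a)) for x in a]
    return dot(b, e_a)

a = [3, 0]   # direction vector along the x-axis
b = [2, 5]
print(proj_along(b, a))   # 2.0: only the component of b along a survives
```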
The dot product of any two vectors of the same dimension can be computed with the command Dot[vector1, vector2] or with a period "." between the vectors:
{1,2,3}.{2,4,6}
28
Dot[{1,2,3}, {3,2,1}]
10
With the Euclidean norm ‖·‖2, the dot product formula
\[
\cos\theta = \frac{{\bf x} \bullet {\bf y}}{\| {\bf x} \|_2 \, \| {\bf y} \|_2}
\]
defines θ, the angle between two vectors.
■
An inner product of two vectors of the same size, usually denoted by \( \left\langle {\bf x} , {\bf y} \right\rangle ,\) is a generalization of the dot product. It must satisfy the following properties for all vectors u, v, w and every scalar λ:
\( \left\langle {\bf u} + {\bf v} , {\bf w} \right\rangle = \left\langle {\bf u} , {\bf w} \right\rangle + \left\langle {\bf v} , {\bf w} \right\rangle ; \)
\( \left\langle \lambda\, {\bf u} , {\bf v} \right\rangle = \lambda \left\langle {\bf u} , {\bf v} \right\rangle ; \)
\( \left\langle {\bf u} , {\bf v} \right\rangle = \overline{\left\langle {\bf v} , {\bf u} \right\rangle} ; \)
\( \left\langle {\bf v} , {\bf v} \right\rangle \ge 0 , \) with equality if and only if
\( {\bf v} = {\bf 0} . \)
The fourth condition in the list above is known as the positive-definite condition. A vector space together with the inner product is called an inner product space. Every inner product space is a metric space. The metric or norm is given by
\[
\| {\bf v} \| = \sqrt{\left\langle {\bf v} , {\bf v} \right\rangle} .
\]
The nonzero vectors u and v of the same size are orthogonal (or perpendicular) when their inner product is zero:
\( \left\langle {\bf u} , {\bf v} \right\rangle = 0 . \) We abbreviate it as \( {\bf u} \perp {\bf v} . \)
If A is an n × n positive definite matrix and u and v are n-vectors, then we can define the weighted Euclidean inner product
\[
\left\langle {\bf u} , {\bf v} \right\rangle = {\bf u}^{\mathrm{T}} A\, {\bf v} .
\]
In particular, if w1, w2, ... , wn are positive real numbers,
which are called weights, and if u = ( u1, u2, ... , un) and
v = ( v1, v2, ... , vn) are vectors in ℝn, then the formula
\[
\left\langle {\bf u} , {\bf v} \right\rangle = w_1 u_1 v_1 + w_2 u_2 v_2 + \cdots + w_n u_n v_n
\]
defines an inner product on \( \mathbb{R}^n , \) that is called the weighted Euclidean inner product with weights
w1, w2, ... , wn (this corresponds to the diagonal matrix A with entries w1, w2, ... , wn).
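A minimal Python sketch of the weighted formula (the vectors and weights are just samples):

```python
def weighted_inner(u, v, w):
    """Weighted Euclidean inner product with positive weights w."""
    assert all(wk > 0 for wk in w), "weights must be positive"
    return sum(wk * uk * vk for wk, uk, vk in zip(w, u, v))

u = [1, 2, 3]
v = [4, 0, -1]
w = [2, 1, 5]
print(weighted_inner(u, v, w))   # 2*1*4 + 1*2*0 + 5*3*(-1) = -7
print(weighted_inner(u, u, w))   # 2 + 4 + 45 = 51, positive as required
```

Positivity of the weights is exactly what makes the fourth (positive-definite) axiom hold.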
Example 4:
The Euclidean inner product and the weighted Euclidean inner product (when \( \left\langle {\bf u} , {\bf v} \right\rangle = \sum_{k=1}^n a_k u_k v_k \)
for some positive numbers \( a_k , \ k=1,2,\ldots , n \)) are special cases of a general class
of inner products on \( \mathbb{R}^n \) called matrix inner products. Let A be an
invertible n-by-n matrix. Then the formula
\[
\left\langle {\bf u} , {\bf v} \right\rangle = A\,{\bf u} \bullet A\,{\bf v}
\]
defines an inner product on \( \mathbb{R}^n , \) which is called the matrix inner product generated by A. ■
The invention of Cartesian coordinates in 1637 by René Descartes (Latinized name: Cartesius) revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra.
Example 7:
What is the angle between i and i + j + 2k?
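A worked solution: with \( \cos\theta = {\bf u} \bullet {\bf v} \big/ \left( \| {\bf u} \|_2 \, \| {\bf v} \|_2 \right) , \) we get
\[
\cos\theta = \frac{{\bf i} \bullet \left( {\bf i} + {\bf j} + 2\,{\bf k} \right)}{1 \cdot \sqrt{1 + 1 + 4}} = \frac{1}{\sqrt{6}} , \qquad \theta = \arccos \frac{1}{\sqrt{6}} \approx 65.9^{\circ} .
\]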
In Mathematica, the outer product has a special command:
Outer[Times, {1, 2, 3, 4}, {a, b, c}]
Out[1]= {{a, b, c}, {2 a, 2 b, 2 c}, {3 a, 3 b, 3 c}, {4 a, 4 b, 4 c}}
A generalized length function on a vector space can be imposed in many different ways, not necessarily through the
inner product. What is important is that this generalized length, called a norm in mathematics, should satisfy the
following four axioms.
A norm on a vector space V is a nonnegative function
\( \| \, \cdot \, \| \, : \, V \to [0, \infty ) \) that satisfies the following axioms for
any vectors \( {\bf u}, {\bf v} \in V \) and arbitrary scalar k:
\( \| {\bf v} \| \ge 0 ; \)
\( \| {\bf v} \| = 0 \) if and only if \( {\bf v} = {\bf 0} ; \)
\( \| k\,{\bf v} \| = |k| \, \| {\bf v} \| ; \)
\( \| {\bf u} + {\bf v} \| \le \| {\bf u} \| + \| {\bf v} \| \) (the triangle inequality).
With any positive definite (having positive eigenvalues) matrix one can define a corresponding norm.
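For example, a positive definite matrix A yields the norm \( \| {\bf v} \|_A = \sqrt{{\bf v}^{\mathrm T} A\, {\bf v}} ; \) a small Python sketch (the matrix and vector are illustrative only):

```python
import math

def quad_form(A, v):
    """v^T A v for a small dense matrix stored as nested lists."""
    n = len(v)
    return sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

A = [[2, 0], [0, 3]]   # positive definite: eigenvalues 2 and 3
v = [1, 2]
print(math.sqrt(quad_form(A, v)))   # sqrt(2*1 + 3*4) = sqrt(14)
```

With A the identity matrix this reduces to the usual Euclidean norm.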
In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space, except for the zero vector, which is assigned a length of zero.
On an n-dimensional complex space \( \mathbb{C}^n ,\) the most common norm is
\[
\| {\bf z} \|_2 = \left( |z_1|^2 + |z_2|^2 + \cdots + |z_n|^2 \right)^{1/2} .
\]
A unit vector u is a vector whose length equals one: \( {\bf u} \cdot {\bf u} =1 . \) We say that two vectors
x and y are perpendicular if their dot product is zero.
Many other norms are known.
For the Euclidean norm, the dot product satisfies the Cauchy--Bunyakovsky--Schwarz inequality
\[
\left\vert {\bf u} \bullet {\bf v} \right\vert \le \| {\bf u} \|_2 \, \| {\bf v} \|_2 .
\]
The inequality for sums was published by the French mathematician and physicist Augustin-Louis Cauchy (1789--1857) in 1821, while the corresponding
inequality for integrals was first proved by the Russian mathematician Viktor Yakovlevich Bunyakovsky (1804--1889) in 1859. The modern proof
of the integral inequality (essentially a repetition of Bunyakovsky's) was given by the German mathematician Hermann Amandus Schwarz (1843--1921) in 1888.
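A quick numeric check of the inequality for sums, \( \left\vert {\bf u} \bullet {\bf v} \right\vert \le \| {\bf u} \|_2 \, \| {\bf v} \|_2 \) (Python, arbitrary sample vectors):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = [1, -2, 4]
v = [3, 0, 5]
lhs = abs(dot(u, v))                                # |u . v| = 23
rhs = math.sqrt(dot(u, u)) * math.sqrt(dot(v, v))   # sqrt(21) * sqrt(34)
print(lhs <= rhs)   # True, as the inequality guarantees
```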
With the Euclidean norm, we can define the dot product as
\[
{\bf u} \bullet {\bf v} = \| {\bf u} \|_2 \, \| {\bf v} \|_2 \cos \theta ,
\]
where \( \theta \) is the angle between the two vectors. ■
Applications in Physics
Vector and scalar products are intimately
associated with a variety of physical concepts. For example, the work done
by a force applied at a point is defined as the product of the displacement
and the component of the force in the direction of displacement (i.e., the projection of the force onto the direction of the displacement). Thus the
component of the force perpendicular to the displacement "does no work." If
F is the force and s the displacement, then the work W is by definition equal to
\[
W = F_{\parallel} s = F\,s\,\cos\left( {\bf F}, {\bf s} \right) = {\bf F} \bullet {\bf s} .
\]
Suppose the force makes an obtuse angle with the displacement, so that the
force is "resistive." Then the work is regarded as negative, in keeping with
the formula above.
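Numerically (a Python sketch with made-up force and displacement vectors):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

F = [-2, 0, 1]   # force with a component opposing the motion
s = [5, 0, 0]    # displacement along the x-axis
print(dot(F, s))  # -10: the angle between F and s is obtuse, so W < 0
```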
What is the angle between the vectors i + j and i + 3j?
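A worked solution (assuming the vectors are u = i + j and v = i + 3j):
\[
\cos\theta = \frac{\left( {\bf i} + {\bf j} \right) \bullet \left( {\bf i} + 3\,{\bf j} \right)}{\sqrt{2}\,\sqrt{10}} = \frac{4}{\sqrt{20}} = \frac{2}{\sqrt{5}} , \qquad \theta = \arccos \frac{2}{\sqrt{5}} \approx 26.57^{\circ} .
\]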
What is the area of the quadrilateral with vertices at (1, 1), (4, 2), (3, 7) and (2, 3)?
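A worked solution, assuming the vertices are traversed in the order listed (so the quadrilateral is simple): by the shoelace formula,
\[
\mbox{Area} = \tfrac{1}{2} \left\vert x_1 (y_2 - y_4) + x_2 (y_3 - y_1) + x_3 (y_4 - y_2) + x_4 (y_1 - y_3) \right\vert = \tfrac{1}{2} \left\vert -1 + 24 + 3 - 12 \right\vert = 7 .
\]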