
Dot Product

The dot product of two vectors of the same size \( {\bf x} = \left[ x_1 , x_2 , \ldots , x_n \right] \) and \( {\bf y} = \left[ y_1 , y_2 , \ldots , y_n \right] \) (regardless of whether they are columns, rows, or n-tuples) is the number, denoted by x • y, \[ {\bf x} \bullet {\bf y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n , \] when entries are real, or \[ {\bf x} \bullet {\bf y} = \overline{x_1} y_1 + \overline{x_2} y_2 + \cdots + \overline{x_n} y_n , \]

when entries are complex. Here \( \overline{x} = \overline{a + {\bf j}\, b} = a - {\bf j}\,b \) is the complex conjugate of a complex number x = a + jb.

The dot product is also known as the scalar product and it is a particular case of the inner product. Therefore, it is sometimes denoted as ⟨x , y⟩ in order to unify it with the inner product. Note that Mathematica does not distinguish rows from columns. We indicate the dot product by a large solid dot •, as above; this is not standard mathematical notation, but it is common in physics and engineering. The dot product is not defined for vectors of different dimensions.

Josiah Gibbs
The dot product was first introduced by the American physicist and mathematician Josiah Willard Gibbs (1839--1903) in the 1880s.

The following basic properties of the dot product are important. They are all easily proven from the above definition. In the following properties, u, v, and w are n-dimensional vectors, and λ is a number (scalar):

  • \( {\bf u} \bullet {\bf v} = \overline{{\bf v} \bullet {\bf u}} ; \)       (conjugate symmetry; for real vectors, u • v = v • u)
  • (u + v) • w = u • w + v • w;       (distributive)
  • \( (\lambda\, {\bf u}) \bullet {\bf v} = \overline{\lambda}\, ({\bf u} \bullet {\bf v}) \) and \( {\bf u} \bullet (\lambda\, {\bf v}) = \lambda\, ({\bf u} \bullet {\bf v}) ; \) for real scalars both reduce to λ (u • v).
Another obvious but important property is that the dot product of the i-th unit vector ei with a vector u = [u1, u2, … , un] picks out the i-th coordinate of u: ei • u = ui.
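The definition above, with conjugation of the first factor in the complex case, can be cross-checked outside Mathematica; here is a minimal Python sketch (the helper name `dot` is ours):

```python
def dot(x, y):
    """Dot product per the definition above: conjugate the entries of the
    first vector (a no-op for real entries), then sum the products."""
    if len(x) != len(y):
        raise ValueError("dot product requires vectors of the same size")
    return sum(xi.conjugate() * yi for xi, yi in zip(x, y))

# Real entries: a plain sum of products.
print(dot([1, 2, 3], [2, 4, 6]))    # 28
# Complex entries: conjugation makes dot(v, v) real and nonnegative.
v = [2, 1j, -2]
print(dot(v, v))                    # (9+0j)
```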

 

Geometric Properties of the Dot Product


Geometrical analysis yields further interesting properties of the dot product operation that can then be used in nongeometric applications. This takes a little work.

Consider a fixed two-dimensional coordinate system with origin at point O. Let P = (px, py) and Q = (qx, qy) be two arbitrary points in the plane ℝ². When the Euclidean norm ‖·‖2 is employed, we can define the distance from the origin to any point in the plane. For example, the distance from the origin to point P is

\[ \left\| \vec{OP} \right\| = \sqrt{p_x^2 + p_y^2} . \]

 

Dot Product and Linear Transformations


The fundamental significance of the dot product is that it defines a linear transformation of vectors: the function f(v) = u • v is a linear functional for any fixed vector u. The following famous theorem (proved independently by Frigyes Riesz and Maurice René Fréchet in 1907) establishes the converse: any linear functional T(v) corresponds to the dot product with a weight vector u.
Riesz representation theorem: Let f be a linear form on an n-dimensional vector space V. Then there exists a unique vector u such that f(v) = u • v for all v ∈ V.
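For a real n-dimensional space, the representing vector is easy to construct: its i-th coordinate is the value of the functional on the i-th unit vector. A Python sketch of this construction (the sample functional `f` is a made-up illustration):

```python
def riesz_vector(f, n):
    """Recover the vector u representing a linear functional f on R^n,
    using u_i = f(e_i) where e_i is the i-th unit vector."""
    u = []
    for i in range(n):
        e = [0.0] * n
        e[i] = 1.0
        u.append(f(e))
    return u

f = lambda v: 2*v[0] - v[1] + 5*v[2]    # a sample linear functional on R^3
u = riesz_vector(f, 3)
print(u)                                # [2.0, -1.0, 5.0]
# Check that f(v) = u . v on a sample vector.
v = [1.0, 4.0, -2.0]
assert f(v) == sum(ui*vi for ui, vi in zip(u, v))
```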

 

Applications of the Dot Product: Angle and Projection


We can use the dot product to find the angle between two vectors. From the definition of the dot product, we get

\[ {\bf a} \cdot {\bf b} = \langle {\bf a} , {\bf b} \rangle = \| {\bf a} \| \cdot \| {\bf b} \| \,\cos \theta , \]
where θ is the angle between two vectors a and b. If the vectors are nonzero, then
\[ \theta = \arccos \left( \frac{{\bf a} \cdot {\bf b}}{\| {\bf a} \| \cdot \| {\bf b} \| } \right) . \]

The prime example of the dot operation is work, which is defined as the scalar product of force and displacement. The presence of cos θ ensures that the work done by a force perpendicular to the displacement is zero.

The dot product of real vectors is clearly commutative, a · b = b · a. Moreover, it distributes over vector addition

\[ ({\bf a} + {\bf b}) \cdot {\bf c} = {\bf a} \cdot {\bf c} + {\bf b} \cdot {\bf c} . \]

One can use the distributive property of the dot product to show that if (ax, ay, az) and (bx, by, bz) represent the components of a and b along the axes x, y, and z, then

\[ {\bf a} \cdot {\bf b} = a_x b_x + a_y b_y + a_z b_z . \]
From the definition of the dot product, we can draw an important conclusion. If we divide both sides of a · b = |a| |b| cos θ by |a|, we get
\[ \frac{{\bf a} \cdot {\bf b}}{|{\bf a}|} = |{\bf b}|\,\cos\theta \qquad \iff \qquad \left( \frac{{\bf a}}{|{\bf a}|} \right) \cdot {\bf b} = \hat{\bf e}_a \cdot {\bf b} = |{\bf b}|\,\cos\theta \]
Noting that |b| cos θ is simply the projection of b along a, we conclude that to find the projection of a vector b along another vector a, take the dot product of b with \( \hat{\bf e}_a , \) the unit vector along a.
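This projection recipe is easy to verify numerically; here is an illustrative Python sketch (the function names are ours):

```python
import math

def scalar_projection(b, a):
    """|b| cos(theta): the dot product of b with the unit vector along a."""
    norm_a = math.sqrt(sum(ai*ai for ai in a))
    return sum(ai*bi for ai, bi in zip(a, b)) / norm_a

def vector_projection(b, a):
    """The projection of b onto the line spanned by a."""
    norm_a2 = sum(ai*ai for ai in a)
    c = sum(ai*bi for ai, bi in zip(a, b)) / norm_a2
    return [c*ai for ai in a]

# Projecting (3, 4) onto the x-axis picks out the x-component.
print(scalar_projection([3.0, 4.0], [1.0, 0.0]))   # 3.0
print(vector_projection([3.0, 4.0], [1.0, 0.0]))   # [3.0, 0.0]
```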

The dot product of any two vectors of the same dimension can be computed in Mathematica with the command Dot[vector1, vector2] or with a period “.”.

{1,2,3}.{2,4,6}
28
Dot[{1,2,3},{3,2,1} ]
10
With Euclidean norm ‖·‖2, the dot product formula
\[ {\bf x} \cdot {\bf y} = \| {\bf x} \|_2 \, \| {\bf y} \|_2 \, \cos \theta , \]
defines θ, the angle between two vectors. ■

An inner product of two vectors of the same size, usually denoted by \( \left\langle {\bf x} , {\bf y} \right\rangle ,\) is a generalization of the dot product if it satisfies the following properties:

  • \( \left\langle {\bf v}+{\bf u} , {\bf w} \right\rangle = \left\langle {\bf v} , {\bf w} \right\rangle + \left\langle {\bf u} , {\bf w} \right\rangle . \)
  • \( \left\langle {\bf v} , \alpha {\bf u} \right\rangle = \alpha \left\langle {\bf v} , {\bf u} \right\rangle \) for any scalar α.
  • \( \left\langle {\bf v} , {\bf u} \right\rangle = \overline{\left\langle {\bf u} , {\bf v} \right\rangle} , \) where overline means complex conjugate.
  • \( \left\langle {\bf v} , {\bf v} \right\rangle \ge 0 , \) and equal if and only if \( {\bf v} = {\bf 0} . \)

The fourth condition in the list above is known as the positive-definite condition. A vector space together with the inner product is called an inner product space. Every inner product space is a metric space. The metric or norm is given by

\[ \| {\bf u} \| = \sqrt{\left\langle {\bf u} , {\bf u} \right\rangle} . \]
The nonzero vectors u and v of the same size are orthogonal (or perpendicular) when their inner product is zero: \( \left\langle {\bf u} , {\bf v} \right\rangle = 0 . \) We abbreviate it as \( {\bf u} \perp {\bf v} . \)

If A is an n × n positive definite matrix and u and v are n-vectors, then we can define the weighted Euclidean inner product

\[ \left\langle {\bf u} , {\bf v} \right\rangle = {\bf A} {\bf u} \cdot {\bf v} = {\bf u} \cdot {\bf A}^{\ast} {\bf v} \qquad\mbox{and} \qquad {\bf u} \cdot {\bf A} {\bf v} = {\bf A}^{\ast} {\bf u} \cdot {\bf v} . \]
In particular, if w1, w2, ... , wn are positive real numbers, which are called weights, and if u = ( u1, u2, ... , un) and v = ( v1, v2, ... , vn) are vectors in ℝn, then the formula
\[ \left\langle {\bf u} , {\bf v} \right\rangle = w_1 u_1 v_1 + w_2 u_2 v_2 + \cdots + w_n u_n v_n \]
defines an inner product on \( \mathbb{R}^n , \) that is called the weighted Euclidean inner product with weights w1, w2, ... , wn.
Example 4: The Euclidean inner product and the weighted Euclidean inner product (when \( \left\langle {\bf u} , {\bf v} \right\rangle = \sum_{k=1}^n a_k u_k v_k \) for some positive numbers \( a_k , \ k=1,2,\ldots , n \)) are special cases of a general class of inner products on \( \mathbb{R}^n \) called matrix inner products. Let A be an invertible n-by-n matrix. Then the formula
\[ \left\langle {\bf u} , {\bf v} \right\rangle = {\bf A} {\bf u} \cdot {\bf A} {\bf v} = {\bf v}^{\mathrm T} {\bf A}^{\mathrm T} {\bf A} {\bf u} \]
defines an inner product generated by A.
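The connection between the two constructions can be checked numerically: the weighted Euclidean inner product is exactly the matrix inner product generated by the diagonal matrix A = diag(√w₁, …, √wₙ). An illustrative Python sketch (the helper names and sample values are ours):

```python
import math

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(aij*xj for aij, xj in zip(row, x)) for row in A]

def matrix_inner(A, u, v):
    """Inner product generated by an invertible matrix A: <u, v> = (A u) . (A v)."""
    Au, Av = matvec(A, u), matvec(A, v)
    return sum(a*b for a, b in zip(Au, Av))

w = [2.0, 3.0, 5.0]                       # positive weights
A = [[math.sqrt(w[i]) if i == j else 0.0 for j in range(3)] for i in range(3)]
u, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
weighted = sum(wi*ui*vi for wi, ui, vi in zip(w, u, v))
print(weighted)                           # 128.0
assert abs(matrix_inner(A, u, v) - weighted) < 1e-12
```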

Example 5: In the set of integrable functions on an interval [a,b], we can define the inner product of two functions f and g as
\[ \left\langle f , g \right\rangle = \int_a^b \overline{f} (x)\, g(x) \, {\text d}x \qquad\mbox{or} \qquad \left\langle f , g \right\rangle = \int_a^b f(x)\,\overline{g} (x) \, {\text d}x . \]
Then the norm \( \| f \| \) (also called the 2-norm or 𝔏² norm) becomes the square root of
\[ \| f \|^2 = \left\langle f , f \right\rangle = \int_a^b \left\vert f(x) \right\vert^2 \, {\text d}x . \]
In particular, the 2-norm of the function \( f(x) = 5x^2 +2x -1 \) on the interval [0,1] is
\[ \| 5 x^2 +2x -1 \| = \sqrt{\int_0^1 \left( 5x^2 +2x -1 \right)^2 {\text d}x } = \sqrt{7} . \]
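The value √7 can be verified with a crude midpoint quadrature; a Python sketch (the step count is an arbitrary choice):

```python
import math

def l2_norm(f, a, b, n=100000):
    """Approximate the L2 norm of f on [a, b] by a midpoint Riemann sum."""
    h = (b - a) / n
    return math.sqrt(h * sum(f(a + (k + 0.5)*h)**2 for k in range(n)))

f = lambda x: 5*x**2 + 2*x - 1
print(l2_norm(f, 0.0, 1.0))    # ≈ 2.6457513 = sqrt(7)
```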
Example 6: Consider a set of polynomials of degree n. If
\[ {\bf p} = p(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_n x^n \quad\mbox{and} \quad {\bf q} = q(x) = q_0 + q_1 x + q_2 x^2 + \cdots + q_n x^n \]
are two polynomials, and if \( x_0 , x_1 , \ldots , x_n \) are distinct real numbers (called sample points), then the formula
\[ \left\langle {\bf p} , {\bf q} \right\rangle = p(x_0 ) q(x_0 ) + p (x_1 )\, q(x_1 ) + \cdots + p(x_n ) q(x_n ) \]
defines an inner product, which is called the evaluation inner product at \( x_0 , x_1 , \ldots , x_n . \)
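The evaluation inner product translates directly into code; a Python sketch in which polynomials are represented as plain callables and the sample points are arbitrary distinct reals:

```python
def eval_inner(p, q, samples):
    """Evaluation inner product: sum of p(x_k) * q(x_k) over the sample points."""
    return sum(p(x) * q(x) for x in samples)

p = lambda x: x + 1          # p(x) = x + 1
q = lambda x: x * x          # q(x) = x^2
samples = [0.0, 1.0, 2.0]    # distinct sample points
print(eval_inner(p, q, samples))    # 1*0 + 2*1 + 3*4 = 14.0
```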

The invention of Cartesian coordinates in 1637 by René Descartes (Latinized name: Cartesius) revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra.

Example 7: What is the angle between i and i + j + 2k?
\begin{align*} \theta &= \arccos \left( \frac{{\bf i} \cdot ({\bf i} + {\bf j} + 2 {\bf k})}{\| {\bf i} \| \cdot \| {\bf i} + {\bf j} + 2 {\bf k} \| } \right) \\ &= \arccos \left( \frac{1}{\sqrt{6}} \right) \approx 1.15026. \end{align*}
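The same computation can be reproduced in Python from the arccos formula above:

```python
import math

def angle(a, b):
    """Angle between two nonzero vectors: arccos(a.b / (|a| |b|))."""
    d = sum(x*y for x, y in zip(a, b))
    na = math.sqrt(sum(x*x for x in a))
    nb = math.sqrt(sum(y*y for y in b))
    return math.acos(d / (na * nb))

print(angle([1.0, 0.0, 0.0], [1.0, 1.0, 2.0]))   # acos(1/sqrt(6)) ≈ 1.15026
```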

The outer product of two coordinate vectors \( {\bf u} = \left[ u_1 , u_2 , \ldots , u_m \right] \) and \( {\bf v} = \left[ v_1 , v_2 , \ldots , v_n \right] , \) denoted \( {\bf u} \otimes {\bf v} , \) is the m-by-n matrix W whose entries satisfy \( w_{i,j} = u_i v_j \) (or \( u_i \overline{v_j} \) in the complex case). The outer product \( {\bf u} \otimes {\bf v} \) is equivalent to the matrix multiplication \( {\bf u} \, {\bf v}^{\ast} \) (or \( {\bf u} \, {\bf v}^{\mathrm T} \) if the vectors are real), provided that u is represented as an \( m \times 1 \) column vector and v as an \( n \times 1 \) column vector. Here \( {\bf v}^{\ast} = \overline{{\bf v}^{\mathrm T}} . \)

For three-dimensional vectors \( {\bf a} = a_1 \,{\bf i} + a_2 \,{\bf j} + a_3 \,{\bf k} = \left[ a_1 , a_2 , a_3 \right] \) and \( {\bf b} = b_1 \,{\bf i} + b_2 \,{\bf j} + b_3 \,{\bf k} = \left[ b_1 , b_2 , b_3 \right] , \) it is possible to define a special multiplication, called the cross product:
\[ {\bf a} \times {\bf b} = \det \left[ \begin{array}{ccc} {\bf i} & {\bf j} & {\bf k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array} \right] = {\bf i} \left( a_2 b_3 - b_2 a_3 \right) - {\bf j} \left( a_1 b_3 - b_1 a_3 \right) + {\bf k} \left( a_1 b_2 - a_2 b_1 \right) . \]
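Expanding the determinant gives a three-line formula that is easy to code; an illustrative Python sketch, together with the defining perpendicularity check:

```python
def cross(a, b):
    """Cross product of two 3-vectors via the determinant expansion above."""
    return [a[1]*b[2] - b[1]*a[2],
            -(a[0]*b[2] - b[0]*a[2]),
            a[0]*b[1] - a[1]*b[0]]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
c = cross(a, b)
print(c)    # [-3.0, 6.0, -3.0]
# a x b is perpendicular to both a and b:
assert sum(x*y for x, y in zip(c, a)) == 0.0
assert sum(x*y for x, y in zip(c, b)) == 0.0
```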
Example: If m = 4 and n = 3, then
\[ {\bf u} \otimes {\bf v} = {\bf u} \, {\bf v}^{\mathrm T} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} = \begin{bmatrix} u_1 v_1 & u_1 v_2 & u_1 v_3 \\ u_2 v_1 & u_2 v_2 & u_2 v_3 \\ u_3 v_1 & u_3 v_2 & u_3 v_3 \\ u_4 v_1 & u_4 v_2 & u_4 v_3 \end{bmatrix} . \]
In Mathematica, the outer product has a special command:
Outer[Times, {1, 2, 3, 4}, {a, b, c}]
Out[1]= {{a, b, c}, {2 a, 2 b, 2 c}, {3 a, 3 b, 3 c}, {4 a, 4 b, 4 c}}
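The same outer product can be written in Python straight from the entrywise definition wij = ui vj (a sketch, with made-up sample vectors):

```python
def outer(u, v):
    """Outer product u (x) v: the m-by-n matrix with entries u_i * v_j."""
    return [[ui * vj for vj in v] for ui in u]

print(outer([1, 2, 3, 4], [10, 20, 30]))
# [[10, 20, 30], [20, 40, 60], [30, 60, 90], [40, 80, 120]]
```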


A generalized length function on a vector space can be imposed in many different ways, not necessarily through the inner product. What is important that this generalized length, called in mathematics a norm, should satisfy the following four axioms.

A norm on a vector space V is a nonnegative function \( \| \, \cdot \, \| \, : \, V \to [0, \infty ) \) that satisfies the following axioms for any vectors \( {\bf u}, {\bf v} \in V \) and arbitrary scalar k.
  1. \( \| {\bf u} \| \) is real and nonnegative;
  2. \( \| {\bf u} \| =0 \) if and only if u = 0;
  3. \( \| k\,{\bf u} \| = |k| \, \| {\bf u} \| ;\)
  4. \( \| {\bf u} + {\bf v} \| \le \| {\bf u} \| + \| {\bf v} \| . \)
With any positive definite (having positive eigenvalues) matrix one can define a corresponding norm through the weighted Euclidean inner product introduced above.

With the dot product, we can assign a length to a vector, also called the Euclidean norm or 2-norm:

\[ \| {\bf x} \|_2 = \| {\bf x} \| = \sqrt{ {\bf x}\cdot {\bf x}} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} . \]
This norm can be generalized to the p-norm for arbitrary real p ≥ 1: \[ \| {\bf x} \|_p = \left( |x_1|^p + |x_2|^p + \cdots + |x_n|^p \right)^{1/p} . \]

Example: Taking a vector \( {\bf v} = \left( 2, {\bf j} , -2 \right) , \) we calculate norms:
Norm[{2, \[ImaginaryJ], -2}]
Norm[{2, \[ImaginaryJ], -2}, 3/2]
Out[1]= 3
Out[2]= (1 + 4 Sqrt[2])^(2/3)
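The two norms above can be reproduced in Python from the p-norm formula; `abs` handles the complex entry (a cross-check of the Mathematica output, not part of the session):

```python
import math

def p_norm(x, p):
    """The p-norm (|x_1|^p + ... + |x_n|^p)^(1/p) for real p >= 1."""
    return sum(abs(xi)**p for xi in x) ** (1.0/p)

v = [2, 1j, -2]
print(p_norm(v, 2))      # 3.0, matching Norm[{2, I, -2}]
print(p_norm(v, 1.5))    # (1 + 4 Sqrt[2])^(2/3) ≈ 3.5387
```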
In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space—save for the zero vector, which is assigned a length of zero. On an n-dimensional complex space \( \mathbb{C}^n ,\) the most common norm is
\[ \| {\bf z} \| = \sqrt{ {\bf z}\cdot {\bf z}} = \sqrt{\overline{z_1} \,z_1 + \overline{z_2}\,z_2 + \cdots + \overline{z_n}\,z_n} = \sqrt{|z_1|^2 + |z_2 |^2 + \cdots + |z_n |^2} . \]
A unit vector u is a vector whose length equals one: \( {\bf u} \cdot {\bf u} =1 . \) We say that two vectors x and y are perpendicular if their dot product is zero. Many other norms are known.
         
 Augustin-Louis Cauchy    Viktor Yakovlevich Bunyakovsky    Hermann Amandus Schwarz
For any norm induced by an inner product, the Cauchy--Bunyakovsky--Schwarz (or simply CBS) inequality holds:
\[ | {\bf x} \cdot {\bf y} | \le \| {\bf x} \| \, \| {\bf y} \| . \]
The inequality for sums was published by the French mathematician and physicist Augustin-Louis Cauchy (1789--1857) in 1821, while the corresponding inequality for integrals was first proved by the Russian mathematician Viktor Yakovlevich Bunyakovsky (1804--1889) in 1859. The modern proof (essentially a repetition of Bunyakovsky's) of the integral inequality was given by the German mathematician Hermann Amandus Schwarz (1843--1921) in 1888. With the Euclidean norm, we can define the dot product as
\[ {\bf x} \cdot {\bf y} = \| {\bf x} \| \, \| {\bf y} \| \, \cos \theta , \]
where \( \theta \) is the angle between two vectors. ■
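The CBS inequality can be spot-checked numerically in Python; this is numerical evidence on random vectors, not a proof:

```python
import math
import random

def dot(x, y):
    return sum(a*b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

random.seed(1)
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(5)]
    y = [random.uniform(-1, 1) for _ in range(5)]
    # |x . y| <= |x| |y| (small slack for floating-point rounding)
    assert abs(dot(x, y)) <= norm(x) * norm(y) + 1e-12
print("CBS inequality verified on 1000 random pairs")
```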

Applications in Physics

Vector and scalar products are intimately associated with a variety of physical concepts. For example, the work done by a force applied at a point is defined as the product of the displacement and the component of the force in the direction of displacement (i.e., the projection of the force onto the direction of the displacement). Thus the component of the force perpendicular to the displacement "does no work." If F is the force and s the displacement, then the work W is by definition equal to
\[ W = F_{\parallel} s = F\,s\,\cos\left( {\bf F}, {\bf s} \right) = {\bf F} \bullet {\bf s} . \]
Suppose the force makes an obtuse angle with the displacement, so that the force is "resistive." Then the work is regarded as negative, in keeping with formula above.
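A numerical illustration of W = F • s in Python (the force and displacement values are made up for the example):

```python
import math

def work(F, s):
    """Work done by a constant force F over a displacement s: W = F . s."""
    return sum(f*d for f, d in zip(F, s))

# A force of magnitude 5 at 60 degrees above a horizontal displacement of length 10:
F = [5*math.cos(math.pi/3), 5*math.sin(math.pi/3)]
s = [10.0, 0.0]
print(work(F, s))              # 5 * 10 * cos(60 deg) = 25 (up to rounding)
# A force perpendicular to the displacement does no work:
print(work([0.0, 7.0], s))     # 0.0
```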

 

  1. What is the angle between the vectors i + j and i + 3j?
  2. What is the area of the quadrilateral with vertices at (1, 1), (4, 2), (3, 7) and (2, 3)?