
The Wolfram Mathematica notebook which contains the code that produces all the Mathematica output in this web page may be downloaded at this link.

We denote by 𝔽 one of the following sets of scalars: ℤ, the set of integers (which is actually a ring rather than a field); ℚ, the field of rational numbers; ℝ, the field of real numbers; or ℂ, the field of complex numbers. However, the definition of a norm involves only two of them, either ℝ or ℂ.

Dot Product

In previous sections we met many times a special linear combination of numerical vectors. For instance, consider a linear equation in n unknowns
\begin{equation} \label{EqDot.1} a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b , \end{equation}
which we prefer to write in the succinct form a • x = b, where a = (a₁, a₂, … , aₙ) and x = (x₁, x₂, … , xₙ) are numerical vectors from 𝔽ⁿ, and b ∈ 𝔽 is a scalar. Another widely used application of this peculiar linear combination is observed in multiplication of matrices. The following definition holds for the Cartesian form of the vectors only.
Let V and U be two vector spaces of the same finite dimension n over the same field of scalars 𝔽. Then these vector spaces are isomorphic to 𝔽ⁿ, so every vector can be uniquely identified by an n-tuple of numbers. The dot product or scalar product of two vectors of the same size \( {\bf x} = \left[ x_1 , x_2 , \ldots , x_n \right] \) and \( {\bf y} = \left[ y_1 , y_2 , \ldots , y_n \right] \) is the number from the field 𝔽, denoted by x • y, \begin{equation} \label{EqDot.2} {\bf x} \bullet {\bf y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n . \end{equation}
Remark:    Recall that two vector spaces V and U are isomorphic (denoted V ≅ U) if there is a bijective linear map between them. This bijection (a one-to-one and onto mapping) can be constructed by choosing ordered bases α = [ a₁, a₂, … , aₙ ] and β = [ b₁, b₂, … , bₙ ] in the vector spaces V and U, respectively. Then the components of every vector with respect to a chosen basis can be identified uniquely with an n-tuple. Therefore, the algebraic formula \eqref{EqDot.2} is essentially applied to two copies of the Cartesian product 𝔽ⁿ. The geometric interpretation of the dot product, which is coordinate-independent and therefore conveys invariant properties of this product, is given in the Euclidean space section.

Note:    The definition of the dot product does not prevent applying it to two distinct isomorphic versions of the Cartesian product 𝔽ⁿ. So you can find the dot product of a row vector with a column vector. However, we avoid writing it as matrix multiplication,

\[ \left[ x_1 , x_2 , \ldots , x_n \right] \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \left[ {\bf x} \bullet {\bf y} \right] \in \mathbb{F}^{1 \times 1} , \]
because the right-hand side is a 1×1 matrix, which a computer solver always treats differently from a scalar.    ▣

Mathematica does not distinguish rows from columns, so the dot product can be accomplished with either of two equivalent Mathematica commands:

a = {1, 2, 3}; b = {3, 2, 1};
Dot[a, b]
a . b
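Both commands return the same number, 10.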
Josiah Gibbs
The dot product was first introduced by the American physicist and mathematician Josiah Willard Gibbs (1839--1903) in the 1880s. Initially, the scalar product appeared in a pamphlet distributed to his students at Yale University. Gibbs's pamphlet was eventually incorporated into a book entitled Vector Analysis that was published in 1901 and coauthored with one of his students.

At the beginning of the twentieth century, it was discovered that the dot product is needed for the definition of dual spaces (see the corresponding section in Part 3). The left-hand side of Eq.\eqref{EqDot.1} defines a linear functional on any n-dimensional vector space, independently of which field (ℂ or ℝ) is used. Then one of the multipliers, say y in Eq.\eqref{EqDot.2}, is called a vector, while its counterpart x is known as a covector. Such treatment of the vectors in the dot product breaks their "equal rights"; in many practical problems these vectors x and y are indeed different, though they sometimes look the same.

In geometry, to distinguish these two partners in Eq.\eqref{EqDot.2}, the vector y is called a contravariant vector, and the covector x is referred to as a covariant vector. In order to tell these partners apart, it is common to use superscripts for the coordinates of a contravariant vector, y = [ y¹, y², y³ ], and subscripts for a covariant vector, x = [ x₁, x₂, x₃ ]. In physics, covariant vectors are also called bra-vectors, while contravariant vectors are known as ket-vectors.

   
Example 1:    Consider the vectors u = (1, −2, 4) and v = (2, 3, 1) from ℝ³, chosen here for illustration. Formula \eqref{EqDot.2} yields \[ {\bf u} \bullet {\bf v} = 1 \cdot 2 + (-2) \cdot 3 + 4 \cdot 1 = 2 - 6 + 4 = 0 , \] so these two vectors have zero dot product.
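We can confirm this computation with Mathematica:

u = {1, -2, 4}; v = {2, 3, 1};
Dot[u, v]
0
   ■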
End of Example 1

 

Properties of the Dot Product


The dot product is not defined for vectors of different dimensions. It does not matter whether the vectors are columns, rows, or n-tuples, so you can evaluate the dot product of a row vector with a column vector, as long as both come from vector spaces over the same field. Therefore, this definition is valid not only for n-tuples (elements of 𝔽ⁿ), but also for column vectors and row vectors.

The following basic properties of the dot product are important. They are all easily proven from the above definition. In the following properties, u, v, and w are n-dimensional vectors, and λ is a number (scalar); a numerical verification in Mathematica follows the list:

  1. u • v = v • u       (commutative law);
  2. (u + v) • w = u • w + v • w       (distributive law);
  3. (λ u) • v = λ (u • v);
  4. (u • v)² ≤ (u • u) · (v • v)     (Cauchy inequality);
  5. \( {\bf u} \bullet \left( {\bf A}\,{\bf v} \right) = \left( {\bf A}^{\mathrm T} {\bf u} \right) \bullet {\bf v} , \)     where \( {\bf A}^{\mathrm T} \) is the transpose of a square matrix A.
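All five properties can be checked numerically in Mathematica; the vectors u, v, w, the scalar λ, and the matrix A below are arbitrary sample choices:

u = {1, -2, 3}; v = {4, 0, -1}; w = {2, 5, 7}; lambda = 6;
A = {{1, 2, 0}, {3, -1, 4}, {0, 5, 2}};
u . v == v . u                          (* property 1 *)
(u + v) . w == u . w + v . w            (* property 2 *)
(lambda*u) . v == lambda*(u . v)        (* property 3 *)
(u . v)^2 <= (u . u)*(v . v)            (* property 4 *)
u . (A . v) == (Transpose[A] . u) . v   (* property 5 *)

Each of these commands returns True.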

Note:    The inequality in part (4) was first proved by the French mathematician, engineer, and physicist Baron Augustin-Louis Cauchy in 1821. In 1859, Viktor Bunyakovsky extended this inequality to its integral version (that is, to the case of continuous summation). Nearly thirty years later, in 1888, Hermann Schwarz established the general form of this inequality, valid in vector spaces endowed with an inner product. Therefore, this inequality is usually referred to as the Cauchy–Schwarz inequality, also the Cauchy–Bunyakovsky–Schwarz inequality, or simply the CBS inequality.

         
 Augustin-Louis Cauchy    Viktor Yakovlevich Bunyakovsky    Hermann Amandus Schwarz

  1. It is convenient to introduce the following notation:     ∥v∥² = v • v. The positive square root of this quantity is called the norm in mathematics. Then the Cauchy inequality can be rewritten as \[ \left\vert {\bf u} \bullet {\bf v} \right\vert \le \| {\bf u} \| \cdot \| {\bf v} \| . \] Suppose first that either u or v is zero. Then their dot product is zero and the Cauchy inequality holds.

    Now suppose that neither u nor v is zero. It follows that ∥u∥ > 0 and ∥v∥ > 0 because the dot product x • x > 0 for any nonzero vector x. We have \begin{align*} 0 &\le \left( \frac{{\bf u}}{\| {\bf u} \|} + \frac{{\bf v}}{\| {\bf v} \|} \right) \bullet \left( \frac{{\bf u}}{\| {\bf u} \|} + \frac{{\bf v}}{\| {\bf v} \|} \right) \\ &= \left( \frac{{\bf u}}{\| {\bf u} \|} \bullet \frac{{\bf u}}{\| {\bf u} \|} \right) + 2 \left( \frac{{\bf u}}{\| {\bf u} \|} \bullet \frac{{\bf v}}{\| {\bf v} \|} \right) + \left( \frac{{\bf v}}{\| {\bf v} \|} \bullet \frac{{\bf v}}{\| {\bf v} \|} \right) \\ &= \frac{1}{\| {\bf u} \|^2} \left( {\bf u} \bullet {\bf u} \right) + \frac{2}{\| {\bf u} \| \cdot \| {\bf v} \|} \left( {\bf u} \bullet {\bf v} \right) + \frac{1}{\| {\bf v} \|^2} \left( {\bf v} \bullet {\bf v} \right) \\ &= 1 + \frac{2}{\| {\bf u} \| \cdot \| {\bf v} \|} \left( {\bf u} \bullet {\bf v} \right) + 1 = 2 + \frac{2}{\| {\bf u} \| \cdot \| {\bf v} \|} \left( {\bf u} \bullet {\bf v} \right) . \end{align*} Hence,     −∥u∥ · ∥v∥ ≤ u • v. Similarly, \begin{align*} 0 &\le \left( \frac{{\bf u}}{\| {\bf u} \|} - \frac{{\bf v}}{\| {\bf v} \|} \right) \bullet \left( \frac{{\bf u}}{\| {\bf u} \|} - \frac{{\bf v}}{\| {\bf v} \|} \right) \\ &= \frac{1}{\| {\bf u} \|^2} \left( {\bf u} \bullet {\bf u} \right) - \frac{2}{\| {\bf u} \| \cdot \| {\bf v} \|} \left( {\bf u} \bullet {\bf v} \right) + \frac{1}{\| {\bf v} \|^2} \left( {\bf v} \bullet {\bf v} \right) \\ &= 2 - \frac{2}{\| {\bf u} \| \cdot \| {\bf v} \|} \left( {\bf u} \bullet {\bf v} \right) , \end{align*} so that     u • v ≤ ∥u∥ · ∥v∥. By combining the two inequalities, we obtain the Cauchy inequality.

A vector space, by definition, has no metric inside it, which is a very desirable property. It turns out that the scalar product can be used to define the length of a vector and the distance between vectors, turning ℝⁿ into a metric space known as Euclidean space. Upon introducing the norm (meaning length or magnitude) of a vector, \( \displaystyle \quad \| {\bf v} \| = +\sqrt{{\bf v} \bullet {\bf v}} , \quad \) the Cauchy inequality can be written as

\begin{equation} \label{EqDot.3} \left| \mathbf{u} \bullet \mathbf{v} \right\vert \leqslant \| \mathbf{u} \| \cdot \| \mathbf{v} \| . \end{equation}
   
Example 2:    Let us verify the Cauchy inequality \eqref{EqDot.3} for the sample vectors u = (3, −1, 2) and v = (1, 4, −2). Here \[ {\bf u} \bullet {\bf v} = 3 - 4 - 4 = -5 , \qquad {\bf u} \bullet {\bf u} = 14 , \qquad {\bf v} \bullet {\bf v} = 21 , \] so \( \left\vert {\bf u} \bullet {\bf v} \right\vert = 5 \le \sqrt{14 \cdot 21} \approx 17.15 , \) in agreement with the inequality.
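The same check in Mathematica:

u = {3, -1, 2}; v = {1, 4, -2};
Abs[u . v] <= Norm[u]*Norm[v]
True
(u . v)^2 <= (u . u)*(v . v)
True
   ■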
End of Example 2

 

Geometric Properties of the Dot Product


Many years before Gibbs's definition, the ancient Greeks discovered that, geometrically, the sum of the products of the corresponding entries of two sequences of numbers is equal to the product of their magnitudes and the cosine of the angle between them. This leads to the introduction of a metric (or length, or distance) in the Cartesian product ℝ³, turning it into Euclidean space. Originally, it was the three-dimensional physical space, but in modern mathematics there are Euclidean spaces of any positive integer dimension n, which are called Euclidean n-spaces.

Geometrical analysis yields further interesting properties of the dot product operation that can then be used in nongeometric applications. If we rewrite the Cauchy inequality as an equality with parameter k:

\[ \mathbf{u} \bullet \mathbf{v} = \| \mathbf{u} \| \cdot \| \mathbf{v} \| \, k \qquad (-1 \le k \le 1) , \]
it was discovered by the ancient Greeks that the parameter k has a geometric meaning in physical space (ℝ³ or ℝ²). This leads to the equation
\begin{equation} \label{EqDot.4} \mathbf{u} \bullet \mathbf{v} = \| \mathbf{u} \| \cdot \| \mathbf{v} \| \, \cos\theta , \end{equation}
where θ is the angle between vectors u and v.
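In Mathematica, this angle can be obtained either from formula \eqref{EqDot.4} or with the built-in command VectorAngle; the two vectors below are arbitrary samples:

u = {1, 2, 2}; v = {3, 0, 4};
VectorAngle[u, v]
ArcCos[11/15]
ArcCos[(u . v)/(Norm[u]*Norm[v])]
ArcCos[11/15]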

 

Dot Product and Linear Transformations


The fundamental significance of the dot product is that, with one factor held fixed, it acts linearly on the other. This means that the function f(v) = u • v is a linear functional for any fixed vector u.
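A numerical illustration of this linearity, with arbitrary sample vectors and scalars:

u = {2, -1, 5}; x = {1, 2, 3}; y = {0, 4, -2};
a = 3; b = -7;
u . (a*x + b*y) == a*(u . x) + b*(u . y)
True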

 

Dot Product in Coordinate Systems


The concepts of angle and radius were already used by the ancient Greek astronomer and astrologer Hipparchus (190–120 BC). Grégoire de Saint-Vincent and Bonaventura Cavalieri independently introduced the system's concepts in the mid-17th century, though the actual term polar coordinates has been attributed to Gregorio Fontana in the 18th century.

The polar coordinate system specifies a given point P(x, y) in a plane by using a distance r and an angle θ as its two coordinates (r, θ), where r is the point's distance from a reference point called the pole, and θ is the point's direction from the pole relative to the direction of the abscissa. The distance r from the pole is called the radial coordinate, radial distance or simply radius, and the angle θ is called the angular coordinate, polar angle, or azimuth.

The polar coordinates r and θ can be converted to the Cartesian coordinates x and y by using the trigonometric functions sine and cosine or complex numbers:

\[ \begin{split} x & = r\,\cos\theta , \\ y &= r\,\sin\theta , \end{split} \qquad\mbox{or}\qquad z = x + {\bf j}\,y = r\,e^{{\bf j}\,\theta} . \]
It makes no sense to define the dot product in polar coordinates by analogy with the Cartesian coordinates:
\[ \left( r_1 , \theta_1 \right) \bullet \left( r_2 , \theta_2 \right) \ne r_1 r_2 + \theta_1 \theta_2 \]
because the sum of products of components is not even dimensionally correct: the radial coordinates carry a dimension of length while the angles are dimensionless. Instead, we introduce the radius vector \( \displaystyle \quad \mathbf{r} = r\,\hat{\bf r}(\theta ), \quad \) where \( \displaystyle \quad \hat{\bf r}(\theta ) = \cos\theta\, \hat{\bf x} + \sin\theta\, \hat{\bf y} = \mathbf{i}\,\cos\theta + \mathbf{j}\,\sin\theta . \quad \) Then the scalar product in polar coordinates becomes
\[ r_1 e^{{\bf j}\,\theta_1} \bullet r_2 e^{{\bf j}\,\theta_2} = r_1 r_2 \left( \cos\theta_1 \cos\theta_2 + \sin\theta_1 \sin\theta_2 \right) = r_1 r_2 \cos\left( \theta_1 - \theta_2 \right) . \]
Simplify[({1, 0}*Cos[theta1] + {0, 1}*Sin[theta1]) . ({1, 0}*Cos[theta2] + {0, 1}*Sin[theta2])]
Cos[theta1 - theta2]

The polar coordinate system is extended to three dimensions in two ways: the cylindrical coordinate system adds a second distance coordinate, and the spherical coordinate system adds a second angular coordinate.

In the spherical coordinate system, the coordinates ρ, θ, ϕ are converted to Cartesian coordinates via \[ x = \rho\,\sin\phi\,\cos\theta , \qquad y = \rho\,\sin\phi\,\sin\theta , \qquad z = \rho\,\cos\phi . \]
The radius vector of a point in space with spherical coordinates ρ,𝜃,𝜙 can be written as
\[ \mathbf{r} = \rho\,\hat{\bf r} (\theta , \phi ) , \]
where
\[ \hat{\bf r} (\theta , \phi ) = \sin\phi\,\cos\theta\,\hat{\bf x} + \sin\phi\,\sin\theta\,\hat{\bf y} + \cos\phi\,\hat{\bf z} . \]
Thus, the components of the radius vector with respect to the "spherical basis" form a vector field because they vary from point to point. Moreover, the radius vector has components (ρ, 0, 0): the angles θ and ϕ carry no physical dimension and cannot serve as components of a vector.

When ρ₁, θ₁, ϕ₁ and ρ₂, θ₂, ϕ₂ are known for two vectors u and v, we have

\[ \mathbf{u} = \rho_1 \hat{\bf r} (\theta_1 , \phi_1 ) \qquad \mbox{and} \qquad \mathbf{v} = \rho_2 \hat{\bf r} (\theta_2 , \phi_2 ) . \]
Their dot product is
\begin{align*} \mathbf{u} \bullet \mathbf{v} &= \left[ \rho_1 \hat{\bf r} (\theta_1 , \phi_1 ) \right] \bullet \left[ \rho_2 \hat{\bf r} (\theta_2 , \phi_2 ) \right] \\ &= \rho_1 \rho_2 \hat{\bf r} (\theta_1 , \phi_1 ) \bullet \hat{\bf r} (\theta_2 , \phi_2 ) \\ &= \rho_1 \rho_2 \left( \sin\phi_1 \cos\theta_1 \hat{\bf x} + \sin\phi_1 \sin\theta_1 \hat{\bf y} + \cos\phi_1 \hat{\bf z} \right) \bullet \left( \sin\phi_2 \cos\theta_2 \hat{\bf x} + \sin\phi_2 \sin\theta_2 \hat{\bf y} + \cos\phi_2 \hat{\bf z} \right) \\ &= \rho_1 \rho_2 \left( \sin\phi_1 \sin\phi_2 \cos\theta_1 \cos\theta_2 + \sin\phi_1 \sin\phi_2 \sin\theta_1 \sin \theta_2 + \cos\phi_1 \cos\phi_2 \right) \\ &= \rho_1 \rho_2 \left[ \sin\phi_1 \sin\phi_2 \cos \left( \theta_1 - \theta_2 \right) + \cos\phi_1 \cos\phi_2 \right] . \end{align*}
If we introduce the angle ω by
\[ \cos\omega = \sin\phi_1 \sin\phi_2 \cos \left( \theta_1 - \theta_2 \right) + \cos\phi_1 \cos\phi_2 , \]
then equation \eqref{EqDot.4} becomes
\[ \mathbf{u} \bullet \mathbf{v} = \| \mathbf{u} \| \cdot \| \mathbf{v} \| \,\cos\omega . \]
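The identity for cos ω can be verified symbolically in Mathematica; rhat below is a helper function (introduced here) representing the spherical unit vector r̂(θ, ϕ):

rhat[theta_, phi_] := {Sin[phi]*Cos[theta], Sin[phi]*Sin[theta], Cos[phi]};
Simplify[rhat[t1, p1] . rhat[t2, p2]]
Cos[p1] Cos[p2] + Cos[t1 - t2] Sin[p1] Sin[p2]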

Applications

Scalar products are intimately associated with a variety of physical concepts. For example, the work done by a force applied at a point is defined as the product of the displacement and the component of the force in the direction of displacement (i.e., the projection of the force onto the direction of the displacement). Thus the component of the force perpendicular to the displacement "does no work." If F is the force and s is the displacement, then the work W is by definition equal to
\[ W = F_{\parallel} s = F\,s\,\cos\left( {\bf F}, {\bf s} \right) = {\bf F} \bullet {\bf s} . \]
Suppose the force makes an obtuse angle with the displacement, so that the force is "resisting." Then the work is regarded as negative, in keeping with formula above.
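A small numerical illustration with a sample force and displacement chosen here for demonstration; the component of the force perpendicular to the displacement contributes nothing:

F = {3, 4, 0}; s = {2, 0, 0};
F . s                (* the work done by F along s *)
6
{0, 4, 0} . s        (* the perpendicular component of F does no work *)
0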

The dot product is very important in physics. Let us consider an example. In classical mechanics, the work done when an object is moved equals the dot product of the force acting on the object and the displacement vector:

\[ W = \mathbf{F} \bullet \mathbf{x} . \]
   
Example 3: There are many physical examples of line integrals, but perhaps the most common is the expression for the total work done by a force F when it moves its point of application from a point A to a point B along a given curve C. We allow the magnitude and direction of F to vary along the curve. Let the force act at a point r and consider a small displacement dr along the curve; then the small amount of work done is dW = F • dr (note that dW can be either positive or negative). Therefore, the total work done in traversing the path C is \[ W_C = \int_C {\bf F} \bullet {\text d}{\bf r} . \]

Naturally, other physical quantities can be expressed in such a way. For example, the electrostatic potential energy gained by moving a charge q along a path C in an electric field E is \( -q \int_C {\bf E} \bullet {\text d}{\bf r} . \) We may also note that Ampère's law concerning the magnetic field B associated with a current-carrying wire can be written as \[ \oint_C {\bf B} \bullet {\text d}{\bf r} = \mu_0 I , \] where I is the current enclosed by a closed path C traversed in a right-handed sense with respect to the current direction.    ■

End of Example 3
The work W must of course be independent of the coordinate system in which the vectors F and x are expressed. The dot product as we know it from Eq.\eqref{EqDot.2} does not have this property. In general, under a transformation by a matrix A, we have
\[ s = {\bf A}\,\mathbf{x} \bullet {\bf A}\,\mathbf{y} = {\bf A}^{\mathrm T} {\bf A}\,\mathbf{x} \bullet \mathbf{y} . \]
Only if \( {\bf A}^{-1} \) equals \( {\bf A}^{\mathrm T} \) (i.e., if we are dealing with orthonormal transformations) does s remain unchanged. It appears as if the dot product only describes the physics correctly in a special kind of coordinate system: a system which according to our human perception is ‘rectangular’ and has physical units, i.e., a distance of 1 in coordinate x indeed means 1 meter in the x-direction. An orthonormal transformation produces again such a rectangular ‘physical’ coordinate system. If one has so far always employed such special coordinates anyway, this dot product has always worked properly.
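A short Mathematica experiment illustrating this point; R below is an arbitrary rotation (orthonormal) matrix, while A is an arbitrary matrix that is not orthonormal:

R = RotationMatrix[Pi/5, {0, 0, 1}];
A = {{2, 1, 0}, {0, 1, 3}, {1, 0, 1}};
x = {1, 2, 3}; y = {4, -1, 2};
Simplify[(R . x) . (R . y) == x . y]
True
(A . x) . (A . y) == x . y
False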

It is not always guaranteed that one can use such special coordinate systems (polar coordinates are an example in which the local orthonormal basis of vectors is not the coordinate basis). However, the dot product between a vector x and a covector y is invariant under all transformations because this product defines a functional generated by covector y. Then the given dot product is just one representation of this linear functional in particular coordinates. Making linear transformation with matrix A, we get

\begin{align*} \mathbf{x} \bullet \mathbf{y} &= \sum_i x^i y_i = \sum_i \left( \sum_j A^i_j \xi^j \right) \left( \sum_k \left( \mathbf{A}^{-1} \right)^k_i \eta_k \right) \\ &= \sum_j \sum_k \left( \sum_i \left( \mathbf{A}^{-1} \right)^k_i A^i_j \right) \xi^j \eta_k = \sum_j \sum_k \delta^k_j \, \xi^j \eta_k = \sum_j \xi^j \eta_j , \end{align*}
where \( \delta^k_j \) is the Kronecker delta.
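This invariance can be illustrated numerically; the matrix A below is an arbitrary invertible sample, and the covector components are transformed with the inverse transpose:

A = {{1, 2, 0}, {0, 1, 3}, {4, 0, 1}};
xi = {1, -2, 5}; eta = {3, 1, -1};
x = A . xi;                       (* contravariant components transform with A *)
y = Transpose[Inverse[A]] . eta;  (* covariant components transform with the inverse *)
x . y == xi . eta
True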

We can use the dot product to find the angle between two vectors. From the definition of the dot product, we get

\[ {\bf a} \cdot {\bf b} = \langle {\bf a} , {\bf b} \rangle = \| {\bf a} \| \cdot \| {\bf b} \| \,\cos \theta , \]
where θ is the angle between the two vectors a and b. If the vectors are nonzero, then
\[ \theta = \arccos \left( \frac{{\bf a} \cdot {\bf b}}{\| {\bf a} \| \cdot \| {\bf b} \| } \right) . \]

The prime example of the dot operation is work, which is defined as the scalar product of force and displacement. The presence of cos θ ensures the requirement that the work done by a force perpendicular to the displacement is zero.

The dot product is clearly commutative, a · b = b · a. Moreover, it distributes over vector addition:

\[ ({\bf a} + {\bf b}) · {\bf c} = {\bf a} · {\bf c} + {\bf b} · {\bf c}. \]

One can use the distributive property of the dot product to show that if (ax, ay, az) and (bx, by, bz) represent the components of a and b along the axes x, y, and z, then

\[ {\bf a} \cdot {\bf b} = a_x b_x + a_y b_y + a_z b_z . \]
From the definition of the dot product, we can draw an important conclusion. If we divide both sides of a · b = |a| |b| cos θ by |a|, we get
\[ \frac{{\bf a} \cdot {\bf b}}{|{\bf a}|} = |{\bf b}|\,\cos\theta \qquad \iff \qquad \left( \frac{{\bf a}}{|{\bf a}|} \right) \cdot {\bf b} = \hat{\bf e}_a \cdot {\bf b} = |{\bf b}|\,\cos\theta \]
Noting that |b| cos θ is simply the projection of b along a, we conclude that in order to find the projection of a vector b along another vector a, we take the dot product of b with \( \hat{\bf e}_a , \) the unit vector along a.
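For instance, with arbitrary sample vectors, the scalar projection can be computed directly, and Mathematica's built-in Projection gives the corresponding vector projection:

a = {3, 0, 4}; b = {2, 5, -1};
ea = a/Norm[a];      (* unit vector along a *)
ea . b               (* scalar projection of b along a *)
2/5
Projection[b, a]     (* vector projection: (ea . b) ea *)
{6/25, 0, 8/25}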

The dot product of any two vectors of the same dimension can be computed with the command Dot[vector1, vector2] or with use of a period “.”.

{1,2,3}.{2,4,6}
28
Dot[{1,2,3},{3,2,1} ]
10
Example 4: What is the angle between i and i + j + 2k?
\begin{align*} \theta &= \arccos \left( \frac{{\bf i} \cdot ({\bf i} + {\bf j} + 2 {\bf k})}{\| {\bf i} \| \cdot \| {\bf i} + {\bf j} + 2 {\bf k} \| } \right) \\ &= \arccos \left( \frac{1}{\sqrt{6}} \right) \approx 1.15026. \end{align*}
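The same result with Mathematica's VectorAngle:

VectorAngle[{1, 0, 0}, {1, 1, 2}]
ArcCos[1/Sqrt[6]]
N[%]
1.15026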
   ■
End of Example 4

The outer product of two coordinate vectors \( {\bf u} = \left[ u_1 , u_2 , \ldots , u_m \right] \) and \( {\bf v} = \left[ v_1 , v_2 , \ldots , v_n \right] , \) denoted \( {\bf u} \otimes {\bf v} , \) is their tensor product: the m-by-n matrix W whose entries satisfy \( w_{i,j} = u_i v_j . \) The outer product \( {\bf u} \otimes {\bf v} \) is equivalent to the matrix multiplication \( {\bf u} \, {\bf v}^{\ast} \) (or \( {\bf u} \, {\bf v}^{\mathrm T} \) if the vectors are real), provided that u is represented as an \( m \times 1 \) column vector and v as an \( n \times 1 \) column vector. Here \( {\bf v}^{\ast} = \overline{{\bf v}^{\mathrm T}} . \)

For three-dimensional vectors \( {\bf a} = a_1 \,{\bf i} + a_2 \,{\bf j} + a_3 \,{\bf k} = \left[ a_1 , a_2 , a_3 \right] \) and \( {\bf b} = b_1 \,{\bf i} + b_2 \,{\bf j} + b_3 \,{\bf k} = \left[ b_1 , b_2 , b_3 \right] , \) it is possible to define a special multiplication, called the cross product:
\[ {\bf a} \times {\bf b} = \det \left[ \begin{array}{ccc} {\bf i} & {\bf j} & {\bf k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array} \right] = {\bf i} \left( a_2 b_3 - b_2 a_3 \right) - {\bf j} \left( a_1 b_3 - b_1 a_3 \right) + {\bf k} \left( a_1 b_2 - a_2 b_1 \right) . \]
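Mathematica provides the built-in command Cross for this product; the vectors here are arbitrary samples:

Cross[{1, 2, 3}, {4, 5, 6}]
{-3, 6, -3}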
Example:    If m = 4 and n = 3, then
\[ {\bf u} \otimes {\bf v} = {\bf u} \, {\bf v}^{\mathrm T} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} = \begin{bmatrix} u_1 v_1 & u_1 v_2 & u_1 v_3 \\ u_2 v_1 & u_2 v_2 & u_2 v_3 \\ u_3 v_1 & u_3 v_2 & u_3 v_3 \\ u_4 v_1 & u_4 v_2 & u_4 v_3 \end{bmatrix} . \]
In Mathematica, the outer product has a special command:
Outer[Times, {1, 2, 3, 4}, {a, b, c}]
Out[1]= {{a, b, c}, {2 a, 2 b, 2 c}, {3 a, 3 b, 3 c}, {4 a, 4 b, 4 c}}

Applications in Physics

 

  1. What is the angle between the vectors i + j and i + 3j?
  2. What is the area of the quadrilateral with vertices at (1, 1), (4, 2), (3, 7) and (2, 3)?
  1. Aldaz, J. M.; Barza, S.; Fujii, M.; Moslehian, M. S. (2015), "Advances in Operator Cauchy—Schwarz inequalities and their reverses", Annals of Functional Analysis, 6 (3): 275–295, doi:10.15352/afa/06-3-20
  2. Bunyakovsky, Viktor (1859), "Sur quelques inegalités concernant les intégrales aux différences finies" (PDF), Mem. Acad. Sci. St. Petersbourg, 7 (1): 6
  3. Cauchy, A.-L. (1821), "Sur les formules qui résultent de l'emploi du signe > ou < et sur les moyennes entre plusieurs quantités", Cours d'Analyse, 1ère Partie: Analyse Algébrique, 1821; Œuvres, Ser. 2, III, 373–377
  4. Dray, T. and Manogue, C.A., The Geometry of the Dot and Cross Products, Journal of Online Mathematics and Its Applications, 6 (2006).
  5. Gibbs, J.W. and Wilson, E.B., Vector Analysis: A Text-Book for the Use of Students of Mathematics & Physics: Founded Upon the Lectures of J. W. Gibbs, Nabu Press, 2010.
  6. Schwarz, H. A. (1888), "Über ein die Flächen kleinsten Flächeninhalts betreffendes Problem der Variationsrechnung" (PDF), Acta Societatis Scientiarum Fennicae, XV: 318, archived (PDF) from the original on 2022-10-09
  7. Solomentsev, E. D. (2001) [1994], "Cauchy inequality", Encyclopedia of Mathematics, EMS Press