The Wolfram Mathematica notebook which contains the code that produces all the Mathematica output in this web page may be downloaded at this link.
Caution: This notebook evaluates cell by cell, sequentially from top to bottom. However, because variable names are re-used in later evaluations, earlier code may no longer render properly once subsequent code has been evaluated. To fix this, return to the first Clear[ ] expression above the expression that is no longer working, re-evaluate it, and then evaluate from that point down to the expression in question.
$Post :=
  If[MatrixQ[#1], MatrixForm[#1], #1] &  (* output matrices in MatrixForm *)
Remove["Global`*"] // Quiet  (* remove all variables *)
In this section, we consider vector spaces over one of the following two fields: either
ℝ, the set of real numbers, or ℂ, the set of complex numbers.
Inner Product
An inner product of two vectors of the same size, usually denoted by \( \left\langle {\bf x} , {\bf y} \right\rangle ,\) is a generalization of the dot product; it must satisfy the following properties:
\( \left\langle {\bf u} , {\bf v} + {\bf w} \right\rangle = \left\langle {\bf u} , {\bf v} \right\rangle + \left\langle {\bf u} , {\bf w} \right\rangle \) (additivity);
\( \left\langle {\bf u} , k\,{\bf v} \right\rangle = k \left\langle {\bf u} , {\bf v} \right\rangle \) for every scalar k (homogeneity in the second argument);
\( \left\langle {\bf u} , {\bf v} \right\rangle = \overline{\left\langle {\bf v} , {\bf u} \right\rangle} \) (conjugate symmetry; over ℝ this is ordinary symmetry);
\( \left\langle {\bf v} , {\bf v} \right\rangle \ge 0 , \) with equality if and only if
\( {\bf v} = {\bf 0} . \)
The fourth condition in the list above is known as the positive-definite condition. A vector space together with an inner product is called an inner product space. Every inner product space is a metric space. The metric, or norm, is given by
\[
\| {\bf x} \| = \sqrt{\left\langle {\bf x} , {\bf x} \right\rangle} = \left( \sum_{k=1}^n \overline{x_k}\, x_k \right)^{1/2} = \left( \sum_{k=1}^n \left\vert x_k \right\vert^2 \right)^{1/2}
\]
when entries are complex.
Here \( \overline{\bf x} = \overline{a + {\bf j}\, b} =
a - {\bf j}\,b = {\bf x}^{\ast} \) is the complex conjugate of the complex number
x = a + jb.
Nonzero vectors u and v of the same size are orthogonal (or perpendicular) when their inner product is zero:
\( \left\langle {\bf u} , {\bf v} \right\rangle = 0 . \) We abbreviate it as \( {\bf u} \perp {\bf v} . \)
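As a quick illustration, the following Mathematica sketch evaluates this complex inner product, the norm it induces, and an orthogonality check; the vectors and the helper name innerProduct are made up for this example.
(* sample complex vectors; any two vectors of equal length will do *)
x = {1 + I, 2, 3 - 2 I};
y = {2 - I, 1 + I, 4};
innerProduct[a_, b_] := Conjugate[a] . b   (* <a, b> = sum of conj(a_k) b_k *)
innerProduct[x, y]                         (* the complex inner product *)
Sqrt[innerProduct[x, x]]                   (* induced norm *)
Norm[x]                                    (* the built-in norm agrees *)
(* orthogonality: the inner product of perpendicular vectors vanishes *)
u = {1, 1, 0};  v = {1, -1, 5};
innerProduct[u, v] == 0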
If A is an n × n positive definite matrix and u and v are n-vectors, then we can define the weighted Euclidean inner product
\[
\left\langle {\bf u} , {\bf v} \right\rangle = {\bf u}^{\mathrm{T}} {\bf A}\, {\bf v} .
\]
In particular, if w1, w2, ... , wn are positive real numbers,
which are called weights, and if u = ( u1, u2, ... , un) and
v = ( v1, v2, ... , vn) are vectors in \( \mathbb{R}^n , \) then the formula
\[
\left\langle {\bf u} , {\bf v} \right\rangle = w_1 u_1 v_1 + w_2 u_2 v_2 + \cdots + w_n u_n v_n
\]
defines an inner product on \( \mathbb{R}^n , \) that is called the weighted Euclidean inner product with weights
w1, w2, ... , wn.
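A minimal Mathematica sketch of the weighted Euclidean inner product, with made-up weights and vectors (the helper name weightedInner is ours, not a built-in):
w = {2, 1, 5};                        (* positive weights *)
u = {1, 0, -1};  v = {3, 4, 2};
weightedInner[a_, b_] := w . (a b)    (* w1 a1 b1 + w2 a2 b2 + ... + wn an bn *)
weightedInner[u, v]
(* the same value via the diagonal matrix A = DiagonalMatrix[w] *)
u . DiagonalMatrix[w] . v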
Example 4:
The Euclidean inner product and the weighted Euclidean inner product (when \( \left\langle {\bf u} , {\bf v} \right\rangle = \sum_{k=1}^n a_k u_k v_k \)
for some positive numbers \( a_k , \ k=1,2,\ldots , n \)) are special cases of a general class
of inner products on \( \mathbb{R}^n \) called matrix inner products. Let A be an
invertible n-by-n matrix. Then the formula
\[
\left\langle {\bf u} , {\bf v} \right\rangle = {\bf A}\,{\bf u} \cdot {\bf A}\,{\bf v} = {\bf u}^{\mathrm{T}} {\bf A}^{\mathrm{T}} {\bf A}\, {\bf v}
\]
defines an inner product on \( \mathbb{R}^n , \) which is called the matrix inner product generated by A. ■
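Here is a short Mathematica sketch of the matrix inner product generated by A; the matrix and the vectors are arbitrary choices for illustration, and matrixInner is a throwaway helper name.
A = {{2, 1}, {0, 3}};                 (* an invertible 2-by-2 matrix *)
u = {1, -1};  v = {4, 2};
matrixInner[a_, b_] := (A . a) . (A . b)
matrixInner[u, v]
(* equivalently, u^T A^T A v *)
u . Transpose[A] . A . v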
The invention of Cartesian coordinates in 1637 by René Descartes (Latinized name: Cartesius) revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra.
Riesz representation theorem:
Let V be a finite-dimensional vector space over the field 𝔽
(𝔽 = ℝ or ℂ) on which 〈·, ·〉 is an inner product. Let φ : V ⇾ 𝔽 be a linear functional on V. Then there exists a unique vector u ∈ V such that φ(v) = 〈u, v〉 for all v ∈ V.
In 1912, the Hungarian mathematician Frigyes Riesz established an isomorphism between a Euclidean space and its dual space. His result (which is also valid for some infinite-dimensional spaces) restores equal rights between vectors and covectors, but under a new marriage certificate, known as the inner product, discussed above.
Using the Gram–Schmidt orthogonalization process we can find
an orthonormal basis of V, say, {v1, v2, … , vn}. Now for an arbitrary vector v ∈ V, we have
\[
{\bf v} = \langle {\bf v}_1 , {\bf v} \rangle {\bf v}_1 + \langle {\bf v}_2 , {\bf v} \rangle {\bf v}_2 + \cdots + \langle {\bf v}_n , {\bf v} \rangle {\bf v}_n .
\]
Then it follows that
\[
\varphi ({\bf v}) = \langle {\bf v}_1 , {\bf v} \rangle \varphi ({\bf v}_1 ) + \langle {\bf v}_2 , {\bf v} \rangle \varphi ({\bf v}_2 ) + \cdots + \langle {\bf v}_n , {\bf v} \rangle \varphi ({\bf v}_n ) = \left\langle \overline{\varphi ({\bf v}_1 )}\, {\bf v}_1 + \cdots + \overline{\varphi ({\bf v}_n )}\, {\bf v}_n , {\bf v} \right\rangle ,
\]
because the inner product is conjugate-linear in its first argument. Setting
\[
{\bf u} = \overline{\varphi ({\bf v}_1 )}\, {\bf v}_1 + \cdots + \overline{\varphi ({\bf v}_n )}\, {\bf v}_n ,
\]
we obtain φ(v) = 〈u, v〉 for every v ∈ V, which proves existence.
For uniqueness, suppose that two vectors u1 and u2 both satisfy the conclusion of the Riesz representation theorem. Then for w = u1 − u2, we have 〈w, v〉 = 0 for all v ∈ V. In particular, choosing v = w gives 〈w, w〉 = ∥ w ∥² = 0, which implies w = 0, that is, u1 = u2.
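To see the theorem in action, the following Mathematica sketch works in ℝ³ with the ordinary dot product and builds the representing vector u exactly as in the existence proof above; the functional phi and the starting basis are made up for illustration (in the real case the conjugates disappear).
(* a made-up linear functional on R^3: phi[v] = 2 v1 - v2 + 4 v3 *)
phi[v_] := {2, -1, 4} . v
(* an orthonormal basis obtained by Gram–Schmidt from a chosen basis *)
basis = Orthogonalize[{{1, 1, 0}, {0, 1, 1}, {1, 0, 1}}];
(* the representing vector u = phi(v1) v1 + ... + phi(vn) vn *)
u = Total[Map[phi[#] # &, basis]];
u // Simplify
(* check: phi[v] = u . v for a sample vector *)
v = {3, -2, 5};
{phi[v], u . v} // Simplify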
For a fixed vector u, the map v ↦ u • v is a linear functional. The following famous theorem (proved independently by Frigyes Riesz and Maurice René Fréchet in 1907) establishes the converse: every linear functional f
corresponds to the dot product with a suitable weight vector u.
Riesz representation theorem:
Let f be a linear form on an n-dimensional Euclidean vector space V. Then there exists a unique vector u ∈ V such that f(v) = u • v for all v ∈ V.
A generalized length function on a vector space can be imposed in many different ways, not necessarily through an
inner product. What is important is that this generalized length, called a norm in mathematics, should satisfy the
following four axioms.
A norm on a vector space V is a nonnegative function
\( \| \, \cdot \, \| \, : \, V \to [0, \infty ) \) that satisfies the following axioms for
any vectors \( {\bf u}, {\bf v} \in V \) and arbitrary scalar k:
\( \| {\bf v} \| \ge 0 \) (nonnegativity);
\( \| {\bf v} \| = 0 \) if and only if \( {\bf v} = {\bf 0} \) (definiteness);
\( \| k\, {\bf v} \| = |k| \, \| {\bf v} \| \) (absolute homogeneity);
\( \| {\bf u} + {\bf v} \| \le \| {\bf u} \| + \| {\bf v} \| \) (triangle inequality).
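As a quick sanity check, this Mathematica sketch tests absolute homogeneity and the triangle inequality for the built-in Euclidean norm on a few made-up vectors:
u = {1, -2, 2};  v = {3, 0, -4};  k = -5;
Norm[k u] == Abs[k] Norm[u]          (* absolute homogeneity *)
Norm[u + v] <= Norm[u] + Norm[v]     (* triangle inequality *)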
With any positive definite matrix A (that is, one having positive eigenvalues) one can define a corresponding norm, namely \( \| {\bf u} \|_{A} = \sqrt{{\bf u}^{\mathrm{T}} {\bf A}\,{\bf u}} , \) the norm induced by the weighted Euclidean inner product above.
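A short Mathematica sketch of the norm generated by a positive definite matrix; the matrix below is an arbitrary symmetric example with positive eigenvalues, and normA is a throwaway helper name.
A = {{4, 1}, {1, 3}};       (* symmetric positive definite *)
Eigenvalues[A]              (* both eigenvalues are positive *)
normA[u_] := Sqrt[u . A . u]
normA[{1, -2}]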
In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space, except for the zero vector, which is assigned a length of zero.
On the n-dimensional complex space \( \mathbb{C}^n ,\) the most common norm is
\[
\| {\bf z} \| = \left( |z_1 |^2 + |z_2 |^2 + \cdots + |z_n |^2 \right)^{1/2} .
\]
A unit vector u is a vector whose length equals one: \( {\bf u} \cdot {\bf u} =1 . \) We say that two vectors
x and y are perpendicular if their dot product is zero.
Many other norms are known, as the short sketch below illustrates.
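The sketch illustrates unit vectors, perpendicularity, and two of these other norms (the 1-norm and the maximum norm) on made-up vectors:
x = {3, 4, 0};
u = Normalize[x];                           (* unit vector in the direction of x *)
u . u                                       (* equals 1 *)
y = {4, -3, 7};
x . y == 0                                  (* x and y are perpendicular *)
{Norm[x], Norm[x, 1], Norm[x, Infinity]}    (* Euclidean, 1-norm, max norm *)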
The inner product and the Euclidean norm are related by the Cauchy--Bunyakovsky--Schwarz inequality
\[
\left\vert \left\langle {\bf x} , {\bf y} \right\rangle \right\vert \le \| {\bf x} \| \cdot \| {\bf y} \| .
\]
The inequality for sums was published by the French mathematician and physicist Augustin-Louis Cauchy (1789--1857) in 1821, while the corresponding
inequality for integrals was first proved by the Russian mathematician Viktor Yakovlevich Bunyakovsky (1804--1889) in 1859. The modern proof
(which essentially repeats Bunyakovsky's) of the integral inequality was given by the German mathematician Hermann Amandus Schwarz (1843--1921) in 1888.
With the Euclidean norm, we can define the dot product as
\[
{\bf x} \cdot {\bf y} = \| {\bf x} \| \, \| {\bf y} \| \, \cos \theta ,
\]
where θ is the angle between the vectors x and y; the Cauchy--Bunyakovsky--Schwarz inequality guarantees that \( \left\vert \cos\theta \right\vert \le 1 . \)
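The following Mathematica sketch checks the Cauchy–Bunyakovsky–Schwarz inequality and the angle form of the dot product numerically on made-up vectors, using the built-in VectorAngle:
x = {1, 2, -2};  y = {4, 0, 3};
Abs[x . y] <= Norm[x] Norm[y]                       (* Cauchy–Bunyakovsky–Schwarz *)
theta = VectorAngle[x, y];
x . y == Norm[x] Norm[y] Cos[theta] // Simplify     (* angle form of the dot product *)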