Dual Basis

Let δi,j or δij denote the Kronecker delta:
\[ \delta_{i,j} = \delta^i_j = \begin{cases} 1 , & \quad \mbox{if } i = j , \\ 0, & \quad \mbox{if } i \ne j , \end{cases} \]
Suppose that V is a finite-dimensional vector space and let β = { e1, e2, … , en } be a basis of V. For each j = 1, 2, … , n, define a linear functional ej : V → 𝔽 by setting
\begin{equation} \label{EqDual.2} {\bf e}^j \left( {\bf e}_i \right) = < {\bf e}^j \,|\,{\bf e}_i > \, = \delta_{i,j} , \end{equation}
and then extending β* = { e1, e2, … , en } linearly to all of V:
\[ {\bf e}^j \left( a^1 {\bf e}_1 + a^2 {\bf e}_2 + \cdots + a^n {\bf e}_n \right) = a^j , \qquad j=1,2,\ldots , n. \]
It is customary to denote dual bases in V* by the same letters as the corresponding bases in V, but using lower indices ("downstairs") for V and upper indices ("upstairs") for V*. Then vectors v = viei in V can be represented by objects vi with an upper index, and dual vectors φ = φiei in V* by objects φi with a lower index. It is convenient to use the Einstein summation convention and drop the summation symbol ∑. In this language, an identification of V with V* (available once an inner product or metric is chosen) is realized by lowering and raising indices. Upper index objects vi are coordinates of vectors relative to a basis ei of V and are also called contravariant vectors. Lower index objects φi are the coordinates of dual vectors relative to the dual basis ei and are also called covariant vectors.
Theorem 2: The set β* is a basis of V*. Hence, V* is finite-dimensional and dimV* = dimV.
First we check that the functionals { e1, e2, … , en } are linearly independent. Suppose that there exist 𝔽-scalars k1, k2, … , kn so that
\[ k_1 {\bf e}^1 + k_2 {\bf e}^2 + \cdots + k_n {\bf e}^n = {\bf 0} . \]
Note that the 0 on the right denotes the zero functional; i.e., the functional that sends everything in V to 0 ∈ 𝔽. The equality above is an equality of maps, which should hold for any vV we evaluate either side on. In particular, evaluating both sides on every element from the basis β, we have
\[ \left( k_1 {\bf e}^1 + k_2 {\bf e}^2 + \cdots + k_n {\bf e}^n \right) \left( {\bf e}_i \right) = k_1 {\bf e}^1 \left( {\bf e}_i \right) + k_2 {\bf e}^2 \left( {\bf e}_i \right) + \cdots + k_n {\bf e}^n \left( {\bf e}_i \right) = k_i . \]
on the left (by the definition of the ej) and 0 on the right. Thus, we see that ki = 0 for each i, so the set β* is linearly independent.

Now we show that the dual basis β* spans V*. Let φ ∈ V*. For each i, let bi denote the scalar φ(ei). We claim that

\[ \varphi = b_1 {\bf e}^1 + b_2 {\bf e}^2 + \cdots + b_n {\bf e}^n . \]
Again, this means that both sides should give the same result when evaluating on any vV. By linearity, it suffices to check that this is true on the basis β. Indeed, for each i, we get
\[ \left( b_1 {\bf e}^1 + b_2 {\bf e}^2 + \cdots + b_n {\bf e}^n \right) \left( {\bf e}_i \right) = b_1 {\bf e}^1 \left( {\bf e}_i \right) + b_2 {\bf e}^2 \left( {\bf e}_i \right) + \cdots + b_n {\bf e}^n \left( {\bf e}_i \right) = b_i = \varphi \left( {\bf e}_i \right) , \]
again by the definition of the ei and the bi. Thus, φ and b1e1 + ⋯ + bnen agree on the basis, so we conclude that they are equal as elements of V*. Hence {e1, e2, … , en} spans V* and therefore forms a basis of V*.
Example 5: In the vector space ℝ³, let us consider a (non-orthogonal) basis
\[ {\bf e}_1 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} , \qquad {\bf e}_2 = \begin{bmatrix} \phantom{-}1 \\ -1 \\ \phantom{-}1 \end{bmatrix} , \qquad {\bf e}_3 = \begin{bmatrix} \phantom{-}0 \\ -1 \\ \phantom{-}1 \end{bmatrix} . \]
To find the coordinates of the first dual (bra) vector e1 = [ 𝑎, b, c ], we need to solve the system of algebraic equations:
\begin{align*} < a, b, c\,|\,{\bf e}_1 \,> &= a + c = 1 , \\ < a, b, c\,|\,{\bf e}_2 \,> &= a -b + c = 0 , \\ < a, b, c\,|\,{\bf e}_3 \,> &= -b + c = 0 . \end{align*}
Using Mathematica, we get
Solve[{a + c == 1, a - b + c == 0, c - b == 0}, {a, b, c}]
{{a -> 0, b -> 1, c -> 1}}
\[ {\bf e}^1 = \left[ 0, \ 1, \ 1 \right] . \]
Similarly, we find other members of dual basis:
\[ {\bf e}^2 = \left[ 1, \ -1, \ -1 \right] , \qquad {\bf e}^3 = \left[ -1, \ 0, \ 1 \right] . \]
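All three dual vectors can be obtained at once: if the basis vectors are placed as columns of a matrix M, then the rows of M⁻¹ form the dual basis, because the identity M⁻¹M = I says precisely that row j paired with column i gives δi,j. A one-line Mathematica check (the name M is our choice):
M = Transpose[{{1, 0, 1}, {1, -1, 1}, {0, -1, 1}}];  (* basis vectors as columns *)
Inverse[M]
{{0, 1, 1}, {1, -1, -1}, {-1, 0, 1}}
The rows reproduce the dual vectors e1, e2, e3 found above.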

Now suppose we realize that a typo was made when entering the vectors e₁, e₂, e₃: one entry of the vector e₃ was mistyped, and the basis vectors must be as follows (we denote the corrected third vector by e₄):

\[ {\bf e}_1 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} , \qquad {\bf e}_2 = \begin{bmatrix} \phantom{-}1 \\ -1 \\ \phantom{-}1 \end{bmatrix} , \qquad {\bf e}_4 = \begin{bmatrix} \phantom{-}3 \\ -1 \\ \phantom{-}1 \end{bmatrix} . \]
To find the coordinates of the first dual (bra) vector e1 = [ 𝑎, b, c ], we again solve the system of algebraic equations:
\begin{align*} < a, b, c\,|\,{\bf e}_1 \,> &= a + c = 1 , \\ < a, b, c\,|\,{\bf e}_2 \,> &= a -b + c = 0 , \\ < a, b, c\,|\,{\bf e}_4 \,> &= 3a -b + c = 0 . \end{align*}
Using Mathematica, we get
Solve[{a + c == 1, a - b + c == 0, 3*a +c - b == 0}, {a, b, c}]
{{a -> 0, b -> 1, c -> 1}}
\[ {\bf e}^1 = \left[ 0, \ 1, \ 1 \right] . \]
Similarly, we find other members of dual basis:
Solve[{a + c == 0, a - b + c == 1, 3*a +c - b == 0}, {a, b, c}]
{{a -> -(1/2), b -> -1, c -> 1/2}}
Solve[{a + c == 0, a - b + c == 0, 3*a +c - b == 1}, {a, b, c}]
{{a -> 1/2, b -> 0, c -> -(1/2)}}
\[ {\bf e}^2 = \left[ -\frac{1}{2} , -1, \frac{1}{2} \right] , \qquad {\bf e}^4 = \left[ \frac{1}{2} , \,0,\, -\frac{1}{2} \right] . \]
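The same one-line computation confirms the new dual basis and makes the effect of the corrected entry visible at a glance:
Inverse[Transpose[{{1, 0, 1}, {1, -1, 1}, {3, -1, 1}}]]
{{0, 1, 1}, {-(1/2), -1, 1/2}, {1/2, 0, -(1/2)}}
Only the first row (e1) survived the change of the third basis vector.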
End of Example 5

There is a natural one-to-one correspondence between ordered bases in V and the corresponding dual bases in its dual V*

\[ \left\{ {\bf e}_i \right\}_{i=1}^n \quad \longleftrightarrow \quad \left\{ {\bf e}^i \right\}_{i=1}^n , \]
but not between individual vectors in V and its dual V*. Moreover, changing only one element of a basis β in V changes several elements in the dual basis β* (see Example 5).
Corollary 1: Let φ ∈ V* be a nonzero functional on V, and let vV be any element such that φ(v) = <φ|v> ≠ 0. Then V is a direct sum of two subspaces:
\[ \begin{split} V &= \mbox{span}({\bf v}) \oplus \mbox{ker}\varphi , \\ V \ni {\bf z}&= {\bf x} + {\bf y} , \qquad {\bf x} = c\,{\bf v}, \quad {\bf y} \in \mbox{ker}\varphi , \qquad c\in \mathbb{F} . \end{split} \]
Let vV be a vector that is not annihilated by the linear form φ, that is, φ(v) = <φ|v> ≠ 0. Since φ ≠ 0, its image is all of 𝔽, so by the rank-nullity theorem kerφ has dimension n − 1. Choose a basis { e2, … , en } of kerφ and set e1 = v. Since φ(v) ≠ 0, the vector v does not lie in kerφ = span{e2, … , en}, so β = { e1 = v, e2, … , en } consists of n linearly independent vectors and is therefore a basis of V. Let β* = { e1, e2, … , en } be the dual basis in V*. By definition of dual basis, e1(v) = 1, and for all other basis elements, we have e1(ej) = 0, j = 2, 3, … , n. Hence ker e1 = span{e2, … , en} = kerφ, and the kernel (or null space) of φ is (n − 1)-dimensional.

Hence, the functional φ has the following representation in the dual basis: \[ \varphi = \varphi ({\bf v})\,{\bf e}^1 = c\, {\bf e}^1 , \qquad c\ne 0, \] because both functionals agree on every element of β (each maps the remaining basis elements { e2, … , en } to zero). Therefore V = span(v) ⊕ kerφ, since every vector uV has a unique representation \[ {\bf u} = c^1 {\bf e}_1 + c^2 {\bf e}_2 + \cdots + c^n {\bf e}_n = c^1 {\bf v} + {\bf y} , \qquad {\bf y} = c^2 {\bf e}_2 + \cdots + c^n {\bf e}_n \in \mbox{ker}\varphi . \]

Example 6: We identify the two-dimensional vector space ℂ² of ordered pairs (z₁, z₂) of complex numbers with its isomorphic image of column vectors:
\[ \mathbb{C}^{2,1} = \left\{ \left[ \begin{array}{c} z_1 \\ z_2 \end{array} \right] : \quad z_1 ,\ z_2 \in \mathbb{C} \right\} . \]
We consider a linear form \[ \varphi \left( z_1 , z_2 \right) = z_1 - 2{\bf j}\, z_2 . \tag{6.1} \] As usual, j denotes the imaginary unit, so j² = −1. The kernel of φ consists of all pairs of complex numbers (z₁, z₂) for which z₁ = 2jz₂. Hence, the null space of φ is spanned by the vector (2j, 1): \[ \mbox{ker}\varphi = \left\{ \left[ \begin{array}{c} 2{\bf j}\,y \\ y \end{array} \right] : \quad y \in \mathbb{C} \right\} . \tag{6.2} \] Let z = (z₁, z₂) be an arbitrary vector from ℂ², and let v = (1+j, 1−j) be given; note that φ(v) = (1+j) − 2j(1−j) = −1−j ≠ 0. We have to show that there is a unique decomposition: \[ {\bf z} = \left( z_1 , z_2 \right) = c\left( 1 + {\bf j}, 1 - {\bf j} \right) + \left( 2{\bf j}, 1 \right) y , \tag{6.3} \] where c, y ∈ ℂ are some complex numbers. Equating the two components, we obtain two complex equations: \[ \begin{split} z_1 &= c \left( 1 + {\bf j} \right) + 2{\bf j}\, y , \\ z_2 &= c \left( 1 - {\bf j} \right) + y . \end{split} \tag{6.4} \] Since the matrix of system (6.4) is not singular, \[ {\bf A} = \begin{bmatrix} 1 + {\bf j} & 2{\bf j} \\ 1 - {\bf j} & 1 \end{bmatrix} \qquad \Longrightarrow \qquad \det{\bf A} = -1-{\bf j} \ne 0 , \]
1 + I - 2*I*(1 - I)
-1 - I
system (6.4) has a unique solution for any input z = (z₁, z₂): \[ c = \frac{{\bf j}-1}{2} \left( z_1 - 2{\bf j}\, z_2 \right) , \qquad y = - z_2 - {\bf j}\, z_1 . \]
Solve[{z1 == c*(1 + I) + 2*I*y , z2 == c*(1 - I) + y}, {c, y}]
{{c -> (-(1/2) + I/2) (z1 - 2 I z2), y -> -I (z1 - I z2)}}
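As a sanity check, we can substitute the solution back into system (6.4); both residuals simplify to zero for arbitrary z₁, z₂:
c = (-(1/2) + I/2) (z1 - 2 I z2); y = -I (z1 - I z2);
Simplify[{c (1 + I) + 2 I y - z1, c (1 - I) + y - z2}]  (* residuals of system (6.4) *)
{0, 0}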
End of Example 6

In other words, any vector v not in the kernel of a linear functional generates a complement of this kernel: the kernel of a nonzero linear functional has codimension 1. When V is finite dimensional, the rank-nullity theorem states:

\[ \dim\,\mbox{ker}\varphi + \dim\,\mbox{im}\varphi = \dim V \qquad (\varphi \in V^{\ast}) , \]
from which it follows, for a real vector space, that
\[ \varphi \ne 0 \qquad \iff \qquad \mbox{im}\varphi = \mathbb{R}\qquad \iff \qquad \dim\,\mbox{ker}\varphi = \dim V - 1 . \]
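A quick numeric confirmation in ℝ³ (the functional φ = (1, 2, 3), written as a 1×3 matrix, is our choice): its kernel is 2-dimensional, its image 1-dimensional, and the dimensions add up to dim V = 3:
phi = {{1, 2, 3}};  (* a nonzero functional on R^3, encoded as a row vector *)
{Length[NullSpace[phi]], MatrixRank[phi], Length[NullSpace[phi]] + MatrixRank[phi]}
{2, 1, 3}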
If a vector space V is not trivial, so V ≠ {0}, then its dual V* ≠ {0}.

Theorem 3: Let φ, ψ be linear functionals on a vector space V. If φ vanishes on ker(ψ), then φ = cψ is a constant multiple of ψ.
If ψ = 0, then kerψ = V, so φ vanishes identically and there is nothing to prove. Let us assume that ψ ≠ 0, and choose a vector xV such that ψ(x) ≠ 0. Consider the linear functional ψ(x)φ − φ(x)ψ, which vanishes on x and, by assumption, on kerψ. Since V = span(x) ⊕ kerψ by Corollary 1, this linear functional vanishes identically: ψ(x)φ = φ(x)ψ, that is, φ = cψ with c = φ(x)/ψ(x).

Another proof: If ψ = 0, then φ = 0 as above. Otherwise there exists x ∈ V such that ψ(x) ≠ 0, and Corollary 1 gives V = span(x) ⊕ kerψ. Hence every v ∈ V can be written as v = cx + y with y ∈ kerψ, and since φ(y) = ψ(y) = 0 by assumption,
\[ \varphi ({\bf v}) = \varphi (c{\bf x} + {\bf y}) = c\,\varphi ({\bf x}) = \frac{\varphi ({\bf x})}{\psi ({\bf x})}\, c\,\psi ({\bf x}) = \frac{\varphi ({\bf x})}{\psi ({\bf x})}\, \psi (c{\bf x} + {\bf y}) = \frac{\varphi ({\bf x})}{\psi ({\bf x})}\, \psi ({\bf v}) , \]
so φ = (φ(x)/ψ(x)) ψ.

Example 7: We consider V = ℝ3,3, the set of all real 3×3 matrices, and a linear functional φ on it:
\[ \varphi \left( {\bf M} \right) = \mbox{tr} \left( {\bf M} \right) \quad \iff \quad \varphi \left( \begin{bmatrix} a_{11} & a_{12}& a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \right) = a_{11} + a_{22} + a_{33} , \]
where tr(M) is the trace of the matrix M. The dimension of V is 9. The functional φ annihilates a matrix M if and only if its trace is zero:
\[ a_{11} + a_{22} + a_{33} = 0 , \]
which imposes one condition (linear equation) on the coefficients of matrix M. Therefore, the kernel (null space) of φ is 8-dimensional (9 − 1 = 8). The image of φ is just ℝ, that is, one-dimensional vector space.

The trace of a matrix is the sum of its diagonal elements. Here is how Mathematica computes the trace.

SeedRandom[39820];
mat1 = Array[RandomReal[] &, {3, 3}];
mat1 // MatrixForm
\[ \begin{pmatrix} 0.815023& 0.534769& 0.152693 \\ 0.238861& 0.745171& 0.793097 \\ 0.862482& 0.854809& 0.610953 \end{pmatrix} \]
Diagonal[mat1]
{0.815023, 0.745171, 0.610953}
Total[Diagonal[mat1]]
2.17115
Tr[mat1]
2.17115
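We can also confirm the dimension count of the kernel directly: flattening 3×3 matrices into vectors of ℝ⁹ turns the trace into pairing with the flattened identity matrix, whose null space is 8-dimensional (the name traceVec is our choice):
traceVec = Flatten[IdentityMatrix[3]];  (* tr(M) = traceVec . Flatten[M] *)
Length[NullSpace[{traceVec}]]
8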
End of Example 7
Here is a generalization.
Theorem 4: Let φiV* (1 ≤ im) be a finite set of linear functionals. If a linear functional ψ ∈ V* vanishes on the intersection \( \displaystyle \cap_{1 \leqslant i \leqslant m} \mbox{ker}\varphi_i , \) then ψ is a linear combination of the φi, so ψ ∈ span(φ1, … , φm).
Let us proceed by induction on m. For m = 1, this is the statement of Theorem 3. Now let m ≥ 2, and consider the subspace
\[ W = \cap_{2 \leqslant i \leqslant m} \mbox{ker}\varphi_i \supset W \cap \mbox{ker}\varphi_1 = \cap_{1 \leqslant i \leqslant m} \mbox{ker}\varphi_i , \]
as well as the restrictions of ψ and φ1 to W. By assumption, \( \displaystyle \left. \psi\right\vert_{W} \) vanishes on ker(φ1|W) = W ∩ kerφ1. Hence, by Theorem 3 applied to the space W, there is a scalar c such that ψ|W = cφ1|W; namely, ψ − cφ1 vanishes on W. By the induction assumption (applied to the m − 1 functionals φ2, … , φm), ψ − cφ1 is a linear combination of φ2, … , φm, say
\[ \psi - c\,\varphi_1 = \sum_{2 \leqslant i \leqslant m} k_i \varphi_i . \]
This gives the required result.

Another way to prove this theorem is to construct a basis according to the following algorithm.

  1. Let β = { }.
  2. Choose φj from the given finite set of linear functionals.
  3. If φj vanishes on the intersection of the kernels of the functionals already selected (so that it contributes nothing new), do nothing and start from 2 again. Else, there exists vjV in that intersection such that φj(vj)≠0, and we can scale vj to make φj(vj)=1. We then add vj to β.
Let us suppose that after going through the given finite set of linear functionals, we have β = {v1, … , vr}; then either β is a basis for the vector space V, or we can expand it to a basis of V, say β = {v1, … , vr, ur+1, … , un}. For each j = 1, … , r there is a φj in our given finite set of linear functionals such that φj(vj) = 1. Furthermore, it is not hard to prove that ∩1≤j≤mker(φj) = span(ur+1, … , un).

For such a basis β of the vector space V, there is a corresponding dual basis β* = {e1, … , er, er+1, … , en} of the dual space V*, in which ej = φj for j = 1, … , r. For any φ ∈ V*, \( \displaystyle \varphi = \sum_{i=1}^n c_i {\bf e}^i . \) Since φ vanishes on ∩1≤j≤mker(φj) = span(ur+1, … , un), we get cj = φ(uj) = 0 for j = r+1, … , n. Then we have \[ \varphi = \sum_{j=1}^n c_j {\bf e}^j =\sum_{j=1}^r c_j {\bf e}^j = \sum_{j=1}^r c_j \varphi_j . \]
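Here is a small numerical illustration of Theorem 4 in ℝ³, with functionals encoded as row vectors (the choices of phi1, phi2, psi are ours): kerφ₁ ∩ kerφ₂ is spanned by (1, 1, −1), and ψ = (2, −3, −1) annihilates it, so ψ must be a linear combination of φ₁ and φ₂:
phi1 = {1, 0, 1}; phi2 = {0, 1, 1};
psi = {2, -3, -1};  (* psi . {1, 1, -1} == 0, so psi kills the common kernel *)
Solve[Thread[psi == c1 phi1 + c2 phi2], {c1, c2}]
{{c1 -> 2, c2 -> -3}}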

Example 8: Let ℝ≤n[x] be the space of polynomials in the variable x of degree at most n with real coefficients. As we know, the monomials ek(x) = xk (k = 0, 1, 2, … , n) form the standard basis of this space.

Let us consider the linear functionals:

\[ {\bf e}^j (p) = < {\bf e}^j \, | \,p > = \left. \frac{1}{j!}\,\frac{{\text d}^j}{{\text d} x^j} \, p(x) \right\vert_{x=0} , \qquad j =0,1,2,\ldots , n . \tag{8.1} \]
Recall that
\[ \texttt{D}^j \, x^k = \frac{{\text d}^j}{{\text d} x^j} \, x^k = \begin{cases} k^{\underline{j}} x^{k-j} = k\left( k-1 \right) \cdots \left( k-j+1 \right) x^{k-j} , & \quad j \le k , \\ 0, & \quad j > k , \end{cases} \]
where \( \displaystyle k^{\underline{j}} = k \left( k-1 \right) \left( k-2 \right) \cdots \left( k-j+1 \right) \) is the falling factorial with j factors. As usual, we denote by \( \displaystyle \texttt{D}^0 = \texttt{I} \) the identity operator, and 0! = 1. It is not hard to check that the ej form a dual basis for the monomials xk. Indeed, substituting p(x) = xk into Eq.(8.1), we get
\[ \lim_{x\to 0} \frac{1}{j!}\,\texttt{D}^j x^k = \lim_{x\to 0} \frac{1}{j!}\,\begin{cases} k^{\underline{j}} x^{k-j} = k\left( k-1 \right) \cdots \left( k-j+1 \right) x^{k-j} , & \quad j \le k , \\ 0, & \quad j > k . \end{cases} \]
If j < k, we have
\[ \lim_{x\to 0} \frac{1}{j!}\,\texttt{D}^j x^k = \lim_{x\to 0} \frac{1}{j!}\,k^{\underline{j}} x^{k-j} = 0 \quad (k - j > 0). \]
Similarly, for j > k, we immediately get zero:
\[ \lim_{x\to 0} \frac{1}{j!}\,\texttt{D}^j x^k = \lim_{x\to 0} \frac{1}{j!}\cdot 0 = 0 . \]
So we need to check the case j = k. In this case, the falling factorial becomes the ordinary factorial, \( \displaystyle k^{\underline{k}} = k! \), so
\[ \lim_{x\to 0} \frac{1}{k!}\,\texttt{D}^k x^k = \lim_{x\to 0} \frac{1}{k!}\,k! = 1 . \]
From Eq.(8.1), it follows that
\[ p(x) = \sum_{k=0}^n \frac{p^{(k)} (0)}{k!}\, x^k . \]
This polynomial expansion is well known in calculus as the Maclaurin formula. (In Mathematica, such expansions are produced by the Series command.)
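For instance, the Maclaurin expansion of the exponential function up to order 4 (our choice of example):
Series[Exp[x], {x, 0, 4}]
1 + x + x^2/2 + x^3/6 + x^4/24 + O[x]^5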

Let us consider the linear form

\[ \psi \left( p(x) \right) = p(0) + p' (0) . \]
This functional vanishes on the intersection of kernels, ker(e0) ∩ ker(e1) ⊂ ker(ψ). Therefore, by Theorem 4, ψ = c₀e0 + c₁e1 for some scalars c₀ and c₁ (here c₀ = c₁ = 1).
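A quick symbolic check on a generic cubic (the coefficient names a0, … , a3 are our choice) shows that ψ(p) = a₀ + a₁ is indeed e0(p) + e1(p):
p[x_] := a0 + a1 x + a2 x^2 + a3 x^3
{p[0] + p'[0], p[0], p'[0]}  (* psi(p), e^0(p), e^1(p) *)
{a0 + a1, a0, a1}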
End of Example 8

Since a finite-dimensional vector space V and its dual V* have the same dimension, they are isomorphic. In general, this isomorphism depends on a choice of bases. However, a canonical isomorphism between V and its dual V* (via the Riesz representation theorem) can be defined if V carries a non-degenerate inner product.

Theorem 5: If v and u are any two distinct vectors of the n-dimensional vector space V, then there exists a linear functional φ on V such that <φ|v> ≠ <φ|u>; or equivalently, to any nonzero vector vV, there corresponds a ψ ∈ V* such that <ψ|v> ≠ 0.
This theorem contains two equivalent statements because we can always consider the difference uv to reduce the first statement to the homogeneous case. So we prove only the latter.

Let β = { e1, e2, … , en } be any basis in the vector space V, and let β* = { e1, e2, … , en } be the dual basis in V*. If \( {\bf v} = \sum_i k_i {\bf e}_i , \) then <ej|v> = kj. Hence, if <ψ|v> = 0 for all ψ ∈ V*, in particular, if <ej|v> = 0 for all j = 1, 2, … , n, then every kj = 0 and v = 0.

Example 9: Let V = ℭ[0, 1] be the set of all continuous functions on the interval [0, 1]. Let f and g be two distinct continuous functions on the interval. Then there exists a point 𝑎 ∈ [0, 1] such that f(𝑎) ≠ g(𝑎). Let us consider a linear functional δ𝑎 that maps these two functions f and g into different real values. Indeed, let
\[ \delta_a (f) = f(a) . \]
Then δ𝑎 is the required functional that distinguishes f and g. As is common in applications, this functional can be written with the aid of the Dirac delta function:
\[ \int_0^1 \delta (x-a)\,f(x)\,{\text d} x = f(a) . \]
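For a concrete illustration (our choices f = sin and 𝑎 = 1/2), Mathematica evaluates such integrals with its built-in DiracDelta:
Integrate[DiracDelta[x - 1/2] Sin[x], {x, 0, 1}]  (* picks out f at x = 1/2 *)
Sin[1/2]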
End of Example 9
From Theorem 5, it follows that linear functionals separate the vectors of any finite-dimensional vector space: if ψ(v) = 0 for every ψ ∈ V*, then v = 0.
Theorem 6: Let V be an n-dimensional space with a basis β = { e1, e2, … , en }. If { k1, k2, … , kn } is any list of n scalars, then there is one and only one linear functional φ on V such that φ(ei) = < φ | ei > = ki for i = 1, 2, … , n.
For the existence statement, we can just construct this linear functional: \[ \varphi ({\bf e}_i ) = \langle \varphi \mid {\bf e}_i \rangle = k_i , \qquad i=1,2,\ldots , n , \] extended by linearity, where β = {e1, e2, … , en} is a basis in V. Every v in V can be written in the form v = s1e1 + s2e2 + ⋯ + snen in one and only one way because β is a basis. If ψ is any linear functional, then
\[ \psi ({\bf v}) = s_1 <\psi\, |\, {\bf e}_1 > + s_2 <\psi\, |\, {\bf e}_2 > + \cdots + s_n <\psi\, |\, {\bf e}_n > . \]
Uniqueness follows from the relation above: if <ψ|ei> = ki, then the value of <ψ|v> is determined for every v from the identity
\[ < \psi \,|\, {\bf v} > = \sum_{i=1}^n s_i k_i . \]
The argument can also be turned around; if we define ψ by
\[ < \psi \,|\, {\bf v} > = s_1 k_1 + s_2 k_2 + \cdots + s_n k_n , \]
then ψ is indeed a linear functional and <ψ|ei> = ki.
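In coordinates, this construction is a single matrix computation. Reusing the basis of Example 5 as the columns of a matrix M and picking arbitrary values k (both names are our choice), the unique functional is the row vector k.M⁻¹, since its pairing with the i-th column of M returns ki:
M = Transpose[{{1, 0, 1}, {1, -1, 1}, {0, -1, 1}}];
k = {2, -1, 5};
phi = k . Inverse[M]
{-6, 3, 8}
phi . # & /@ {{1, 0, 1}, {1, -1, 1}, {0, -1, 1}}  (* recovers the prescribed values *)
{2, -1, 5}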
Example 10: Let 𝑎1, 𝑎2, … , 𝑎n+1 be distinct points (in ℝ or ℂ), and let ℝ≤n[x] be the space of polynomials of degree at most n in variable x. Define n+1 linearly independent functionals
\[ \varphi_k (p) = \langle \varphi_k \, |\, p \rangle = p \left( a_k \right) , \qquad k=1,2,\ldots , n+1 . \]
Let us define n+1 Lagrange polynomials:
\[ p_k (x) = \prod_{j \ne k} \left( x - a_j \right) \Biggm/ \prod_{j \ne k} \left( a_k - a_j \right) , \tag{10.1} \]
where j in the products runs from 1 to n + 1. The set of Lagrange polynomials β = {p1(x), p2(x), … , pn+1(x)} is a basis in the space ℝ≤n[x]. First, they all are polynomials of degree n, so the polynomials (10.1) are elements of the vector space ℝ≤n[x]. Second, the Lagrange polynomials are linearly independent. Indeed, suppose that their linear combination is identically zero (for any x), \[ b_1 p_1 (x) + b_2 p_2 (x) + \cdots + b_{n+1} p_{n+1} (x) = 0 , \tag{10.2} \] with some scalars b1, b2, … , bn+1 ∈ ℝ. The relation (10.2) must hold for any x because it is an identity of functions (two functions f and g are equal if and only if f(x) = g(x) for every entry x from the domain---ℝ in our case). We evaluate the relation (10.2) at every point 𝑎j and get \[ b_1 p_1 (a_j ) + b_2 p_2 (a_j ) + \cdots + b_{n+1} p_{n+1} (a_j) = b_j = 0 , \qquad j=1,2,\ldots , n+1 , \tag{10.3} \] because the numerator of the polynomial pk(x) is a product of n simple factors (x − 𝑎j) that vanish at x = 𝑎j, so that pk(𝑎k) = 1 and pk(𝑎j) = 0 if jk. Hence every coefficient bj in Eq.(10.2) is zero. This proves that the set of Lagrange polynomials is linearly independent. Since there are exactly n + 1 of them in the (n + 1)-dimensional space ℝ≤n[x], the Lagrange polynomials (10.1) constitute a basis of this space.

Hence, the system p1(x), p2(x) , … , pn+1(x) is dual to the system φ1, φ2, … , φn+1 in the sense that ⟨φkpj⟩ = δk,j, the Kronecker delta.

Now we can construct a polynomial that takes specified values at the given list of points 𝑎1, 𝑎2, … , 𝑎n+1:

\[ p \left( a_k \right) = y_k , \qquad k=1,2,\ldots , n+1; \]
where y1, y2,… , yn+1 is a given list of values (not necessarily distinct). The required polynomial
\[ p \left( x \right) = \sum_{k=1}^{n+1} y_k p_k (x) \]
is well known in mathematics as the Lagrange interpolation polynomial. Some nice examples of Lagrange interpolation polynomials can be found on the Wolfram MathWorld page.
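As a sketch of formula (10.1) in Mathematica (the helper name pk and the sample points are our choice), we interpolate the collinear points (1, 2), (2, 3), (4, 5) and compare with the built-in InterpolatingPolynomial; both give x + 1:
a = {1, 2, 4}; y = {2, 3, 5};
pk[k_, x_] := Product[If[j == k, 1, (x - a[[j]])/(a[[k]] - a[[j]])], {j, Length[a]}]  (* formula (10.1) *)
Simplify[Sum[y[[k]] pk[k, x], {k, Length[a]}]]
1 + x
InterpolatingPolynomial[Transpose[{a, y}], x] // Expand
1 + x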

Note that in the output below, each graphic has a different y-axis label: the subscript in Pk(x) records the number of interpolation points used in that panel. As more points are included, the interpolating polynomial approaches the underlying functional form.

MakeBoxes[xlabel, TraditionalForm] := "x"
MakeBoxes[ylabel[n_], TraditionalForm] :=
 RowBox[{SubscriptBox["P", MakeBoxes[n, TraditionalForm]], "(", "x", ")"}]
Lagrange[l_List, max_] := Module[{n = Length[l], i, x},
 Plot[Evaluate[InterpolatingPolynomial[l, x]], {x, 0, max},
  PlotStyle -> Red,
  AxesLabel -> TraditionalForm /@ {xlabel, ylabel[n]},
  Epilog -> {Blue, PointSize[.03], Point /@ Table[{i, l[[i]]}, {i, n}]}]]
pts = {1, 2, 4, 6, 7};
Show[Graphics[GraphicsGrid[
   Partition[Table[Lagrange[Take[pts, i], 8], {i, 2, Length[pts]}], 2]]],
 ImageSize -> 500, PlotLabel -> "Lagrange interpolation polynomials"]
Lagrange interpolation polynomials P₂, P₃, P₄, and P₅.
Note that a small difference in the choice of points makes a considerable difference in the shape of the function.
pts2 = {1, 2, 4, 5, 6};
Show[Graphics[GraphicsGrid[
   Partition[Table[Lagrange[Take[pts2, i], 8], {i, 2, Length[pts2]}], 2]]],
 ImageSize -> 500]
Lagrange interpolation polynomials.
End of Example 10

 
