When we speak about duality, a natural question arises: duality with respect to what? This section gives an introduction to this important topic and mostly considers basic duality concepts with respect to the field of scalars 𝔽. To some extent, a duality can be viewed as an operation that, applied twice, returns an object similar to the original one, analogous to a mirror image. For instance, in real life, when a person is married, the spouse can be considered a dual image with respect to the "marriage operation;" applying the operation again yields a second dual, which may be the same person in a first marriage or a different one after a remarriage. De Morgan's laws provide another example of duality.
Involution operations provide a special type of duality: a map J : V ⇾ V is an involution when J² = J ⚬ J = I, the identity operation. So marriage is not always an involution, but complex conjugation is. When an inner product is employed, it equips the vector space with additional structure that duality should take into account. We discuss duality of Euclidean spaces in Part V.
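For instance, a minimal Mathematica check confirms that conjugating a complex number twice returns the original number:

```mathematica
(* complex conjugation applied twice gives back the original number: an involution *)
z = 3 - 4 I;
Conjugate[Conjugate[z]] == z    (* True *)
```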
Dual Spaces
Recall that the set of all linear transformations from one vector space V into another vector space W is denoted by ℒ(V, W). A particular case arises when we choose W = 𝔽 (regarded as a one-dimensional coordinate vector space over itself); the elements of ℒ(V, 𝔽) are called linear functionals, or linear forms. Linear functionals can be thought of as giving us snapshots of vectors: knowing the value of φ(v) tells us what v looks like from one particular direction or angle (just as a photograph tells us what an object looks like from one side), but not necessarily what it looks like as a whole. Alternatively, linear forms can be thought of as the building blocks that make up more general linear transformations. Indeed, every linear transformation into an n-dimensional vector space can be thought of as being made up of n linear forms (one for each of the n output dimensions).
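As a quick illustration (with an arbitrarily chosen 2 × 3 matrix, not tied to any particular example), the rows of a matrix are exactly the linear forms that produce the components of its output:

```mathematica
(* each row of A is a linear form; applying A stacks their values *)
A = {{1, 2, 3}, {4, 5, 6}};              (* a linear map from R^3 to R^2 *)
x = {7, 8, 9};
A . x == {A[[1]] . x, A[[2]] . x}        (* True *)
```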
Our first example, although it may seem trivial, clarifies the concept of duality. Let V = ℂ be the set of all complex numbers regarded as a vector space over the field ℂ (itself). We consider the involution operation
Next, we consider a particular purchase price function, $p. Its values yield the cost of any fruit inventory x:
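The exact prices do not matter; assuming the cost is the price-weighted sum of the quantities, a Mathematica sketch with made-up unit prices shows how such a functional acts:

```mathematica
(* hypothetical unit prices for three kinds of fruit (illustration only) *)
prices = {1.50, 0.40, 3.25};
inventory = {4, 10, 2};        (* quantities of each fruit on hand *)
cost = prices . inventory      (* 1.50*4 + 0.40*10 + 3.25*2 = 16.5 *)
```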
Let V be ℝ², the Cartesian product of two real lines. We consider a functional T : V ⇾ ℝ defined by
Let ℭ[𝑎, b] be the set of all continuous functions on the closed interval [𝑎, b]. For any function f ∈ ℭ[𝑎, b], we define a functional T by
You can define another linear functional on ℭ[𝑎, b]:
This functional can be generalized to obtain a sampling function. Let { s1, s2, … , sn } ⊂ [𝑎, b] be a specified collection of points in [𝑎, b], and let { k1, k2, … , kn } be a set of scalars. Then the function
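Assuming the sampling functional is the weighted sum T(f) = k1 f(s1) + k2 f(s2) + ⋯ + kn f(sn), a short Mathematica sketch (with arbitrarily chosen points and weights) confirms its linearity:

```mathematica
(* a sampling functional on C[a,b] with three sample points and weights *)
pts = {0, Pi/6, Pi/2};
wts = {2, -1, 3};
T[f_] := wts . (f /@ pts);

(* linearity: T[f + g] equals T[f] + T[g] *)
T[Function[t, Sin[t] + Cos[t]]] == T[Sin] + T[Cos]    (* True *)
```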
Let us consider the set of all n × n matrices over some field 𝔽, which we denote by V = 𝔽n,n. Then taking the trace of a square matrix is a linear functional on V.
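A quick symbolic check of this linearity in Mathematica, with generic 3 × 3 matrices and a scalar c:

```mathematica
(* the trace is additive and homogeneous, hence a linear functional *)
A1 = Array[aa, {3, 3}];  B1 = Array[bb, {3, 3}];
Tr[A1 + B1] == Tr[A1] + Tr[B1]      (* True *)
Simplify[Tr[c A1] == c Tr[A1]]      (* True *)
```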
Let us consider the set ℘[t] of all polynomials in the variable t of finite degree, which is a vector space over a field of constants 𝔽. Let k1, k2, … , kn be any n scalars and let t1, t2, … , tn be any n real numbers. Then the formula
We observe that for any linear functional φ acting on any vector space V,
The set of all linear functionals from a vector space V into the field 𝔽, ℒ(V, 𝔽), deserves a special label.
The dimension of ℒ(U, V) is the product of the dimensions of the vector spaces U and V. Since 𝔽 is a vector space of dimension one over itself, dim V✶ = dim V.
When dealing with linear functionals, it is convenient to follow Paul Dirac (1902--1984) and use his bra-ket notation (introduced in 1939). In quantum mechanics, a vector v is written in abstract ket form as |v>. A linear functional φ is written in bra form as <φ|; it is also frequently called a covector. Then a functional φ acting on a vector v is written as
Note: Although the Dirac symbol \eqref{EqDual.1} is analogous to the inner product, the vectors inside bra-kets are from different spaces! Later in Part 5 you will learn the Riesz representation theorem that establishes an isomorphism between two spaces, V* and V′ (see Dual transformations in Part 5). ▣
It is common to consider kets as column vectors and to identify bras with row vectors. Then you can multiply the 1×n matrix <φ| (a row vector) by the n×1 matrix |v> (a column vector) to obtain a 1×1 matrix, which is isomorphic to a scalar. Strictly speaking, this operation should be written as <φ|·|v> or, dropping the dot for multiplication, <φ| |v>, but the double vertical line is customarily replaced by a single one, giving <φ|v>. Moreover, Dirac's notation allows us to combine bras, kets, and linear operators (matrices in the finite-dimensional case) and interpret them using matrix multiplication:
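A small Mathematica sketch of this matrix interpretation (the particular numbers are arbitrary):

```mathematica
(* a bra as a 1x3 row and a ket as a 3x1 column; their product is a 1x1 matrix *)
bra = {{2, -1, 3}};
ket = {{1}, {0}, {4}};
bra . ket                      (* {{14}} *)

(* a linear operator (here a 3x3 matrix) sandwiched between the bra and the ket *)
Amat = {{1, 0, 0}, {0, 2, 0}, {0, 0, 3}};
bra . Amat . ket               (* {{38}} *)
```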
With addition and scalar multiplication of functionals defined pointwise, the dual space V✶ is itself a vector space over 𝔽:
- φ + ψ = ψ + φ for φ, ψ ∈ V✶;
- φ + (ψ + χ) = (φ + ψ) + χ for φ, ψ, χ ∈ V✶;
- the zero element is the constant zero function;
- the additive inverse of ψ is −ψ ∈ V✶;
- (ks)ψ = k(sψ) for ψ ∈ V✶ and k, s ∈ 𝔽;
- 1ψ = ψ for ψ ∈ V✶;
- k(φ + ψ) = kφ + kψ for φ, ψ ∈ V✶ and k ∈ 𝔽;
- (k + s)ψ = kψ + sψ for k, s ∈ 𝔽 and ψ ∈ V✶.
In the general case n > 1, any linear functional on ℝn has the form \[ \mathbb{R}^n \ni {\bf x} = \left( x_1 , x_2 , \ldots , x_n \right) \,\mapsto \,a_1 x_1 + a_2 x_2 + \cdots + a_n x_n \in \mathbb{R} , \tag{3.1} \] for some real numbers 𝑎1, 𝑎2, … , 𝑎n. Since exactly n real parameters 𝑎1, 𝑎2, … , 𝑎n determine a linear functional on ℝn, the dual space is isomorphic to ℝn.
Hence we are left to prove that any linear functional on ℝn is represented by formula (3.1). Let φ ∈ ℒ(ℝn, ℝ) be an arbitrary linear functional (or linear form). Because of its additivity, \[ \varphi \left( x_1 , x_2 , \ldots , x_n \right) = \varphi \left( x_1 , 0, \ldots , 0 \right) + \varphi \left( 0, x_2 , 0, \ldots , 0 \right) + \cdots + \varphi \left( 0, 0, \ldots , 0, x_n \right) . \] In each component xi ∈ ℝ, the linear form φ acts linearly, so there exists a constant 𝑎i such that \[ \varphi \left( 0, 0, \ldots , 0, x_i , 0 , \ldots , 0 \right) = a_i x_i \] because it is actually a functional acting on the one-dimensional space ℝ. Summing over i recovers formula (3.1), with 𝑎i = φ(ei) being the value of φ on the i-th standard basis vector.
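A small numerical sketch of this argument in Mathematica, using a hypothetical functional (the particular coefficients are arbitrary):

```mathematica
(* a hypothetical linear functional on R^3, just for illustration *)
phi[{x1_, x2_, x3_}] := 2 x1 - x2 + 5 x3;

(* its coefficients in formula (3.1) are its values on the standard basis vectors *)
coeffs = phi /@ IdentityMatrix[3]    (* {2, -1, 5} *)

(* phi coincides with the dot product against coeffs *)
v = {7, -3, 1};
phi[v] == coeffs . v                 (* True *)
```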
So the dual of ℝn is isomorphic to ℝn itself. The same holds true for ℚn or ℂn, of course, as well as for 𝔽n, where 𝔽 is an arbitrary field. Since an n-dimensional space V over a field 𝔽 (we use only ℚ, ℝ, or ℂ) is isomorphic to 𝔽n, and the dual of 𝔽n is isomorphic to 𝔽n, we conclude that the dual V* is isomorphic to V.
Now let us discuss some convenient isomorphic representations of covectors acting on ℝn. We replace the direct product ℝn = ℝ × ℝ × ⋯ × ℝ by its isomorphic image ℝn×1 of column vectors. So instead of n-tuples we consider column vectors, that is, matrices of size n × 1. Then a linear transformation T : ℝn×1 ⇾ ℝm×1 is represented by multiplication by an m × n matrix. Therefore, it is convenient to identify ℝn with the n-dimensional column vector space ℝn×1; then a linear functional on ℝn ≌ ℝn×1 (i.e., a linear transformation φ : ℝn ⇾ ℝ) is given by a 1 × n matrix (a row), which we denote by a bra-vector. The collection of all such rows is isomorphic to ℝn×1 (the isomorphism is given by taking the transpose of a ket-vector).
Remember that identifying the direct product 𝔽n with the column vector space 𝔽n×1 and its dual space with the row vector space 𝔽1×n is just a convenient convention that reminds you of matrix multiplication. However, any matrix is a list of lists, so a 1×1 matrix is a list containing one scalar, which is not the same as that scalar. From a mathematical and computational point of view, a 1 × 1 matrix is not the same as a scalar, but it is isomorphic to it. Mathematica distinguishes these two objects:
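For example (using SameQ, written ===, to test identity):

```mathematica
m = {{2, 3}} . {{5}, {1}}    (* {{13}}, a 1x1 matrix *)
m === 13                     (* False: the 1x1 matrix is not the scalar *)
m[[1, 1]] === 13             (* True: its single entry is the scalar *)
```

Extracting the single entry with [[1, 1]] realizes the isomorphism between 1 × 1 matrices and scalars.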
A definite state of a physical system is represented as a state ket, or state vector. However, we can obtain physical information about the system only upon some measurement, which is achieved by applying a bra vector; physically relevant information thus comes from combining bras with kets. In short, a state ket yields relevant information about the system only upon measurement of observables, which is accomplished by taking products of bras and kets.
We consider here only finite-dimensional spaces because for infinite-dimensional spaces the dual space consists not of all linear functionals but only of the so-called bounded linear functionals. Without giving the precise definition, let us only mention that in the finite-dimensional case (when both the domain and the target space are finite-dimensional) all linear transformations are bounded, so we do not need to mention the word bounded (or continuous).
- Let us consider the set V = ℂ of complex numbers as a real vector space. In this space, addition is defined as usual, and multiplication of a complex number z = x + jy by a real number k is defined as kz = kx + jky. Here j is the imaginary unit vector in the complex plane, so j² = −1. Identify which of the following functions are linear functionals on V = ℂ.
- T(z) = T(x + jy) = y;
- T(z) = T(x + jy) = x + y;
- T(z) = T(x + jy) = x²;
- T(z) = T(x + jy) = x − jy;
- T(z) = T(x + jy) = x² + y²;
- \( T(z) = T(x + {\bf j}y ) = \sqrt{x^2 + y^2} . \)
- Identify which of the following formulas define a linear functional acting on a polynomial x(t).
- \( T(x) = \int_0^2 x(t)\,{\text d} t ; \)
- \( T(x) = \int_0^2 t^3 x(t)\,{\text d} t ; \)
- \( T(x) = \int_0^2 x(t^3 )\,{\text d} t ; \)
- T(x) = x(0);
- \( T(x) =\frac{{\text d} x}{{\text d} t} ; \)
- \( T(x) = \left. \frac{{\text d} x}{{\text d} t} \right\vert_{t=0} . \)
- Let v₁, v₂, … , vₙ₊₁ be a system of vectors in an 𝔽-vector space V such that there exists a dual system v¹, v², … , vⁿ⁺¹ of linear functionals such that
\[
{\bf v}^j \left( {\bf v}_i \right) = \langle {\bf v}^j \, | \, {\bf v}_i \rangle = \delta_{i,j} .
\]
- Show that the system v₁, v₂, … , vₙ₊₁ is linearly independent.
- Show that if the system v₁, v₂, … , vₙ₊₁ is not generating, then the “biorthogonal” system v¹, v², … , vⁿ⁺¹ is not unique.
- Define a non-zero linear functional φ on ℚ³ such that if x₁ = (1, 2, 3) and x₂ = (−1, 1, −2), then 〈φ | x₁〉 = 〈φ | x₂〉 = 0.
- The vectors x₁ = (3, 2, 1), x₂ = (−2, 1, −1), and x₃ = (−1, 3, 2) form a basis in ℝ³. If { e¹, e², e³ } is the dual basis, and if x = (1, 2, 3), find 〈e¹ | x〉 and 〈e² | x〉.
- Prove that if φ is a linear functional on an n-dimensional vector space V, then the set of all those vectors for which 〈φ | x〉 = 0 is a subspace of V; what is the dimension of that subspace?
- If φ(x) = x₁ + 2x₂ + 3x₃ whenever x = (x₁, x₂, x₃) ∈ ℝ³, then φ is a linear functional on ℝ³. Find a basis of the subspace consisting of all those vectors x for which 〈φ | x〉 = 0.
- If R and S are subspaces of a vector space V, and if R ⊂ S, prove that S⁰ ⊂ R⁰.
- Prove that if S is any subset of a finite-dimensional vector space, then S⁰⁰ coincides with the subspace spanned by S.
- Prove that if R and S are subspaces of a finite-dimensional vector space V, then (R ∩ S)⁰ = R⁰ + S⁰ and (R + S)⁰ = R⁰ ∩ S⁰.
- Prove the converse: given any basis β* = { φ1, φ2, … , φn } of V*, we can construct a dual basis { e1, e2, … , en } of V so that the functionals φ1, φ2, … , φn serve as coordinate functions for this basis.
- Consider ℝ³ with basis β = {v₁, v₂, v₃}, where v₁ = (−1, 4, 3), v₂ = (3, 2, −2), v₃ = (3, 2, 0). Find the dual basis β*.