The Wolfram Mathematica notebook that contains the code producing all the Mathematica output on this web page may be downloaded at this link.
Caution: This notebook is meant to be evaluated cell by cell, sequentially, from top to bottom. However, because variable names are re-used in later evaluations, earlier code may no longer render properly once subsequent code has been evaluated. To fix this, return to the first Clear[ ] expression above the expression that no longer works and re-evaluate from that point down to the expression in question.
$Post :=
If[MatrixQ[#1],
MatrixForm[#1], #1] & (* outputs matrices in MatrixForm *)
Remove[ "Global`*"] // Quiet (* remove all variables *)
Recall that V is an 𝔽-vector space over a field 𝔽 (which is either ℚ, ℝ, or ℂ). The set of all linear functionals on V is denoted by V✶ or V′ (the Riesz representation theorem establishes an isomorphism between these two spaces; see Dual transformations in Part 5).
If S is a subset (not necessarily a subspace) of a vector space V, then the annihilator S⁰ is the set of all linear functionals φ such that φ(v) = ⟨φ | v⟩ = 0 for all v ∈ S. When V is a Euclidean space, the annihilator of S is denoted by S⊥.
The term annihilator is quite descriptive, since S⁰ consists of all linear functionals that annihilate (send to 0) every vector in S. For a subset S of V✶, the annihilator S⁰ of S is defined as the set of all v ∈ V such that φ(v) = 0 for all φ ∈ S.
From this definition, we immediately get that {0}⁰ = V✶ and V⁰ = {0}. If V is finite-dimensional and S contains a non-zero vector, then Theorem 5 assures us that S⁰ ≠ V✶.
Lemma 2:
Let V be a vector space over a field 𝔽. Then
For any subset S of V, S⁰ = (span{S})⁰.
For any subsets S₁ and S₂ of V, if S₁ ⊆ S₂, then S₂⁰ ⊆ S₁⁰.
For any subset S of V, S⁰ is a subspace of V✶ and S ⊆ (S⁰)⁰.
Since S ⊆ span(S), any functional that annihilates all of span(S) in particular annihilates S; hence (span(S))⁰ ⊆ S⁰.
Conversely, suppose that φ ∈ S⁰, i.e., φ(s) = 0 for all s ∈ S. For any linear combination c₁v₁ + c₂v₂ + ⋯ + cₙvₙ from span(S), where c₁, c₂, … , cₙ ∈ 𝔽 and v₁, v₂, … , vₙ ∈ S, we have
\begin{align*}
\varphi \left( c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_n {\bf v}_n \right) &= c_1 \varphi \left( {\bf v}_1 \right) + c_2 \varphi \left( {\bf v}_2 \right) + \cdots + c_n \varphi \left( {\bf v}_n \right)
\\
&= 0
\end{align*}
and hence φ ∈ (span(S))0.
Suppose S₁ ⊆ S₂ and φ ∈ S₂⁰. Then for any v ∈ S₁ ⊆ S₂, φ(v) = 0, and consequently φ ∈ S₁⁰.
Since 0(v) = 0 for any v ∈ S, we find that 0 ∈ S⁰, and therefore S⁰ ≠ ∅. Let φ, ψ ∈ S⁰ and c, k ∈ 𝔽. Then for every v ∈ S,
\[
\left( c\varphi + k\psi \right) ({\bf v}) = c\,\varphi ({\bf v}) + k\, \psi ({\bf v}) = c\cdot 0 + k\cdot 0 = 0 ,
\]
which shows that S⁰ is a subspace of V✶. Now let v ∈ S. Then for every linear functional φ ∈ S⁰, v✶(φ) = φ(v) = 0. So v✶ ∈ (S⁰)⁰; since V can be naturally identified with V✶✶, we conclude that v ∈ (S⁰)⁰.
Example 13:
Let S be a set of two vectors in ℝ³
\[
{\bf v} = \left( 3, 2, 1 \right) , \qquad {\bf u} = \left( 1, -2, 1\right) .
\]
An arbitrary linear functional has the form:
\[
\varphi = b_1 {\bf e}^1 + b_2 {\bf e}^2 + b_3 {\bf e}^3 ,
\]
where b₁, b₂, b₃ are real numbers and {e¹, e², e³} is the dual basis to the standard basis {e₁, e₂, e₃} of ℝ³. Applying the functional φ to the vectors v and u, we obtain
\begin{align*}
\varphi ({\bf v}) &= \left( b_1 {\bf e}^1 + b_2 {\bf e}^2 + b_3 {\bf e}^3 \right) \left( 3\,{\bf e}_1 + 2 {\bf e}_2 + {\bf e}_3 \right)
\\
&= 3\,b_1 + 2\,b_2 + b_3 = 0
\end{align*}
and
\begin{align*}
\varphi ({\bf u}) &= \left( b_1 {\bf e}^1 + b_2 {\bf e}^2 + b_3 {\bf e}^3 \right) \left( {\bf e}_1 - 2{\bf e}_2 + {\bf e}_3 \right)
\\
&= b_1 - 2\,b_2 + b_3 = 0 ,
\end{align*}
because
\[
{\bf e}^1 \left( {\bf e}_1 \right) = 1 , \quad {\bf e}^2 \left( {\bf e}_2 \right) = 1 ,\quad {\bf e}^3 \left( {\bf e}_3 \right) = 1 ,
\]
and for the other pairings we have zeroes: eⁱ(eⱼ) = 0 when i ≠ j. Therefore, φ annihilates the vectors v and u if and only if the coefficients bₖ satisfy the system of equations:
\[
\begin{split}
3\,b_1 + 2\,b_2 + b_3 &= 0 ,
\\
b_1 - 2\,b_2 + b_3 &= 0
\end{split}
\tag{12.1}
\]
Subtracting one equation from the other gives
\[
2\,b_1 + 4\, b_2 = 0 \qquad \iff \qquad b_1 = -2\,b_2 .
\]
Correspondingly, from the second equation, b₃ = 2b₂ − b₁ = 4b₂. We verify with Mathematica:
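A short verification along these lines (a sketch; the symbols b1, b2, b3 are ad hoc names standing for the coefficients above):
Solve[{3 b1 + 2 b2 + b3 == 0, b1 - 2 b2 + b3 == 0}, {b1, b3}]
{{b1 -> -2 b2, b3 -> 4 b2}}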
■
End of Example 13
If two linear functionals φ and ψ both annihilate every vector of a set S, then their sum is also an annihilator of S. Also, if we multiply φ by a constant c, then (cφ)(v) = c·φ(v) = c·0 = 0. So the set of all annihilators of an arbitrary set S is a vector space.
Let β = { x₁, x₂, … , xₙ } be a basis of V whose first m elements are in S (so that they constitute a basis of S, being linearly independent). Let β✶ = \( \{ {\bf y}^1 , {\bf y}^2 , \ldots , {\bf y}^n \} \) be the dual basis in
V✶. We denote by R ⊂ V✶ the subspace spanned by \( \{ {\bf y}^{m+1} , {\bf y}^{m+2} , \ldots , {\bf y}^n \} \). Clearly, R has dimension n−m. We are going to show that R = S⁰.
If x is any vector in S, then x is a linear combination of the first m elements of the basis β:
\[
{\bf x} = a_1 {\bf x}_1 + a_2 {\bf x}_2 + \cdots + a_m {\bf x}_m .
\]
Applying any \( {\bf y}^j \) with j > m to such a vector gives \( \langle {\bf y}^j \,|\, {\bf x} \rangle = \sum_{i=1}^m a_i \langle {\bf y}^j \,|\, {\bf x}_i \rangle = 0 \). In other words, each \( {\bf y}^j \), j = m+1, m+2, … , n, is in S⁰. It follows that
\[
R \subset S^0 .
\]
Now we prove the opposite inclusion. Assume that y is any element of S⁰. Then it is a linear combination of vectors from the dual basis β✶:
\[
{\bf y} = \sum_{j=1}^n b_j {\bf y}^j .
\]
Since, by assumption, y is in S⁰, we have, for every i = 1, 2, … , m,
\[
0 = \langle {\bf y} \,|\,{\bf x}_i \rangle = \sum_{j=1}^n b_j \langle {\bf y}^j \,|\,{\bf x}_i \rangle = b_i .
\]
In other words, y is a linear combination of \( {\bf y}^{m+1} , \ldots , {\bf y}^n \). This proves that y is in R and consequently that
\[
S^0 \subset R,
\]
and the theorem follows.
Example 14:
Let V = ℝ2,2 be the vector space of all 2 × 2 matrices with real entries and let W be the subspace
of V consisting of those matrices A ∈ V for which A B = B A, where \( \displaystyle{\bf B} = \begin{bmatrix} \phantom{-}1 & -2 \\ -2 & \phantom{-}4 \end{bmatrix} . \)
Let \( \displaystyle{\bf A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} , \) where 𝑎, b, c, and d are real numbers to be determined. To find the constraints they must satisfy, we calculate
A B − B A:
\[
{\bf A}\,{\bf B} - {\bf B}\,{\bf A} = \begin{bmatrix} 2c - 2b & 3b + 2d - 2a \\ 2a - 3c - 2d & 2b - 2c
\end{bmatrix} .
\]
B = {{1, -2}, {-2, 4}};
A = {{a, b}, {c, d}};
A.B - B.A
{{-2 b + 2 c, -2 a + 3 b + 2 d}, {2 a - 3 c - 2 d, 2 b - 2 c}}
Upon equating A B − B A to zero, we obtain four equations
\[
\begin{split}
2c - 2b &= 0, \\
3b + 2d - 2a &= 0, \\
2a - 3c - 2d &= 0, \\ 2b -2c &= 0,
\end{split}
\]
The first and fourth equations both give c = b; once c = b, the second and third equations coincide, leaving the single condition 2a − 3b − 2d = 0. Choosing b and d as free variables, we find
\[
a = \frac{3}{2}\, b + d , \qquad c = b .
\]
Solve[{-2 b + 2 c == 0, -2 a + 3 b + 2 d == 0, 2 a - 3 c - 2 d == 0}, {a, c}]
{{a -> 1/2 (3 b + 2 d), c -> b}}
Therefore, matrices from W depend on two real parameters:
\[
{\bf A} = \frac{b}{2} \begin{bmatrix} 3 & 2 \\ 2 & 0 \end{bmatrix} + d \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} , \qquad b, d \in \mathbb{R} .
\tag{14.1}
\]
So dim W = 2 whereas dim V = 4.
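A quick Mathematica check that both matrices appearing in Eq.(14.1) indeed commute with B (a sketch; the names M1 and M2 are ours):
B = {{1, -2}, {-2, 4}};
M1 = {{3, 2}, {2, 0}}; M2 = IdentityMatrix[2];
{M1 . B - B . M1, M2 . B - B . M2}
{{{0, 0}, {0, 0}}, {{0, 0}, {0, 0}}}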
In order to determine W⁰, we use the dual basis {E¹, E², E³, E⁴} corresponding to the standard basis {E₁, E₂, E₃, E₄} of ℝ2,2:
\[
{\bf E}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} , \quad {\bf E}_2 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} , \quad
{\bf E}_3 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} , \quad {\bf E}_4 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} .
\]
An arbitrary element of W⁰ can be expanded with respect to this dual basis:
\[
W^{0} \ni \varphi = c_1 {\bf E}^1 + c_2 {\bf E}^2 + c_3 {\bf E}^3 + c_4 {\bf E}^4 ,
\tag{14.2}
\]
where the coefficients are determined from the condition φ(A) = 0 for every A ∈ W. It suffices to require that φ annihilate a basis of W; from Eq.(14.1) we may take
\[
{\bf M}_1 = \begin{bmatrix} 3 & 2 \\ 2 & 0 \end{bmatrix} = 3\,{\bf E}_1 + 2\,{\bf E}_2 + 2\,{\bf E}_3 , \qquad
{\bf M}_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = {\bf E}_1 + {\bf E}_4 .
\]
Since the action of the dual basis elements on the standard basis elements is known (Eⁱ(Eⱼ) = 1 when i = j and 0 otherwise), the conditions φ(M₁) = 0 and φ(M₂) = 0 become
\[
3\,c_1 + 2\,c_2 + 2\,c_3 = 0 , \qquad c_1 + c_4 = 0 .
\tag{14.3}
\]
Since Eq.(14.3) imposes two independent conditions on the four coefficients in Eq.(14.2), we conclude that dim W⁰ = 4 − 2 = 2.
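As a cross-check, Mathematica confirms that the conditions in Eq.(14.3) leave a two-parameter family of functionals (a sketch; c1, …, c4 are ad hoc names):
Solve[{3 c1 + 2 c2 + 2 c3 == 0, c1 + c4 == 0}, {c3, c4}]
{{c3 -> 1/2 (-3 c1 - 2 c2), c4 -> -c1}}
Here c1 and c2 remain free, so W⁰ is two-dimensional.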
■
End of Example 14
Theorem 9:
If S is a subspace of a finite-dimensional vector space V, then S⁰⁰ = S. In general, span{S} ≌ S⁰⁰.
By definition of S⁰, ⟨φ | x⟩ = 0 for all x ∈ S and all φ ∈ S⁰; it follows that S ⊂ S⁰⁰ (under the natural identification of V with V✶✶). The desired conclusion follows from a dimension argument: if S is m-dimensional and dim V = n, then the dimension of S⁰ is n − m, and that of S⁰⁰ is n − (n − m) = m. Hence S = S⁰⁰.
Example 15:
Let V = ℝ≤n[x] be the space of polynomials of degree at most n in the variable x. Consider its subset (which is actually a subspace) S consisting of all polynomials without a constant term, so S = xℝ≤n−1[x]. Its annihilator is spanned by the evaluation functional at the origin:
\[
S^0 = \left\{ \varphi \in V^{\ast} \, : \ \varphi (p) = c\, p(0) \ \mbox{ for some scalar } c \mbox{ and all } p \in V \right\} = \mbox{span} \left\{ \delta_0 \right\} , \qquad \delta_0 (p) = p(0) .
\]
The second annihilator S⁰⁰ consists of all polynomials p such that φ(p) = 0 for every φ ∈ S⁰, that is, p(0) = 0. A polynomial vanishes at the origin precisely when it has no constant term. Therefore, S⁰⁰ = S.
Upon introducing a standard basis in ℝ≤n[x],
\[
{\bf e}_0 =1, \ {\bf e}_1 = x, \ {\bf e}_2 = x^2 , \ \ldots , \ {\bf e}_n = x^n ,
\]
we establish a one-to-one and onto (bijective) linear transformation
f : ℝ≤n[x] → ℝⁿ⁺¹ such that
\[
f \left( a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \right) = \left( a_0 , \ a_1 , \ a_2 , \ \ldots , \ a_n \right) .
\]
Then S corresponds to the following subspace of ℝⁿ⁺¹:
\[
S \cong \left\{ \left( 0 , x_1, x_2 , \ldots , x_n \right) \ : \ x_k \in \mathbb{R}, \quad k=1,2,\ldots , n \right\} .
\]
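For a concrete illustration of this identification in Mathematica (a sketch; the degree n = 3, the helper f, and the sample polynomial are ours, not from the notebook):
n = 3;
f[p_] := PadRight[CoefficientList[p, x], n + 1]
f[x + 2 x^2 - 5 x^3]
{0, 1, 2, -5}
The leading zero reflects the absent constant term, in agreement with the description of S above.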
Similarly, we can consider the subspace Sₙ of polynomials of degree at most n−1. Its annihilator is spanned by the functional that extracts the leading coefficient:
\[
\psi \left( a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1} + a_n x^n \right) = a_n .
\]
In the space of polynomials ℝ≤n[x], this annihilating functional vanishes precisely on those polynomials that lack the term with xⁿ.
■
End of Example 15
Theorem 10:
Let X and Y be subspaces of a vector space V = X ⊕ Y. Then the dual space X* is isomorphic to
Y⁰ and Y* is isomorphic to
X⁰; moreover, V* = X⁰ ⊕ Y⁰.
Let x ∈ X, φ ∈ X*, and x⁰ ∈ X⁰, and similarly let y ∈ Y, ψ ∈ Y*, and y⁰ ∈ Y⁰ denote typical elements. The subspaces X⁰ and Y⁰ intersect only in the zero functional: if κ ∈ X⁰ ∩ Y⁰, so that κ(x) = κ(y) = 0 for all x and y, then
κ(v) = κ(x + y) = 0 for all v ∈ V, i.e., κ = 0.
Moreover, if κ is any functional on V and v = x + y is the decomposition of v ∈ V, we define x⁰(v) = κ(y) and y⁰(v) = κ(x). It is not hard to see that the functions x⁰ and y⁰ thus defined are linear functionals on V, so x⁰ and y⁰ belong to V*; they also belong to X⁰ and Y⁰, respectively. Since κ = x⁰ + y⁰, it follows that V* is indeed the direct sum of X⁰ and Y⁰.
To establish the asserted isomorphisms, we assign to every x⁰ ∈ X⁰ the functional ψ ∈ Y* defined by ψ(y) = x⁰(y) for y ∈ Y. It can be shown that the correspondence x⁰ ↦ ψ is linear, one-to-one, and onto, and is therefore an isomorphism between X⁰ and Y*. The corresponding result for Y⁰ and X* follows by symmetry, interchanging x and y.
Example 16:
Let us find the annihilator W⁰ of the subspace W of ℝ⁴ spanned by v = (4, −3, 2, −1) and u = (1, 2, −1, 3).
The given vectors v and u are linearly independent because neither is a scalar multiple of the other. Therefore, W = span(v, u) is a two-dimensional vector space.
We build the annihilator W⁰ using the dual basis {e¹, e², e³, e⁴} to the standard basis e₁ = (1, 0, 0, 0), e₂ = (0, 1, 0, 0), e₃ = (0, 0, 1, 0), e₄ = (0, 0, 0, 1). Any functional φ from W⁰ can then be expressed as a linear combination of the dual basis vectors:
\[
\varphi = b_1 {\bf e}^1 + b_2 {\bf e}^2 + b_3 {\bf e}^3 + b_4 {\bf e}^4 ,
\]
where scalars b₁, b₂, b₃, b₄ should be chosen so that φ(w) = 0 for any w ∈ W. So
\[
W^{0} \ni \varphi ({\bf w}) = \left( b_1 {\bf e}^1 + b_2 {\bf e}^2 + b_3 {\bf e}^3 + b_4 {\bf e}^4 \right) \left( c_1 {\bf v} + c_2 {\bf u} \right) = 0 ,
\]
where c₁, c₂ ∈ ℝ. Applying φ to basis vectors v and u, we get the system of equations:
\begin{align*}
\varphi ({\bf v}) &= \left( b_1 {\bf e}^1 + b_2 {\bf e}^2 + b_3 {\bf e}^3 + b_4 {\bf e}^4 \right) \left( 4, -3, 2, -1 \right)
\\
&= 4\,b_1 -3\,b_2 + 2\, b_3 - b_4 = 0,
\\
\varphi ({\bf u}) &= \left( b_1 {\bf e}^1 + b_2 {\bf e}^2 + b_3 {\bf e}^3 + b_4 {\bf e}^4 \right) \left( 1, 2, -1, 3 \right)
\\
&= b_1 + 2 b_2 - b_3 + 3\,b_4 = 0 .
\end{align*}
Solving system of linear equations
\[
\begin{split}
4\,b_1 -3\,b_2 + 2\, b_3 - b_4 &= 0,
\\
b_1 + 2 b_2 - b_3 + 3\,b_4 &= 0 ,
\end{split}
\]
we obtain
\[
b_1 = -\frac{1}{11} \left( b_3 + 7\, b_4 \right) , \qquad b_2 = \frac{1}{11} \left( 6\,b_3 - 13\,b_4 \right) ,
\]
where b₃, b₄ are free variables. Hence W0 is a two-dimensional vector space.
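The same linear system can be handed to Mathematica (a sketch; b1, …, b4 are ad hoc names):
Solve[{4 b1 - 3 b2 + 2 b3 - b4 == 0, b1 + 2 b2 - b3 + 3 b4 == 0}, {b1, b2}]
{{b1 -> 1/11 (-b3 - 7 b4), b2 -> 1/11 (6 b3 - 13 b4)}}
The two free parameters b3 and b4 again confirm that the annihilator is two-dimensional.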
■
End of Example 16
Theorem 11:
If S and T are subspaces of a vector space V, then
\[
\left( S \cap T \right)^0 = S^0 + T^0 \qquad \mbox{and} \qquad \left( S + T \right)^0 = S^0 \cap T^0 .
\]
It is clear that φ annihilates S + T if and only if φ annihilates both S and T. Hence (S + T)⁰ = S⁰ ∩ T⁰. More precisely, since S ⊆ S + T and T ⊆ S + T, Lemma 2 gives (S + T)⁰ ⊆ S⁰ and (S + T)⁰ ⊆ T⁰, so (S + T)⁰ ⊆ S⁰ ∩ T⁰.
Now, on the other hand, suppose that ϕ ∈ (S⁰ ∩ T⁰). Then ϕ annihilates both S and T. If v ∈ S + T, then v = s + t, where s ∈ S and t ∈ T. Now ϕ(v) = ϕ(s + t) = ϕ(s) + ϕ(t) = 0 + 0 = 0. This shows that ϕ annihilates S + T, i.e., ϕ ∈ (S + T)⁰. Therefore, (S⁰ ∩ T⁰) ⊆ (S + T)⁰ and hence (S⁰ ∩ T⁰) = (S + T)⁰.
Also, if φ = ψ + χ ∈ S⁰ + T⁰, where ψ ∈ S⁰ and χ ∈ T⁰, then ψ, χ ∈ (S ∩ T)⁰, and so φ ∈ (S ∩ T)⁰. Thus,
\[
S^0 + T^0 \subseteq \left( S \cap T \right)^0 .
\]
For the reverse inclusion, suppose that φ ∈ ( S ∩ T)⁰. Write
\[
V = S' \oplus \left( S \cap T \right) \oplus T' \oplus U ,
\]
where S = S' ⊕ (S ∩ T) and T = T' ⊕ (S ∩ T). Define χ ∈ V* by
\[
\left. \chi \right\vert_{S'} = \phi , \qquad \left. \chi \right\vert_{S \cap T} = \left. \phi \right\vert_{S \cap T} = 0 , \qquad \left. \chi \right\vert_{T'} = 0 , \qquad \left. \chi \right\vert_{U} = \phi ,
\]
and define ψ ∈ V* by
\[
\left. \psi \right\vert_{S'} = 0 , \qquad \left. \psi \right\vert_{S \cap T} = \left. \phi \right\vert_{S \cap T} = 0 , \qquad \left. \psi \right\vert_{T'} = \phi , \qquad \left. \psi \right\vert_{U} = 0 .
\]
It follows that χ ∈ T⁰, ψ ∈ S⁰, and χ + ψ = φ; hence φ ∈ S⁰ + T⁰.
We can also derive the first identity from the second by replacing S with S⁰ and T with T⁰: using the identity S⁰⁰ = S, we get (S⁰ + T⁰)⁰ = (S⁰)⁰ ∩ (T⁰)⁰ = S ∩ T, and taking annihilators once more yields S⁰ + T⁰ = (S ∩ T)⁰.
Example 17:
Let S be the two-dimensional space spanned by the vectors i = (1, 0, 0) and j = (0, 1, 0), and let T be the two-dimensional space spanned by j and k = (0, 0, 1). Their intersection S ∩ T is the one-dimensional space spanned by j. Identifying functionals on ℝ³ with vectors via the dot product, the annihilator of the intersection, (S ∩ T)⁰, consists of all vectors of the form
\[
\left( S \cap T \right)^0 = \left\{ \left( x, 0, z \right) \, : \ x, z \in \mathbb{R} \right\} .
\]
On the other hand, the annihilators of the subspaces S and T are
\begin{align*}
S^0 &= \left\{ \left( 0, 0, z \right) \ : \ z \in \mathbb{R} \right\} ,
\\
T^0 &= \left\{ \left( x, 0, 0 \right) \ : \ x \in \mathbb{R} \right\} .
\end{align*}
Then their sum becomes
\[
S^0 + T^0 = \left\{ \left( x, 0, z \right) \ : \ x,z \in \mathbb{R} \right\} .
\]
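Identifying linear functionals on ℝ³ with row vectors, each annihilator is the null space of the matrix whose rows span the corresponding subspace (a sketch):
NullSpace[{{0, 1, 0}}]            (* annihilator of S ∩ T = span{j} *)
NullSpace[{{1, 0, 0}, {0, 1, 0}}] (* annihilator of S *)
NullSpace[{{0, 1, 0}, {0, 0, 1}}] (* annihilator of T *)
The first call returns a basis of the plane {(x, 0, z)}, the second a multiple of (0, 0, 1), and the third a multiple of (1, 0, 0), in agreement with the formulas above.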
■
End of Example 17
Observe that no dimension argument is employed in the proof of the identity (S + T)⁰ = S⁰ ∩ T⁰; hence it holds for vector spaces of finite or infinite dimension.
Hyperplanes/Hyperspaces
When a vector space is a Cartesian product 𝔽ⁿ = 𝔽 × 𝔽 × ⋯ × 𝔽 of a finite number of copies of the field 𝔽, we can give a "geometrical" interpretation to annihilators. A crucial step is to consider the bilinear functional known as the dot-product,
\[
{\bf a} \bullet {\bf x} = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n ,
\]
which assigns a number to a pair of vectors a = (𝑎₁, 𝑎₂, … , 𝑎ₙ) and x = (x₁, x₂, … , xₙ) from 𝔽ⁿ.
Although dot products and inner products are the topic of Part 5 (Euclidean Spaces), we do not use their properties here, only the convenient notation (the dot "•"). The dot-product provides an example of a linear form on the vector space 𝔽ⁿ, which is also written in bra-ket (Dirac) notation: 〈a | x〉 = a • x. Actually, every linear functional on 𝔽ⁿ is generated by a dot product with some bra-vector a (see Theorem 6 in the section on dual bases).
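In Mathematica the dot product is the built-in Dot operation; a minimal symbolic sketch (the symbols a1, …, x3 are ad hoc):
{a1, a2, a3} . {x1, x2, x3}
a1 x1 + a2 x2 + a3 x3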
With dot-product, we can make the following (geometric) definition.
Two vectors a and x from 𝔽ⁿ are called orthogonal if their dot-product is zero: a • x = 0. The set of all vectors that are orthogonal (or perpendicular) to a ≠ 0 is a vector space, denoted by
\[
{\bf a}^{\perp} = \left\{ {\bf x} \ : \ {\bf a} \bullet {\bf x} = 0 \right\} .
\]
A linear equation a • x = b is a constraint on our choice of a point in 𝔽ⁿ. With such a
constraint, we expect to lose one degree of freedom in the solution set. When choosing a point x = (x₁, x₂, … , xₙ) in 𝔽ⁿ, we have n degrees of freedom---the number of coordinates. By satisfying a single linear equation, we expect to be left with n − 1 degrees of freedom.
A hyperplane in 𝔽ⁿ for n ≥ 2 is the solution set of an n-variable linear equation
a • x = b (or 〈a | x〉 = b) with a ≠ 0. The homogeneous linear equation
𝑎₁x₁ + 𝑎₂x₂ + ⋯ + 𝑎ₙxₙ = 0 defines a hyperspace in 𝔽ⁿ, which is an (n − 1)-dimensional subspace of 𝔽ⁿ.
In the case of a general finite-dimensional vector space V, a maximal proper subspace of V is called a hyperspace of V. A hyperplane is a coset of a hyperspace.
Actually, every hyperspace of 𝔽ⁿ is the solution set of a homogeneous linear equation
\begin{equation} \label{EqHyp.1}
a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = 0 ,
\end{equation}
where the coefficient vector a = (𝑎₁, 𝑎₂, … , 𝑎ₙ) ∈ 𝔽ⁿ is given (known in advance) and x = (x₁, x₂, … , xₙ) is a vector of unknowns. Eq.\eqref{EqHyp.1} has a general solution that fills an (n − 1)-dimensional vector subspace of 𝔽ⁿ. When the field 𝔽 is real, the linear combination on the left-hand side of Eq.\eqref{EqHyp.1} is the dot product a • x.
Lemma 3:
The solution set of one inhomogeneous linear equation over 𝔽 in n unknowns, with nonzero coefficient vector a,
\begin{equation} \label{EqHyp.3}
{\bf a} \bullet {\bf x} = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b \in \mathbb{F} ,
\end{equation}
is a hyperplane in 𝔽n.
Let a = [𝑎₁, 𝑎₂, … , 𝑎ₙ] be the coefficient matrix of a single linear equation in n unknowns
over 𝔽. Notice that a is a nonzero row vector in 𝔽ⁿ, so the corresponding map
Tₐ : 𝔽ⁿ → 𝔽, Tₐ(x) = a • x, is onto. The Rank Theorem (rank(a) + nullity(a) = n) for matrices then ensures that
\[
\mbox{rank}({\bf a}) = \dim \mathbb{F} = 1.
\]
It follows that the nullity of a is n − 1, and thus that any coset of NullSpace(a), in particular the solution set of a • x = b, is a hyperplane in 𝔽ⁿ.
Example 18:
Consider the solution set of x + 2 y − 3 z = 4 in ℝ³. We can take the coordinates y = r
and z = s arbitrarily in ℝ, which forces x = 4 + 3 z − 2 y = 4 + 3 s − 2 r. The solution set is the affine
plane in ℝ³ given by
\[
\left\{ (4 + 3\,s -2\,r, \ r, \ s) \ : \ r,s \in \mathbb{R} \right\} = \left\{ (4,0,0) + r\,(-2,1,0) + s\,(3,0,1) \ : \ r,s \in \mathbb{R}\right\} .
\]
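The same parametrization can be checked in Mathematica (a sketch):
Reduce[x + 2 y - 3 z == 4, x]
x == 4 - 2 y + 3 z
NullSpace[{{1, 2, -3}}] (* returns a basis of the parallel hyperspace x + 2 y - 3 z = 0 *)
The null space call produces two vectors spanning the plane through the origin parallel to the solution set.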
■
End of Example 18
If a vector x = (x₁, x₂, … , xₙ) satisfies the equation a • x = 0, it is orthogonal (perpendicular) to the vector a = (𝑎₁, 𝑎₂, … , 𝑎ₙ) (by definition, a ⊥ x iff a • x = 0), and therefore it is perpendicular to all scalar multiples of a. So every solution of the homogeneous linear equation a • x = 0 is orthogonal to span(a). We can reformulate the general solution set of the homogeneous equation \eqref{EqHyp.1} as
\[
{\bf a}^{\perp} = \left\{ {\bf x} \in \mathbb{F}^n \ : \ {\bf a} \bullet {\bf x} = 0 \right\} = \left( \mbox{span} \{ {\bf a} \} \right)^0 .
\]
Every nonzero vector from span{a} is a normal vector to the hyperspace a • x = 0. Therefore, solving the linear homogeneous equation a • x = 0 corresponds to finding all vectors x
orthogonal to a in 𝔽n.
Theorem 12:
If ϕ is a nonzero linear functional on a vector space V, then the null space of ϕ is a hyperspace of V. Conversely, every hyperspace of V is the null space of a (not unique) nonzero linear functional on V.
Let φ be a nonzero linear functional on the vector space V and let W be the null space of φ. We have to show that W is a hyperspace of V. Since φ is nonzero, W ≠ V; also
W ≠ {0} when dim V > 1 and V is finite-dimensional. This shows that W is a proper subspace of V.
According to the rank-nullity theorem, the dimension of the domain of a linear transformation φ is the sum of the rank of φ (the dimension of the image of φ) and the nullity of φ (the dimension of the kernel of φ). Since the image of the nonzero functional φ is the field 𝔽 itself, a one-dimensional space, we conclude that the kernel (null space) of φ has dimension n − 1, so it is a hyperspace.
Conversely, suppose that W is a hyperspace of V, so {0} ≠ W ≠ V. We have to construct a nonzero linear functional ψ on V whose null space is W. Choose a basis u₁, u₂, … , uₙ₋₁ of W (the dimension of a hyperspace is 1 less than the dimension of V) and let u ≠ 0 be a vector in V that does not belong to W; then {u₁, … , uₙ₋₁, u} is a basis of V. If we set ψ(u) = 1 and let ψ annihilate every basis element uⱼ (j = 1, 2, … , n−1), then ψ is the required linear functional.
Example 19:
Let ϕ be a linear functional on ℝ² defined by
\[
\phi ({\bf x}) = \phi (x, y) = 2\,x+y , \qquad x, y \in \mathbb{R} .
\]
The corresponding homogeneous relation
\[
\phi (x, y) = 0 \qquad \iff \qquad 2\,x+y = 0
\tag{19.1}
\]
defines a line in ℝ². Rewriting Eq.(19.1) as a dot-product
\[
{\bf a} \bullet {\bf x} = 0 \qquad \iff \qquad 2\,x_1 + x_2 = 0
\]
for a = (2, 1) and x = (x₁, x₂), we see that the vector a is perpendicular to the line (19.1).
line = Graphics[{Black, Thick, Line[{{-0.5, 1}, {0.5, -1}}]}]; (* the line 2 x + y = 0 *)
n = Graphics[{Blue, Thickness[0.01], Arrowheads[0.1],
Arrow[{{0, 0}, {1, 0.5}}]}]; (* normal vector along a = (2, 1), scaled *)
hor = Graphics[{Dashed, Line[{{-1, 0}, {1, 0}}]}];
ver = Graphics[{Dashed, Line[{{0, -1}, {0, 1}}]}];
txt1 = Graphics[
Text[Style["2 x + y = 0", 25, Italic, Black], {-0.55, 0.75}]];
txt2 = Graphics[Text[Style["n", 25, Bold, Black], {1.0, 0.65}]];
Show[hor, ver, n, line, txt1, txt2]
This is a wonderful benefit of numeric/geometric duality: if we have
a homogeneous linear equation in two variables, we can visualize its
solution as a line.
We can do the same job algebraically too. If we solve 2 x + y = 0 by row-
reducing the one-rowed matrix [2 1] ∼ [1, 1/2], we get one free column, and
hence one homogeneous generator h = [−1, 2]. Our solution mapping consequently takes the form
\[
H(t) = t\,{\bf h} = t\left[ -1, \ 2 \right] .
\]
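These two steps can be reproduced in Mathematica (a sketch):
RowReduce[{{2, 1}}]
{{1, 1/2}}
NullSpace[{{2, 1}}]
{{-1, 2}}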
Now consider an inhomogeneous two-variable equation
\[
{\bf a} \bullet {\bf x} = b \qquad (b \ne 0) .
\tag{19.2}
\]
Solve this by row-reducing the one-rowed augmented matrix
\[
\left[ 2 \ 1 \vert \ b \right] .
\]
In our example, a = [2, 1] ≠ (0, 0), so we must get a pivot when we row-reduce. That
leaves one free column, producing one homogeneous generator h that we found previously.
The general solution of the inhomogeneous equation a • x = b therefore takes the form
\[
{\bf x}(t) = {\bf x}_p + {\bf x}_h = {\bf x}_p + t\,{\bf h} ,
\]
where xₚ is a particular solution of the nonhomogeneous equation (19.2) and xₕ is the general solution of the corresponding homogeneous equation. Here h = [−1, 2] is the homogeneous generator for the equation (19.1).
Then the corresponding dot product of these two vectors, a (written in row form) and x (written in column form), is the single entry of a 1 × 1 matrix, which you need to extract.
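For instance, in Mathematica (a sketch; x1 and x2 are ad hoc symbols):
a = {{2, 1}};       (* bra-vector a as a 1 x 2 row matrix *)
x = {{x1}, {x2}};   (* ket-vector x as a 2 x 1 column matrix *)
a . x               (* the 1 x 1 matrix {{2 x1 + x2}} *)
(a . x)[[1, 1]]     (* extracts the scalar 2 x1 + x2 *)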