Dual Spaces


The Dual of a Dual Space

In several areas of mathematics, isomorphism appears as a very general concept. The word derives from the Greek "iso," meaning "equal," and "morphosis," meaning "form" or "shape." Informally, an isomorphism is a map that preserves sets and the relations among their elements. When such a map or correspondence can be established with no choices involved, it is called a canonical isomorphism.

The construction of a dual space may be iterated: we can form the dual space \( \displaystyle \left( V^{\ast} \right)^{\ast} \) of the dual space V*; for simplicity of notation, we shall denote it by V**.

We prefer to speak of a function f (not mentioning its variable), reserving f(x) for the image of x under this map. This point of view is very convenient when dealing with linear functionals. For example, the Dirac delta functional

\[ \delta_a \,:\ f \,\mapsto f(a) \]
is a linear functional on a set of test functions. The second dual V** is canonically (i.e., in a natural way) isomorphic to V. This means that any vector v ∈ V canonically defines a linear functional δv on V* (i.e., an element of the second dual V**) by the rule:
\begin{equation} \label{EqDual.3} \delta_{\bf v} (\varphi ) = \varphi ({\bf v}) , \qquad \forall \varphi \in V^{\ast} , \qquad {\bf v} \in V . \end{equation}
It is easy to check that the mapping T : V → V**, Tv = δv, is a linear transformation. Indeed, T(v + u) = δv+u leads to
\[ \delta_{{\bf v} + {\bf u}} \left( \varphi \right) = \varphi \left( {\bf v} + {\bf u} \right) = \varphi \left( {\bf v} \right) + \varphi \left( {\bf u} \right) . \]
Similarly, for a scalar k ∈ 𝔽, we have T(kv) = δkv, and for any φ∈V*,
\[ \delta_{k{\bf v}} \left( \varphi \right) = \varphi \left( k{\bf v}\right) = k\, \varphi \left( {\bf v}\right) . \]
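These identities can also be confirmed numerically. Here is a minimal Mathematica sketch for V = ℝ³, in which a functional φ is represented by a row vector acting through the dot product; the names deltaV, v, u, and phi are ours, introduced only for this illustration.

(* Sketch for V = R^3: a functional phi is a row vector acting by the *)
(* dot product, and deltaV[v] is evaluation at v, an element of V**. *)
deltaV[v_][phi_] := phi . v; (* deltaV[v][phi] = phi(v) *)
SeedRandom[1];
{v, u, phi} = RandomInteger[{-9, 9}, {3, 3}];
k = 7;
deltaV[v + u][phi] == deltaV[v][phi] + deltaV[u][phi] (* True *)
deltaV[k v][phi] == k deltaV[v][phi] (* True *)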

Note that the null space of T is ker(T) = {0}, because the zero vector is the only vector annihilated by every linear functional (Lemma 1). Finally, letting the element v vary over the 𝔽-vector space V, we obtain a linear map

\begin{equation} \label{EqDual.4} \delta \,:\,V \,\to\, V^{\ast\ast} , \qquad {\bf v} \mapsto \delta_{\bf v} . \end{equation}
Theorem 7: The linear map δ : V → V**, defined by v ↦ δv (where δv : φ ↦ φ(v)), is injective. When V is finite dimensional, it is a canonical isomorphism.
Indeed, δv = 0 means that φ(v) = δv(φ) = 0 for all linear functionals φ, which implies v = 0 by Lemma 1. Hence ker δ = {0}, so the linear map \eqref{EqDual.4} is injective. In the finite-dimensional case, we have
\[ \dim\,V = \dim\,V^{\ast} = \dim\,V^{\ast\ast} \]
and δ is automatically surjective.
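In coordinates, the surjectivity is transparent. If e₁, …, eₙ is a basis of V with dual basis e¹, …, eⁿ, then
\[ \delta_{{\bf e}_i} \left( e^j \right) = e^j \left( {\bf e}_i \right) = \begin{cases} 1, & i = j, \\ 0, & i \neq j , \end{cases} \]
so δ carries the basis {eᵢ} of V to the basis of V** dual to {eʲ}; a linear map sending a basis onto a basis is an isomorphism.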
Example 11: The "dual" of a vector is the linear transformation that it encodes, sending the space onto the real or complex numbers. Another way of expressing the same concept: the "dual" of a linear transformation from a space to one dimension (ℂ or ℝ) is a certain vector in that space. A non-mathematician might, as a result, ascribe somewhat magical properties to the dot product, because that is just what the dot product does: it locates the answer to a vector question on the real number line or in the complex plane. Mathematicians and physicists have long studied the notion of duality and have given various interpretations of it. One of these is that when two different mathematical abstractions share some properties, they can be endowed with a correspondence that is very useful in practice. More sophisticated transformations, such as the Fourier transform, permit one to make theoretical progress on otherwise intractable problems in the transform space, achieving a result not otherwise attainable, and then transform that result back for a practical answer that was out of reach in the original setting.

The figure below is a simple example of duality and of how one may move easily from one object to the other and back. Notice the commonality of six and eight in each figure: each object has a set of six somethings and a set of eight somethings, but they are not the same things. For the cube they are six faces and eight vertices; count the equivalents for the octahedron. Note that each has the same number of edges (twelve). There is a mathematical relation between the cube and the octahedron which permits one to "convert" one shape into the other; reversing the process returns the original shape. Naturally, a scale factor is involved, which explains why the outside cube is much larger than the one inside. The construction links the centers of the faces of one solid, thereby creating the other.

cube = Graphics3D[Cube[], PlotLabel -> "Cube"];
octahedron = Show[PolyhedronData[{"Dipyramid", 4}], PlotLabel -> "Octahedron"];
dualPolyhedron[M_] := RegionBoundary[ConvexHullMesh[RegionCentroid /@ MeshPrimitives[M, {2}]]];
extractEdges[M_, r_] := MeshPrimitives[M, {1}] /. Line[x___] :> Tube[x, r];
M0 = RegionBoundary[PolyhedronData["Octahedron", "MeshRegion"]];
M1 = dualPolyhedron[M0];
M2 = dualPolyhedron[M1];
M3 = dualPolyhedron[M2];
r = 0.015/2;
all3 = Graphics3D[{Gray, Specularity[White, 30], (*extractEdges[M0,r],*) extractEdges[M1, r], Blue, Specularity[White, 30], extractEdges[M2, r], Red, Specularity[White, 30], extractEdges[M3, r]}, Lighting -> "Neutral", Boxed -> False, ViewPoint -> {3.143191699236158, 1.2515098689203545`, -0.06378863415898628}, ViewVertical -> {0.5617332461853343, 0.22780339858369236`, 0.7953372691656077}, PlotLabel -> "Duality of cube and octahedron"];
GraphicsRow[{cube, octahedron, all3}]

Figure: Cube (left), octahedron (center), and duality of cube and octahedron (right).

The preceding geometric duality construction can be made for all Platonic solids. Here is a list of these regular polyhedra.

                Vertices (v)   Edges (e)   Faces (f)   f - e + v
tetrahedron           4            6           4           2
cube                  8           12           6           2
octahedron            6           12           8           2
dodecahedron         20           30          12           2
icosahedron          12           30          20           2
Grid[{{"", "Vertices (v)", "Edges (e)", "Faces (f)", "f-e+v"}, {"tetrahedron", "4", "6", "4", "2"}, {"cube", "8", "12", "6", "2"}, {"octahedron", "6", "12", "8", "2"}, {"dodecahedron", "20", "30", "12", "2"}, {"icosahedron", "12", "30", "20", "2"}}, Frame -> All]

The exchange of the numbers of faces (f) and vertices (v)

\[ f \quad \longleftrightarrow \quad v \]
(leaving the number e of edges invariant) produces a symmetry
\[ \begin{array}{lcr} \mbox{tetrahedron} & \quad \longleftrightarrow \quad & \mbox{tetrahedron} \\ \mbox{cube} & \quad \longleftrightarrow \quad & \mbox{octahedron} \\ \mbox{icosahedron} & \quad \longleftrightarrow \quad & \mbox{dodecahedron} \end{array} \]
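The exchange can be verified with the dualPolyhedron function defined above: the dual of the cube (obtained by joining the centers of its six faces) has the octahedron's counts.

(* The dual of the cube is an octahedron: 8 faces and 6 vertices. *)
cubeMesh = RegionBoundary[PolyhedronData["Cube", "MeshRegion"]];
dualOfCube = dualPolyhedron[cubeMesh];
{MeshCellCount[dualOfCube, 2], MeshCellCount[dualOfCube, 0]}
(* {8, 6} *)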
End of Example 11

Since dimV** = dimV* = dimV, the condition ker(T) = {0} implies that T is an invertible transformation (an isomorphism). The isomorphism T is very natural because it was defined without using a basis, and so it does not depend on the choice of basis. Informally, we say that V** is canonically isomorphic to V; the rigorous statement is that the map T described above (which we consider natural and canonical) is an isomorphism from V to V**.

When we identified the dual space V* with V, we did so by picking a special basis (the dual basis); therefore, the isomorphism from V to V* is not canonical, because it depends on the choice of basis. Since the dual space is intuitively the space of "rulers" (or measuring instruments) for our vector space, there is no canonical relationship between rulers and vectors: measurements require a scaling, and there is no canonical way to choose one scaling for the space.

On the other hand, the isomorphism \eqref{EqDual.4} between the initial vector space V and its double dual V** is canonical, since the mapping \eqref{EqDual.4} does not depend on a choice of basis. If we wish to calibrate our measuring instruments, the canonical way to do so is to let them act on the very vectors they are supposed to measure.

Example 12: A typical linear programming problem, first formulated by the Russian mathematician Leonid Kantorovich in 1939, asks us to minimize the cost function c • x subject to the constraints:
\[ \begin{split} {\bf x} = ( x_1 , x_2 , \ldots , x_n ) \geqslant 0 \qquad \iff \qquad x_i \geqslant 0 , \\ {\bf A}\,{\bf x} \geqslant {\bf b} , \end{split} \tag{12.1} \]
where A is an m × n matrix with real coefficients and b ∈ ℝm×1 ≌ ℝm. As usual, the cost function cx = c1x1 + ⋯ + cnxn is a dot product of two vectors. Any vector satisfying the constraint conditions is called feasible.

There are two well-known methods to solve this optimization problem: the simplex algorithm (invented by George Dantzig in the 1940s) and Karmarkar's method (1984).

Theorem A: If the feasible region is unbounded and the coefficients of the objective function are positive, then the minimum value of the objective function exists. Moreover, the minimum value of the objective function occurs at one (or more) of the corner points at the boundary of the feasible region.
The dual problem starts from the same A, b, and c, and reverses everything. In the primal problem, c appears in the cost function and b in the constraints; in the dual, b and c switch roles. The dual unknown y ∈ ℝ1×m is a row vector with m components, and the feasible set is defined by y A ≤ c instead of A x ≥ b. In short, the dual of a minimum problem is a maximum problem. Now y ≥ 0:
\[ {\bf y}\,{\bf A} \leqslant {\bf c} \qquad \iff \qquad {\bf A}^{\mathrm T} {\bf y}^{\mathrm T} \leqslant {\bf c}^{\mathrm T} . \tag{12.2} \]
The dual of this problem is the original minimum problem. The set of all solutions y ∈ ℝ1×m ≌ ℝm satisfying (12.2) is called a polyhedron. There is a complete symmetry between the primal and dual problems, and the simplex method applies equally well to a maximization; in any case, both problems get solved at once.

Theorem B: When both optimization problems have feasible vectors, they have optimal solutions x* and y*, respectively, and the minimum cost c • x* equals the maximum income y* • b.

If optimal vectors do not exist, there are two possibilities: Either both feasible sets are empty, or one is empty and the other problem is unbounded (the maximum is +∞ or the minimum is −∞).
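The easy half of Theorem B, the inequality y • b ≤ c • x for any pair of feasible vectors (known as weak duality), follows from a one-line computation:
\[ {\bf y} \bullet {\bf b} \leqslant {\bf y} \left( {\bf A}\,{\bf x} \right) = \left( {\bf y}\,{\bf A} \right) {\bf x} \leqslant {\bf c} \bullet {\bf x} , \]
where the first inequality uses A x ≥ b together with y ≥ 0, and the second uses y A ≤ c together with x ≥ 0. Equality of the two optimal values is the hard half (strong duality).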

Application to Economics and Finance


The central question of economics is how to allocate scarce resources amongst alternative ends. This turns into a series of sub-issues:

  1. How much of what to apply and when?
  2. How does the application of any particular factor/resource change the output in both absolute and relative terms?
  3. What is the best combination of resources that produces the most efficient result?
The analytical technology required to answer these questions involves a process known as "optimization," which, in practical terms, means finding the maximum or minimum of a function. This can require intertwined functions: for instance, the optimization of net income (revenue minus cost) involves the simultaneous maximization of revenue and minimization of cost.

Life involves trade-offs. Constraints force trade-offs into the picture. You can't go to the movie and the ballgame on the same night. One must make choices. A rational life means making choices which optimize your time/happiness/profits/career/retirement savings. The ideas of maximization and minimization become more interesting in the presence of constraints. Mathematica has very highly evolved algorithms for solving these kinds of problems.

The Primal Problem


It all begins with stating the goal as an "objective function." This is the mathematical expression of what you wish to achieve. So here we have some x and y (inputs, products, choices) that must be combined in a certain way to get us where we want to be.

In order to illustrate the concepts, we consider a simple primal minimization problem:

minimize a cost function \( \displaystyle \qquad f(x_1 , x_2 ) = 120\,x_1 + 100\, x_2 \qquad \iff \qquad f({\bf x}) = {\bf c} \bullet {\bf x} , \quad \) where c = (120, 100) and x = (x₁, x₂),

subject to constraints:

\[ \begin{split} g_1 (x_1 , x_2 ) &= 2\, x_1 + 2\,x_2 \geqslant 8 , \\ g_2 (x_1 , x_2 ) &= 5\,x_1 + 3\,x_2 \geqslant 15, \\ g_3 (x_1 , x_2 ) &= 3\,x_1 + 8\,x_2 \geqslant 17. \end{split} \]
It is convenient to rewrite these constraints in vector form:
\[ {\bf A}\,{\bf x} \geqslant {\bf b} , \qquad {\bf A} = \begin{bmatrix} 2 & 2 \\ 5 & 3 \\ 3 & 8 \end{bmatrix}, \quad {\bf b} = \begin{pmatrix} 8 \\ 15 \\ 17 \end{pmatrix} , \quad {\bf x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} . \]
(*Objective function*)
f1[x1_, x2_] := 120 x1 + 100 x2;
(*Constraints*)
g1[x1_, x2_] := 2 x1 + 2 x2 >= 8
g2[x1_, x2_] := 5 x1 + 3 x2 >= 15
g3[x1_, x2_] := 3 x1 + 8 x2 >= 17
A = {{2, 2}, {5, 3}, {3, 8}};
MatrixForm[A]
\( \displaystyle \begin{pmatrix} 2 & 2 \\ 5 & 3 \\ 3 & 8 \end{pmatrix} \)
b = {8, 15, 17};
MatrixForm[b]
\( \displaystyle \begin{pmatrix} 8 \\ 15 \\ 17 \end{pmatrix} \)
x = {x1, x2}; MatrixForm[x]
\( \displaystyle \begin{pmatrix} x1 \\ x2 \end{pmatrix} \)

First, we find the solution of the formulated primal minimization problem geometrically, and then solve it analytically. We begin by plotting the constraint lines:

Figure 12.1: Constraint lines.
Clear[x, y]; (* x was assigned a value above, so clear it before using it as a plot variable *)
cntrPlt1 = ContourPlot[{2 x + 2 y == 8, 5 x + 3 y == 15, 3*x + 8*y == 17}, {x, 0, 6}, {y, 0, 5}, ContourStyle -> Thickness[0.01]];
Labeled[%, "Figure 12.1: Constraint Lines"]

Then we cut off parts of these lines that do not define the feasible region.

Figure 12.2: Active constraint lines.
pt1 = Solve[{2 x1 + 2 x2 == 8, 5 x1 + 3 x2 == 15}, {x1, x2}][[1, All, 2]];
pt2 = Solve[{2 x1 + 2 x2 == 8, 3 x1 + 8 x2 == 17}, {x1, x2}][[1, All, 2]];
pt3 = {Solve[3 x1 + 8 x2 == 17 /. x2 -> 0, x1][[1, 1, 2]], 0};
pt4 = {0, 5};
pt5 = {5, .25};
pts1 = Show[Graphics[{Red, PointSize -> Medium, Point[#]}] & /@ {pt1, pt2, pt3, pt4}];
arr1 = Graphics[{Arrow[{{3.5, 2.5}, {1, 3.5}}], Arrow[{{3.5, 2.5}, {2.5, 1.55}}], Arrow[{{3.5, 2.5}, {4, .75}}], Arrow[{{.95, .75}, {1.5, 1.5}}], Arrow[{{.95, .75}, {1, 2.9}}], Arrow[{{.95, .75}, {2.3, 1}}], Arrow[{{.95, .75}, {3.4, .5}}], Text["Inactive\nconstraints", {.95, .5}], Text["Active\nconstraints", {3.95, 2.5}]}];
sh1 = Labeled[Show[cntrPlt1, pts1, arr1], "Figure 12.2: Active Constraint Lines"]

This allows us to plot the feasible region, which displays the area bounded by all of the constraints. Also shown are four red points that represent so-called "corner solutions," where optimal values may be found. Note that below we have added two more constraints: both x₁ and x₂ must be non-negative. This example is considered "stylized" in that it simplifies the context for teaching purposes.

Figure 12.3: Feasible region.
regPlt1 = RegionPlot[{2 x1 + 2 x2 >= 8 && 5 x1 + 3 x2 >= 15 && 3*x1 + 8*x2 >= 17 && x1 >= 0 && x2 >= 0}, {x1, 0, 6}, {x2, 0, 5}]; Labeled[Show[%, pts1], "Figure 12.3: Feasible Region"]

The simplex method (Dantzig, 1947) finds the optimal solution, which must be at one of the corners of the feasible region. Corners of the feasible region are shown above as red points. Sections of the lines that pass between two corner points are known as "active constraints," as they form boundaries of the feasible region. Portions of the lines that are not boundaries of the feasible region are "inactive constraints," since values outside the feasible region cannot yield an optimal solution.
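Since the optimum must occur at a corner, one quick check (our own sketch, using the corner coordinates found above) is to evaluate the objective at every corner:

(* Evaluate the objective at the four corners of the feasible region; *)
(* the smallest value, 430, identifies the optimum at {3/2, 5/2}. *)
f1[x1_, x2_] := 120 x1 + 100 x2;
corners = {{0, 5}, {3/2, 5/2}, {3, 1}, {17/3, 0}};
{#, f1 @@ #} & /@ corners
(* {{{0, 5}, 500}, {{3/2, 5/2}, 430}, {{3, 1}, 460}, {{17/3, 0}, 680}} *)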

Mathematica has several ways to find the minimum of this function; here are two:

(*Objective function*)
f1[x_, y_] := 120 x + 100 y;
g1[x1_, x2_] := 2 x1 + 2 x2 >= 8;
g2[x1_, x2_] := 5 x1 + 3 x2 >= 15;
g3[x1_, x2_] = 3*x1 + 8*x2 >= 17;
FindMinimum[{f1[x1, x2], g1[x1, x2], g2[x1, x2], g3[x1,x2], x1 >= 0, x2 >= 0}, {x1, x2}]
{430., {x1 -> 1.5, x2 -> 2.5}}

soln1 = FindMinimum[{f1[x1, x2], g1[x1, x2], g2[x1, x2], g3[x1, x2], x1 >= 0, x2 >= 0}, {x1, x2}]
{430., {x1 -> 1.5, x2 -> 2.5}}
f1[x1, x2] /. soln1[[2]]
430.
and using the LinearOptimization command:
soln2 = LinearOptimization[ f1[x1, x2], {g1[x1, x2], g2[x1, x2], g3[x1, x2], x1 >= 0, x2 >= 0}, {x1, x2}, Method -> "Simplex"]
{x1 -> 3/2, x2 -> 5/2}
f1[x1, x2] /. soln2
430
Therefore, we know that the minimum is attained at the point
\[ x_1 = \frac{3}{2}, \quad x_2 = \frac{5}{2}, \qquad \mbox{the minimum cost:}\quad 430 . \]
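The minimizing corner is exactly where constraints g1 and g2 are active; solving that 2 × 2 linear system reproduces the answer exactly:

(* The binding constraints at the optimum are g1 and g2; *)
(* their intersection is the minimizer. *)
Solve[{2 x1 + 2 x2 == 8, 5 x1 + 3 x2 == 15}, {x1, x2}]
(* {{x1 -> 3/2, x2 -> 5/2}} *)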
The objective function f(x₁, x₂), when set equal to a constant, defines a line; choosing several constants produces a family of parallel lines, which we then plot. The dashed line for the optimal value of the objective function passes through the optimal point, shown in purple.
pts2 = Show[Graphics[{Red, PointSize -> Large, Point[#]}] & /@ {(*pt1,*)pt2, pt3, pt4}];
optMin = Show[Graphics[{Purple, PointSize[.05], Point[pt1]}]];
cntrPlt2 = ContourPlot[{f1[x1, x2] == 430, f1[x1, x2] == 660, f1[x1, x2] == 880}, {x1, 0, 6}, {x2, 0, 5}, ContourStyle -> {{Dashed, Purple}, Red, Green}, Epilog -> {Rotate[{Purple, Text["f(x1,x2)=430", {1.8, 1.9}], Red, Text["f(x1,x2)=660", {2.7, 3.5}], Red, Text["f(x1,x2)=880", {2.5, 4.9}]}, -50 Degree]}];
cntrPlt2a = ContourPlot[{f1[x1, x2] == 430, f1[x1, x2] == 660, f1[x1, x2] == 880}, {x1, 0, 6}, {x2, 0, 5}, ContourStyle -> {Purple, Red, Green}, Epilog -> {Rotate[{Purple, Text["f(x1,x2)=430", {1.8, 1.9}], Red, Text["f(x1,x2)=660", {2.7, 3.5}], Red, Text["f(x1,x2)=880", {2.5, 4.9}]}, -50 Degree]}];
Labeled[cntrPlt2a, "Figure 12.4\nSeveral graphs of the Objective Function"];
Labeled[Show[cntrPlt2, regPlt1, pts2, optMin], "Figure 12.5\nMinimizing solution"];
GraphicsRow[{%%, %}]

Figure 12.4: Several graphs of the objective function.
Figure 12.5: Minimizing solution.

 

The Dual Problem


Now we formulate the dual problem: maximize the objective function f(y₁, y₂, y₃) = 8 y₁ + 15 y₂ + 17 y₃ (note that this is a function of three variables) subject to only two constraints:

\[ \begin{split} g_1 (y_1 , y_2 , y_3 ) &= 2\, y_1 + 5\,y_2 + 3\, y_3 \leqslant 120 , \\ g_2 (y_1 , y_2 , y_3 ) &= 2\,y_1 + 3\,y_2 + 8\, y_3 \leqslant 100. \end{split} \]
It is convenient to rewrite these constraints in vector form:
\[ {\bf y}\,{\bf A} \leqslant {\bf c} , \qquad {\bf A} = \begin{bmatrix} 2 & 2 \\ 5 & 3 \\ 3 & 8 \end{bmatrix}, \quad {\bf c} = \left[ 120 , \ 100 \right] , \quad {\bf y} = \left[ y_1 , y_2 , y_3 \right] . \]
We solve the dual problem with the aid of Mathematica.
(* Objective function*)
f2[x_, y_, z_] := 8 x + 15 y + 17 z;
gg1[y1_, y2_, y3_] := 2 y1 + 5 y2 + 3 y3 <= 120;
gg2[y1_, y2_, y3_] := 2 y1 + 3 y2 + 8 y3 <= 100;
soln3 = FindMaximum[{f2[y1, y2, y3], gg1[y1, y2, y3], gg2[y1, y2, y3], y1 >= 0, y2 >= 0, y3 >= 0}, {y1, y2, y3}]
{430., {y1 -> 35., y2 -> 10., y3 -> 0.}}
Hence the maximum value of the dual objective function equals the minimum value we found for the primal; it is attained at
\[ y_1 = 35, \qquad y_2 = 10 , \quad y_3 = 0 \qquad\quad\mbox{with maximum profit:} \quad 430. \]
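A direct check of Theorem B, together with complementary slackness, in a short sketch of our own (the names xstar and ystar are introduced only here): the strong-duality identity y* • b = c • x* holds, and the inactive third primal constraint matches y₃ = 0.

(* Verify strong duality: ystar.b = c.xstar = 430. The third primal *)
(* constraint is inactive at xstar (3*3/2 + 8*5/2 = 49/2 > 17), *)
(* which matches y3 = 0 (complementary slackness). *)
A = {{2, 2}, {5, 3}, {3, 8}}; b = {8, 15, 17}; c = {120, 100};
xstar = {3/2, 5/2}; ystar = {35, 10, 0};
{ystar . b, c . xstar, A . xstar}
(* {430, 430, {8, 15, 49/2}} *)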
We also check the answer with the Gaussian elimination procedure:
AA = {{2, 5, 3, 120}, {2, 3, 8, 100}, {8, 15, 17, 430}}; MatrixForm[AA]
\( \displaystyle \begin{pmatrix} 2 & 5 & 3 & 120 \\ 2 & 3 & 8 & 100 \\ 8 & 15 & 17 & 430 \end{pmatrix} \)
bb = Last[Transpose[AA]];
MatrixForm[bb]
\( \displaystyle \begin{pmatrix} 120 \\ 100 \\ 430 \end{pmatrix} \)
y = {y1, y2, y3};
MatrixForm[y]
\( \displaystyle \begin{pmatrix} y1 \\ y2 \\ y3 \end{pmatrix} \)
RowReduce[AA]
{{1, 0, 0, 35}, {0, 1, 0, 10}, {0, 0, 1, 0}}
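Mathematica can also recover the dual solution directly from the primal problem via LinearOptimization's documented "DualMaximizer" solution property (a sketch; the exact grouping of the returned multipliers depends on how the constraints are entered):

(* The dual maximizer of the primal problem; the multipliers of the *)
(* three main constraints are 35, 10, and 0. *)
LinearOptimization[f1[x1, x2], {g1[x1, x2], g2[x1, x2], g3[x1, x2], x1 >= 0, x2 >= 0}, {x1, x2}, "DualMaximizer"]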
Although Mathematica can solve this problem directly, here is the geometric approach. First, we plot the objective-function plane together with the constraint planes, and then the feasible region (a polyhedron):

Figure 12.6: Optimum point (red), objective function, and constraint planes.
pt6 = Graphics3D[{Red, PointSize[.03], Point[{y1, y2, y3}]} /. soln3[[2]]];
cntrPlt3 = ContourPlot3D[{8*y1 + 15*y2 + 17*y3 == 430, 2*y1 + 5*y2 + 3*y3 == 120, 2*y1 + 3*y2 + 8*y3 == 100}, {y1, 0, 70}, {y2, 0, 50}, {y3, 0, 50}, AxesLabel -> {"y1", "y2", "y3"}, Ticks -> {{35}, {10}, Automatic}, FaceGrids -> {{{0, 0, -1}, {{35}, {10}}}}, FaceGridsStyle -> Directive[Orange, Thick, Dashed], PlotLegends -> {"Obj Fct", "constraint-1", "constraint-2\n(because of viewpoint\nshows as only a line)"}, ViewPoint -> {1.3244723188775946`, -3.1106692576486226`, 0.13967765049131722`}, ContourStyle -> {{Red, Opacity[0.3]}, {Blue, Opacity[0.3]}, {Green, Opacity[0.3]}}];
Labeled[Show[cntrPlt3, pt6], "Figure 12.6: Optimum point (red), Objective function\nand Constraint planes"]

Figure 12.7: Optimum point (red) at the edge of the feasible region.
regPlt2 = RegionPlot3D[{2 y1 + 5 y2 + 3 y3 <= 120 && 2 y1 + 3 y2 + 8 y3 <= 100 && y1 >= 0 && y2 >= 0 && y3 >= 0}, {y1, 0, 70}, {y2, 0, 25}, {y3, 0, 15}, AxesLabel -> {"y1", "y2", "y3"}, Ticks -> {{35}, {10}, Automatic}, FaceGrids -> {{{0, 0, -1}, {{35}, {10}}}}, FaceGridsStyle -> Directive[Orange, Thick, Dashed], ViewPoint -> {2.4853814757635058`, -1.8372343399481694`, 1.3774791831627913`}];
Labeled[Show[regPlt2, pt6], "Figure 12.7: Optimum point (red) at edge of feasible region"]

Reconciling the two


In our example, the primal represents the buyer, who wishes to minimize his cost. He needs two inputs, x1 and x2, each with constraints imposed by others (perhaps these are different suppliers with different minimum order sizes): buy two of each, and the total is not less than 8 (constraint g1); buy 5 of x1 and 3 of x2, and the total is not less than 15 (constraint g2); buy 3 of x1 and 8 of x2, and the total is not less than 17 (constraint g3). Our buyer needs at least one of each (neither x1 nor x2 may be zero). The resulting matrices represent the information about this system. That information, together with the minimizing techniques above, tells the buyer that portions 3/2 of x1 and 5/2 of x2 give the least overall cost. When x1 costs $120 and x2 costs $100, the optimal combination at those portions equals $430:
120*3/2 + 100*5/2
430
The dual takes the perspective of the seller of x1 and x2, whose goal is to maximize his revenue. His products are complements of one another and may be acquired in any combination of two; thus, his sales are any pair selected from the set {y1, y2, y3}, permitting one of the y's to be zero. Shipping is accomplished through others who, in turn, impose constraints (gg1 and gg2). Using the same techniques, this time applying maximization, we determine that his optimal production combination is 35 of y1, 10 of y2, and zero of y3. At prices $8 for y1 and $15 for y2, we obtain the same answer for the dual, $430, as we had for the primal:
8*35 + 15*10
430
End of Example 12

 
