Fermat’s interest in the unification of geometry and algebra arose because of his involvement in optics. His interest in the attainment of maxima and minima—thus his contribution to calculus—stemmed from the investigation of the passage of light rays through media of different indices of refraction, which resulted in Fermat’s principle in optics and the law of refraction. With the introduction of coordinates, Fermat was able to quantify the study of optics and set a trend to which all physicists of posterity would adhere. It is safe to say that without analytic geometry the progress of science, and in particular physics, would have been next to impossible.
Born into a family of tradespeople, Pierre de Fermat (1607--1665) was trained as a lawyer and made his living in this profession, becoming a councilor of the parliament of the city of Toulouse. Although mathematics was but a hobby for him and he could devote only spare time to it, he made great contributions to number theory and to calculus, and, together with Pascal, initiated work on probability theory.
The coordinate system introduced by Fermat was not a convenient one. For one thing, the coordinate axes were not at right angles to one another. Furthermore, the use of negative coordinates was not considered.
René Descartes (1596--1650) was a great philosopher, a founder of modern biology, and a superb physicist and mathematician. His interest in mathematics stemmed from his desire to understand nature. His father, a relatively wealthy lawyer, sent him to a Jesuit school at the age of eight where, due to his delicate health, he was allowed to spend the mornings in bed, during which time he worked. He followed this habit during his entire life. At twenty he graduated from the University of Poitiers as a lawyer and went to Paris, where he studied mathematics with a Jesuit priest. After one year he decided to join the army of Prince Maurice of Orange in 1617. During the next nine years he vacillated between various armies while studying mathematics.
René eventually returned to Paris, where he devoted his efforts to the study of optical instruments, motivated by the newly discovered power of the telescope. In 1628 he moved to Holland, to a quieter and freer intellectual environment. There he lived for the next twenty years and wrote his famous works. In 1649 Queen Christina of Sweden persuaded Descartes to go to Stockholm as her private tutor. However, the Queen had an uncompromising desire to draw curves and tangents at 5 a.m., causing Descartes to break his lifelong habit of getting up at 11 o’clock! After only a few months in the cold northern climate, walking to the palace for the 5 o’clock appointment with the queen, he died of pneumonia in 1650.
Throughout the seventeenth century, mathematicians used one axis with the y values drawn at an oblique or right angle onto that axis. Newton, however, in a book called "The Method of Fluxions and Infinite Series" written in 1671, and translated much later into English in 1736, describes a coordinate system in which points are located in reference to a fixed point and a fixed line through that point. This was the first introduction of essentially the polar coordinates we use today.
Coordinate Systems
Most likely, you are more comfortable working with the vector spaces ℝn or ℂn and their subspaces than with other types of vector spaces and subspaces. The objective of this section is to impose coordinate systems on an arbitrary vector space, even when it is not your familiar 𝔽n. In this section, you will learn that for finite dimensional vector spaces, no loss of generality results from restricting yourself to the space 𝔽n.
Ordered Bases
The vector [2, 3, −1] in ℝ³ can be expressed in terms of the standard basis vectors as 2e₁ + 3e₂ − e₃ or, in more familiar form, as 2i + 3j − k. The components of [2, 3, −1] are precisely the coefficients of these basis vectors. The vector [2, 3, −1] is different from the vector [3, 2, −1], just as the point (2, 3, −1) is different from the point (3, 2, −1). We regard the standard basis vectors as having a natural order: e₁ = [1, 0, 0] = i, e₂ = [0, 1, 0] = j, and e₃ = [0, 0, 1] = k. In a nonzero vector space V with basis β = {b₁, b₂, … , bn}, there is usually no natural order for the basis vectors. When order does not matter, mathematicians enclose a set in curly brackets. For example, the vectors {j, i, k} also form a basis in ℝ³ (called left-handed). If we want the vectors to be displayed in some order, we must specify their order. For example, there are 3! = 6 possible ordered presentations of the basis vectors i, j, k: (i, j, k), (i, k, j), (j, i, k), (j, k, i), (k, i, j), and (k, j, i).
Coordinatization
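The dependence of coordinates on the chosen order can be illustrated with a short Python sketch. The `coords` helper is our own construction, not a library routine; it works only for orderings of the standard basis, where the dot product with each basis vector extracts the corresponding component:

```python
# Coordinates of v relative to an ordered list of standard unit vectors.
# This works because the standard basis is orthonormal: the dot product
# with each basis vector picks out the matching component of v.
def coords(v, ordered_basis):
    return [sum(vi * bi for vi, bi in zip(v, b)) for b in ordered_basis]

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)  # i, j, k
v = (2, 3, -1)

print(coords(v, [e1, e2, e3]))  # [2, 3, -1]  -- order (i, j, k)
print(coords(v, [e2, e1, e3]))  # [3, 2, -1]  -- order (j, i, k)
```

Reordering the basis permutes the coordinates, which is why an ordered basis must be fixed before coordinates become meaningful.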
Let V be a finite dimensional vector space over a field 𝔽, and let β = [b₁, b₂, … , bn] be an ordered basis for V. We know from a previous section that every vector x in V can be expressed in the form
\[
{\bf x} = c_1 {\bf b}_1 + c_2 {\bf b}_2 + \cdots + c_n {\bf b}_n . \tag{1}
\]
Theorem 1: Let V be a vector space over a field 𝔽 and let β = {b₁, b₂, … , bn} be a basis of V. For every vector x in V, there exists a unique set of scalars c₁, c₂, … , cn such that Eq.(1) holds.
Note that, by definition, a basis of a vector space is a set of vectors that generates the space. In order to use a basis for a coordinate system, we need to order the basis and consider it as a list of vectors. Then the coordinates of a vector follow the prescribed order of the basis vectors. Otherwise, the summands in Eq.(1) could be reordered without altering the final answer---summation of vectors is commutative.
A coordinate vector [x]β can be written as a column vector (∈ 𝔽n×1), as a row vector (∈ 𝔽1×n), or as an n-tuple (∈ 𝔽n). When matrix multiplication is involved, the column form of coordinate vectors is preferable.
First, we verify that the polynomials from the set β are linearly independent. Let c₁, c₂, c₃ be scalars such that \[ c_1 \left( x - x^2 \right) + c_2 \left( 2 + x \right) + c_3 \left( 3 + x^2 \right) = 0 . \] Then \[ \left( 2\,c_2 + 3\, c_3 \right) + \left( c_1 + c_2 \right) x + \left( -c_1 + c_3 \right) x^2 = 0 . \] This implies that \[ \begin{split} 2\,c_2 + 3\, c_3 &= 0 , \\ c_1 + c_2 &= 0 , \\ -c_1 + c_3 &= 0, \end{split} \] the solution to which is c₁ = c₂ = c₃ = 0 because the matrix of the system \[ \begin{bmatrix} 0 & 2 & 3 \\ 1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix} \] is invertible (its determinant is 1). Hence, the polynomials in the set β are linearly independent, and the set is a basis for ℝ≤2[x].
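The determinant check can be reproduced without a computer algebra system. Below is a small Python sketch; the `det3` helper is our own (plain integer arithmetic, cofactor expansion along the first row), not a library function:

```python
# 3x3 determinant by cofactor expansion along the first row.
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Coefficient matrix read off from matching powers of x above.
A = [[0, 2, 3],
     [1, 1, 0],
     [-1, 0, 1]]

print(det3(A))  # 1 -- nonzero, so only the trivial solution c1 = c2 = c3 = 0
```

A nonzero determinant confirms that the homogeneous system has only the trivial solution, i.e., the three polynomials are linearly independent.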
Since the set β is a basis, we can expand any polynomial of degree at most 2 into a linear combination of polynomials from β. For example, \[ \left( 1 + x \right)^2 = c_1 \left( x - x^2 \right) + c_2 \left( 2 + x \right) + c_3 \left( 3 + x^2 \right) . \] This relation is valid when \[ \begin{split} 1 &= 2\,c_2 + 3\, c_3 , \\ 2 &= c_1 + c_2 , \\ 1 &= -c_1 + c_3 . \end{split} \] Mathematica easily solves this system of equations
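For readers without Mathematica, the same system can be solved by hand via Cramer's rule. Here is a hedged Python sketch with exact rational arithmetic; the `solve3` and `det3` helpers are our own, not library functions:

```python
from fractions import Fraction

# Coefficient matrix (columns correspond to c1, c2, c3) and right-hand side,
# read off from (1 + x)^2 = 1 + 2x + x^2 by matching powers of x.
A = [[0, 2, 3],
     [1, 1, 0],
     [-1, 0, 1]]
k = [1, 2, 1]

def solve3(A, k):
    """Solve a 3x3 linear system by Cramer's rule with exact arithmetic."""
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det3(A)
    sols = []
    for j in range(3):
        Mj = [row[:] for row in A]   # replace column j by the right-hand side
        for i in range(3):
            Mj[i][j] = k[i]
        sols.append(Fraction(det3(Mj), d))
    return sols

c1, c2, c3 = solve3(A, k)
print(c1, c2, c3)  # -6 8 -5
```

Hence (1 + x)² = −6(x − x²) + 8(2 + x) − 5(3 + x²), which can be confirmed by expanding the right-hand side.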
Let us find the coordinate vector of v = (8, 1, −7) with respect to this basis, written as an ordered list of the given three vectors: β = [b₁, b₂, b₃]. We need to find scalars c₁, c₂, c₃ such that \[ {\bf v} = c_1 {\bf b}_1 + c_2 {\bf b}_2 + c_3 {\bf b}_3 . \] Writing the vectors in column form, we obtain \[ \begin{bmatrix} \phantom{-}8 \\ \phantom{-}1 \\ -7 \end{bmatrix} = c_1 \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + c_2 \begin{bmatrix} \phantom{-}1 \\ -1 \\ \phantom{-}2 \end{bmatrix} + c_3 \begin{bmatrix} \phantom{-}3 \\ \phantom{-}2 \\ -1 \end{bmatrix} , \] which we can rewrite in matrix form \[ \begin{bmatrix} \phantom{-}8 \\ \phantom{-}1 \\ -7 \end{bmatrix} = \begin{bmatrix} 1&\phantom{-}1&\phantom{-}3 \\ 2&-1&\phantom{-}2 \\ 3&\phantom{-}2& -1 \end{bmatrix} \,\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} , \qquad \mbox{with} \quad {\bf A} = \left[ {\bf b}_1 \ {\bf b}_2 \ {\bf b}_3 \right] = \begin{bmatrix} 1&\phantom{-}1&\phantom{-}3 \\ 2&-1&\phantom{-}2 \\ 3&\phantom{-}2& -1 \end{bmatrix} . \] This allows us to rewrite the linear system in compact form: \[ {\bf A}\, {\bf c} = {\bf k}, \qquad \mbox{where} \quad {\bf c} = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} , \quad {\bf k} = \begin{bmatrix} \phantom{-}8 \\ \phantom{-}1 \\ -7 \end{bmatrix} . \] With the aid of Mathematica, we find the inverse matrix and apply it to the vector k:
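The same inverse-and-apply computation can be sketched in plain Python. The `inverse3` helper below is our own (the classical adjugate formula for a 3×3 matrix, exact arithmetic via fractions), not a library routine:

```python
from fractions import Fraction

A = [[1, 1, 3],
     [2, -1, 2],
     [3, 2, -1]]
k = [8, 1, -7]

def inverse3(A):
    """Invert a 3x3 matrix via the adjugate, with exact rational entries."""
    a, b, c = A[0]
    d, e, f = A[1]
    g, h, i = A[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    # Adjugate (transpose of the cofactor matrix), written out explicitly.
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[Fraction(adj[r][s], det) for s in range(3)] for r in range(3)]

Ainv = inverse3(A)
c = [sum(Ainv[r][s] * k[s] for s in range(3)) for r in range(3)]
print([int(x) for x in c])  # [-2, 1, 3]
```

So [v]β = (−2, 1, 3), which can be checked by expanding −2b₁ + b₂ + 3b₃.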
Coordinate vectors are strictly tied to a basis imposed on a vector space. Any finite dimensional vector space is isomorphic to 𝔽n, and any chosen basis establishes such a bijection with 𝔽n, where 𝔽 is the field of the given vector space. In some cases, we need to verify that vectors { b1, b2, … , bn } form a basis in ℝn (we mostly use the real numbers as a field). In other words, we need to verify that these vectors are linearly independent, which is equivalent to the condition that the system A x = 0 has only the trivial solution, where A = [b1, b2, … , bn]. This in turn is equivalent to the matrix A having full column rank n, that is, being invertible.
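This rank test is easy to carry out by row reduction. Below is a hedged Python sketch; the `rank` helper is our own Gaussian elimination over the rationals, not a library routine, and the columns of A are the vectors b₁, b₂, b₃ from the example above:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix via Gaussian elimination with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        # Find a pivot at or below row r in this column.
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Eliminate the column entry in every other row.
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                factor = M[i][col] / M[r][col]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 1, 3],
     [2, -1, 2],
     [3, 2, -1]]

print(rank(A))  # 3 -- full column rank, so the columns form a basis of R^3
```

If the rank were less than n, the columns would be linearly dependent and could not serve as a basis.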
If the vectors \( \left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n \right\} \) form a basis for a vector space V, then every vector v in V can be uniquely expressed in the form \( {\bf v} = \alpha_1 {\bf u}_1 + \alpha_2 {\bf u}_2 + \cdots + \alpha_n {\bf u}_n \) for some scalars α₁, α₂, … , αn.
Orthogonal Coordinate Systems
Although orthogonality is a topic of Part 5 of this tutorial, we discuss orthogonal systems here because of their importance. Most likely, you are familiar with the dot product of two vectors, denoted by a dot:
\[
{\bf u} \cdot {\bf v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n .
\]
- Find the coordinates of v with respect to the following bases:
- v = (2, −1), basis (2, 3), (3, −2) of ℝ².
- v = 1 − x², basis 1 + x, 2 − x, x − 2x² of ℝ≤2[x].
- v = (2, −1), basis (1 + j, −1), (1, 1 −j) of ℂ².
- \( \displaystyle {\bf v} = \begin{bmatrix} 1&2 \\ 2&4 \end{bmatrix} , \) basis \( \displaystyle \begin{bmatrix} 0&2 \\ 2&3 \end{bmatrix} , \quad \begin{bmatrix} 2&0 \\ 0&1 \end{bmatrix} , \quad \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix} , \) of space of real symmetric 2×2 matrices.
- Find the coordinates of v with respect to the following bases:
- v = (1, 2, 3), basis (−1, 0, 1), (2, 1, 0), (3, 2, 1) of ℝ³.
- \( \displaystyle {\bf v} = \begin{bmatrix} 1&2 \\ -2&3 \end{bmatrix} , \) basis \( \displaystyle \begin{bmatrix} 3&4 \\ -4&3 \end{bmatrix} , \quad \begin{bmatrix} -1&1 \\ -1&2 \end{bmatrix} , \quad \begin{bmatrix} 3&3 \\-3&3 \end{bmatrix} , \) of the space of real 2×2 matrices of the form \( \begin{bmatrix} a&b \\ -b&c \end{bmatrix} \).
- v = (1 + j, 1, 1 − j), basis (1, j, −1), (j, 1, −2), (0, 1, 2j) of ℂ³.
- v = (1 + x)², basis 1 − x², 2 + x, 3x + x² of ℝ≤2[x].
- Let V = span(v₁, v₂), where v₁ = (1 − x)², v₂ = 2x + x². Find the coordinates of u = 2 − 10x − x² in V.
- Let Ei,j be a matrix with a one in the (i, j)th entry and zeros elsewhere. Which 2 × 2 matrices Ei,j can be added to the set below to form a basis of ℝ2×2: \[ {\bf A} = \begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix} , \quad {\bf B} = \begin{bmatrix} 0&1 \\ 1&1 \end{bmatrix} , \quad {\bf C} = \begin{bmatrix} 1&1 \\ 0&0 \end{bmatrix} . \]
- Let Ei,j be a matrix with a one in the (i, j)th entry and zeros elsewhere. Which 2 × 2 matrices Ei,j can be added to the set below to form a basis of ℝ2×2: \[ {\bf A} = \begin{bmatrix} 1&1 \\ 1&0 \end{bmatrix} , \quad {\bf B} = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} , \quad {\bf C} = \begin{bmatrix} 1&1 \\ 1&1 \end{bmatrix} . \]