"Matrix" is the Latin word for womb. The origin of mathematical matrices has a long history. The term "matrix" in combinatorics was introduced in 1850 by James Joseph Sylvester. We start with aparticular of matrices---vectors.
A vector is a quantity that has magnitude and direction and that is commonly represented by a directed line segment whose length represents the magnitude and whose orientation in space represents the direction. In mathematics, it is always assumed that vectors can be added or subtracted, and multiplied by a scalar (real or complex numbers). It is also assumed that there exists a unique zero vector (of zero magnitude and no direction), which can be added to or subtracted from any vector without changing the outcome. The zero vector is not the same as the number zero. Wind, for example, has both a speed and a direction and, hence, is conveniently expressed as a vector. The same can be said of moving objects, momentum, forces, electromagnetic fields, and weight. (Weight is the force produced by the acceleration of gravity acting on a mass.)
The first thing we need to know is how to define a vector so it will be clear to everyone. One of the common ways to do this is to introduce a system of coordinates, either Cartesian or any other, which includes unit vectors in each direction, usually referred to as an ordered basis. In engineering, traditionally these unit vectors are denoted by i (abscissa), j (ordinate), and k (applicate), called the basis. Once rectangular coordinates (or a basis) are set up, any vector can be expanded through these unit vectors. In the three-dimensional case, every vector can be expanded as \( {\bf v} = v_1 {\bf i} + v_2 {\bf j} + v_3 {\bf k} ,\) where \( v_1, v_2 , v_3 \) are called the coordinates of the vector v. Coordinates are always specified relative to an ordered basis. Therefore, every vector can be identified with an ordered n-tuple of real numbers, its coordinates. Coordinate vectors of finite-dimensional vector spaces can be represented by either a column vector (which is usually the case) or a row vector. Generally speaking, Mathematica does not distinguish column vectors from row vectors unless the user specifies it.
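In R, a coordinate vector is simply an ordered collection of numbers built with c(). As a brief sketch (the vectors v and w below are illustrative examples, not taken from the text), addition, scalar multiplication, and magnitude all follow the coordinate rules described above:

```r
# Coordinates of v = 2i + 3j + 6k and w = 1i + 0j - 1k relative to the standard basis
v <- c(2, 3, 6)
w <- c(1, 0, -1)

v + w           # vector addition, coordinate by coordinate: 3 3 5
3 * v           # multiplication by the scalar 3: 6 9 18
sqrt(sum(v^2))  # magnitude (length) of v: sqrt(4 + 9 + 36) = 7
```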
To define a matrix in R, we save a matrix as a variable, using the matrix() command. For example:
a<-matrix(c(1,2,3,4,5,6,7,8,9),nrow=3,ncol=3,byrow=TRUE)
a
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 4 5 6
[3,] 7 8 9
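The byrow argument controls whether the entries fill the matrix row by row or column by column (the default). A small sketch, using fresh illustrative names so as not to disturb the matrices defined in this section:

```r
# byrow = TRUE fills entries row by row; the default (byrow = FALSE) fills column by column
m_row <- matrix(1:9, nrow = 3, ncol = 3, byrow = TRUE)
m_col <- matrix(1:9, nrow = 3, ncol = 3)

m_row[1, ]  # first row: 1 2 3
m_col[1, ]  # first row: 1 4 7
```

Note that filling by column produces exactly the transpose of filling by row.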
Example 2.1.1:
b<-matrix(c(2,2,2,4,4,4,6,6,6),nrow=3,ncol=3,byrow=TRUE)
b
[,1] [,2] [,3]
[1,] 2 2 2
[2,] 4 4 4
[3,] 6 6 6
Append a vector of length n to a matrix with n rows (as a new column) or n columns (as a new row):
d<-c(9,9,9)
cbind(a,d)
             d
[1,] 1 2 3 9
[2,] 4 5 6 9
[3,] 7 8 9 9
rbind(a,d)
  [,1] [,2] [,3]
     1    2    3
     4    5    6
     7    8    9
d    9    9    9
Return vector of row or column sum/mean:
rowMeans(a)
[1] 2 5 8
rowSums(a)
[1] 6 15 24
colMeans(a)
[1] 4 5 6
colSums(a)
[1] 12 15 18
Element-Wise Multiplication:
a * b
[,1] [,2] [,3]
[1,] 2 4 6
[2,] 16 20 24
[3,] 42 48 54
Matrix Multiplication:
a %*% b
[,1] [,2] [,3]
[1,] 28 28 28
[2,] 64 64 64
[3,] 100 100 100
Crossproduct (despite the name, crossprod(a,b) computes \( {\bf A}^T {\bf B} ,\) not the three-dimensional vector cross product):
crossprod(a,b)
[,1] [,2] [,3]
[1,] 60 60 60
[2,] 72 72 72
[3,] 84 84 84
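We can verify directly that crossprod gives the same result as transposing first and then multiplying:

```r
a <- matrix(c(1,2,3,4,5,6,7,8,9), nrow = 3, ncol = 3, byrow = TRUE)
b <- matrix(c(2,2,2,4,4,4,6,6,6), nrow = 3, ncol = 3, byrow = TRUE)

# crossprod(a, b) is the same matrix as t(a) %*% b
all(crossprod(a, b) == t(a) %*% b)  # TRUE
```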
There is a special operation that turns columns into rows and vice versa: it is called transposition. The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley (1821--1895). The transpose of an \( m \times n \) matrix A is the \( n \times m \) matrix \( {\bf A}^T \) (also denoted as \( {\bf A}' \) or \( {\bf A}^t \) ) created by any one of the following equivalent actions:
reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain \( {\bf A}^T \)
write the rows of A as the columns of \( {\bf A}^T \)
write the columns of A as the rows of \( {\bf A}^T \)
Formally, the i th row, j th column element of \( {\bf A}^T \) is the j th row, i th column element of A: \( \left( {\bf A}^T \right)_{ij} = {\bf A}_{ji} .\)
A square matrix A is called symmetric if \( {\bf A} = {\bf A}^T . \) Let A and B be \( m \times n \) matrices and c be a scalar. Then we have the following properties for transpose matrices:
1. \( \left( {\bf A}^T \right)^T = {\bf A} \)
2. \( \left( {\bf A} + {\bf B} \right)^T = {\bf A}^T + {\bf B}^T \)
3. \( \left( {\bf A} \, {\bf B} \right)^T = {\bf B}^T \, {\bf A}^T \)
4. \( \left( c \, {\bf B} \right)^T = c\,{\bf B}^T \)
5. \( {\bf A}\, {\bf A}^T \) is a symmetric matrix.
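All five properties can be checked numerically in R. The matrices below are illustrative examples chosen for the check (B here is arbitrary, not from the text); since the entries are integers, the comparisons are exact:

```r
A  <- matrix(c(1,2,-3,-2,3,4,3,4,-5), nrow = 3, ncol = 3, byrow = TRUE)
B  <- matrix(c(2,0,1,1,3,-1,0,2,4),  nrow = 3, ncol = 3, byrow = TRUE)
c0 <- 5  # an arbitrary scalar

all(t(t(A)) == A)                 # property 1: transposing twice gives A back
all(t(A + B) == t(A) + t(B))      # property 2: transpose of a sum
all(t(A %*% B) == t(B) %*% t(A))  # property 3: transpose of a product reverses the order
all(t(c0 * B) == c0 * t(B))       # property 4: scalars pass through the transpose
S <- A %*% t(A)
all(S == t(S))                    # property 5: A %*% t(A) is symmetric
```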
Transpose:
t(a)
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
Create diagonal matrix with elements from vector:
First, we set up a vector to use:
x<-c(1,2,3,4)
diag(x)
[,1] [,2] [,3] [,4]
[1,] 1 0 0 0
[2,] 0 2 0 0
[3,] 0 0 3 0
[4,] 0 0 0 4
Return vector containing elements of the principal diagonal:
diag(a)
[1] 1 5 9
a<-matrix(c(1,3,2,6,-3,5,2,7,8,4,2,6),nrow=3,ncol=4,byrow=T)
# Define a matrix with 12 elements, 3 rows, and 4 columns, filled by row.
a
# Show matrix a
[,1] [,2] [,3] [,4]
[1,] 1 3 2 6
[2,] -3 5 2 7
[3,] 8 4 2 6
diag(a[-1,])
[1] -3 4
#Returns the diagonal just below the main diagonal:
#simply takes the diagonal of matrix a with row 1 removed
diag(a[,-1])
[1] 3 2 6
#Returns the diagonal just above the main diagonal:
#simply takes the diagonal of matrix a with column 1 removed
We calculate the determinant of a 10x10 matrix with random entries:
a<-matrix(data=sample(seq(-1, 1, by = 0.001),100,replace=T),nrow=10,ncol=10,byrow=F)
#create 10x10 matrix with 100 random entries between -1 and 1
det(a)
#take determinant of matrix a
Another option is to create a vector of length 1000 filled with NA, then run a for loop for 1000 iterations. Inside the loop, we create a 10x10 random matrix as before, take its determinant, and save it as the ith entry of the vector. Finally, outside the loop, we take the mean of the vector of determinants to see what the determinant is on average.
d<-rep(NA, 1000)
#Create NA vector of length 1000
for(i in 1:1000) {
#Set for loop to run 1000 times
a<-matrix(data=sample(seq(-1, 1, by = 0.001),100,replace=T),nrow=10,ncol=10,byrow=F)
#create 10x10 matrix with 100 random entries between -1 and 1
d[i]<-det(a)
#take determinant of matrix a
}
mean(d)
#calculate average of vector d, the determinants
A<-matrix(c(1,2,-3,-2,3,4,3,4,-5),nrow=3,ncol=3,byrow=TRUE)
A
[,1] [,2] [,3]
[1,] 1 2 -3
[2,] -2 3 4
[3,] 3 4 -5
y<-eigen(A)
$values
[1] 4.551974 -4.336008 -1.215966
$vectors
[,1] [,2] [,3]
[1,] 0.1480738 0.5481056 -0.8342615
[2,] 0.8947936 -0.2802633 0.1157121
[3,] 0.4212109 0.7880563 -0.5390903
y$values
[1] 4.551974 -4.336008 -1.215966
y$vectors
[,1] [,2] [,3]
[1,] 0.1480738 0.5481056 -0.8342615
[2,] 0.8947936 -0.2802633 0.1157121
[3,] 0.4212109 0.7880563 -0.5390903
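Each eigenpair returned by eigen() can be checked against the defining equation \( {\bf A}\,{\bf v} = \lambda\,{\bf v} :\)

```r
A <- matrix(c(1,2,-3,-2,3,4,3,4,-5), nrow = 3, ncol = 3, byrow = TRUE)
y <- eigen(A)

# Check A v = lambda v for the first eigenpair (up to floating-point error)
v1      <- y$vectors[, 1]
lambda1 <- y$values[1]
max(abs(A %*% v1 - lambda1 * v1))  # essentially zero
```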
The vector equation \( {\bf A}\,{\bf x} = {\bf b} ,\) where A is a given matrix and b is a specified column vector, may have a unique solution x, may have infinitely many solutions, or may have no solution at all.
We start with the simple case when A is a square matrix. If A is not singular (that is, its determinant is not zero), we can find its inverse and solve the vector equation.
A<-matrix(c(1,2,-3,-2,3,4,3,4,-5),nrow=3,ncol=3,byrow=TRUE)
[,1] [,2] [,3]
[1,] 1 2 -3
[2,] -2 3 4
[3,] 3 4 -5
b<-c(1,1,1)
solve(A,b)
[1] -0.6666667 0.3333333 -0.3333333
solve(A)
[,1] [,2] [,3]
[1,] -1.29166667 -0.08333333 0.70833333
[2,] 0.08333333 0.16666667 0.08333333
[3,] -0.70833333 0.08333333 0.29166667
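As a final sanity check, solve(A) really does return the inverse: multiplying it by A recovers the identity matrix, and solve(A, b) agrees with applying the inverse to b.

```r
A <- matrix(c(1,2,-3,-2,3,4,3,4,-5), nrow = 3, ncol = 3, byrow = TRUE)
b <- c(1, 1, 1)

Ainv <- solve(A)
round(Ainv %*% A, 10)  # the 3x3 identity matrix, up to rounding

# solving directly and multiplying by the inverse give the same x
max(abs(solve(A, b) - Ainv %*% b))  # essentially zero
```

Solving with solve(A, b) directly is preferred in practice: it avoids forming the inverse explicitly, which is slower and less numerically stable.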