Recall that we denote by 𝔽^{m×n} the vector space of all m-by-n matrices with entries from 𝔽, which is either ℤ, the set of integers, ℚ, the set of rational numbers, ℝ, the set of real numbers, or ℂ, the set of complex numbers.
Manipulations with vectors
Usually, an n-tuple (x₁, x₂, … , xn), written in parentheses, and a row vector [x₁, x₂, … , xn], written in brackets, look like the same object to human eyes. One of the pedagogical virtues of any software package is its requirement to pay close attention to the types of objects used in specific contexts. In particular, Python treats these two versions of a vector differently, because a parenthesized n-tuple is an immutable tuple object while a bracketed row is a mutable list; numpy in turn converts either one into its own array type.
If we work with vectors only as one-dimensional arrays, numpy represents them well. The operations of vector addition and multiplication by scalars work as expected. When we want to use matrix multiplication, however, it is better to define the dimensions of the vectors explicitly. Below we first show how vectors work in numpy; then we introduce matrices and matrix operations.
We proceed in three steps: first we define vectors as one-dimensional arrays and print their dimensions; then we define row and column vectors explicitly as matrices; finally we multiply matrices and vectors together.
import numpy as np
import pprint
x = np.array([1, 2])    # a vector as a one-dimensional array
print(f'The dimension of x: {x.shape}')
y = np.array([10, 20])
print(f'The dimension of y: {y.shape}')
w = x + y               # vector addition works componentwise
pprint.pprint(w)
alpha = 2
v = alpha * x           # multiplication by a scalar
pprint.pprint(v)
When we work with matrices, it is better to define the dimensions of row vectors and column vectors explicitly; otherwise we may get unexpected results. Matrix multiplication uses the @ operator.
import numpy as np
import pprint
row = np.array([[1, 2]])      # a row vector is a 1-by-2 matrix
print('Row vector:')
pprint.pprint(row)
print(f'The dimension of the row vector: {row.shape}')
col = np.array([[1], [2]])    # a column vector is a 2-by-1 matrix
print('Column vector:')
pprint.pprint(col)
print(f'The dimension of the column vector: {col.shape}')
A = np.array([[2, 2],
              [3, 3]])
print('Matrix A:')
pprint.pprint(A)
print(f'The dimension of matrix A: {A.shape}')
z = A @ col                   # matrix times column vector
print('Result z of matrix A times the column vector:')
pprint.pprint(z)
print(f'The dimension of vector z: {z.shape}')
Multiplying a row vector by a column vector yields the inner product (a 1-by-1 matrix), while multiplying a column vector by a row vector yields the outer product (a square matrix).
import numpy as np
import pprint
row = np.array([[1, 2]])
print('Row vector:')
pprint.pprint(row)
print(f'The dimension of the row vector: {row.shape}')
col = np.array([[1], [2]])
print('Column vector:')
pprint.pprint(col)
print(f'The dimension of the column vector: {col.shape}')
w = row @ col                 # inner product: a 1-by-1 matrix
pprint.pprint(w)
print(f'The dimension of row times column: {w.shape}')
v = col @ row                 # outer product: a 2-by-2 matrix
pprint.pprint(v)
print(f'The dimension of column times row: {v.shape}')
Transpose and conjugate transpose in the Python package numpy
import numpy as np
import pprint
A = np.array([[1, 1], [2, 2]])
pprint.pprint(A)
pprint.pprint(A.T)                 # transpose
B = np.array([[1 + 1j, 1 + 1j], [2 + 2j, 2 + 2j]])
pprint.pprint(B)
pprint.pprint(np.conjugate(B).T)   # conjugate transpose (adjoint)
To extract a submatrix of a partitioned matrix, we use row and column indices. Note that numpy numbers rows and columns from zero, not one. Below we show how to extract a row, a column, and a block from a matrix defined in numpy. The ending index after : is not included, and if the index before or after : is omitted, the slice runs from the start or to the end, respectively.
import numpy as np
import pprint
A = np.arange(1, 16).reshape((3, 5))
pprint.pprint(A)
print('The second row:')
pprint.pprint(A[[1], :])
print('Part of a row (first row, columns 2 through 4):')
pprint.pprint(A[[0], 1:4])
print('The second column:')
pprint.pprint(A[:, [1]])
print('Part of a column (rows 1 and 2 of the third column):')
pprint.pprint(A[0:2, [2]])
print('Block from row 1 to row 2, and column 3 to column 5:')
pprint.pprint(A[0:2, 2:5])
Individual components of row and column vectors are extracted with a pair of indices:
import numpy as np
x = np.array([[1], [2]])            # one-column matrix
y = np.array([[1, 2]])              # one-row matrix
z = np.array([1, 2]).reshape(2, 1)  # also a one-column matrix, built with reshape
print(x[0, 0])   # extracting the first component of x
print(x[1, 0])   # extracting the second component of x
print(y[0, 0])   # extracting the first component of y
print(y[0, 1])   # extracting the second component of y
print(z[1, 0])   # extracting the second component of z
Take x = (1, 2, 3, 4, 5) → x1 = (1, 2, 3) and x2 = (4, 5), then unite the vectors x1 and x2 back into one vector x. Repeat with the row vector y = (10, 20, 30, 40, 50), breaking it into y1 and y2, and with the column vector z = [100, 200, 300, 400, 500]. A possible solution is sketched below. ■
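One possible solution, as a minimal numpy sketch (the splitting indices and variable names are our own choices):
import numpy as np

# Split x into x1 (first three entries) and x2 (the rest),
# then unite them again with np.concatenate.
x = np.array([1, 2, 3, 4, 5])
x1, x2 = x[:3], x[3:]
print(np.concatenate([x1, x2]))

# The same for the row vector y, stored as a 1-by-5 matrix; rejoin along columns.
y = np.array([[10, 20, 30, 40, 50]])
y1, y2 = y[:, :3], y[:, 3:]
print(np.hstack([y1, y2]))

# The same for the column vector z, stored as a 5-by-1 matrix; rejoin along rows.
z = np.array([[100], [200], [300], [400], [500]])
z1, z2 = z[:3, :], z[3:, :]
print(np.vstack([z1, z2]))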
Partitioned Matrices
A block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices.
Partitioned matrices appear in most modern applications of linear algebra because the notation highlights essential structures of matrices. Partitioning a matrix generalizes our earlier view of a matrix as a list of columns or rows. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines that break it up, or partition it, into a collection of smaller matrices. Especially when the dimensions of a matrix are large, it may be beneficial to view the matrix as combined from smaller submatrices. If we simultaneously partition adjacent rows and adjacent columns of a matrix into groups, we partition the matrix into submatrices, or blocks, resulting in a representation of the matrix as a partitioned or block matrix.
If matrices A and B are the same size and are partitioned in exactly the same way, then it is natural to make the same partition of the ordinary matrix sum A + B and to sum corresponding blocks. Similarly, one can subtract partitioned matrices. Multiplication of a partitioned matrix by a scalar is also computed block by block.
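As an illustration, here is a minimal numpy sketch (the block sizes and entries are our own choices) that assembles matrices from blocks with np.block and checks that blockwise addition agrees with ordinary addition:
import numpy as np

# Partition two 4-by-4 matrices into 2-by-2 blocks in the same way.
A11, A12 = np.ones((2, 2)), np.zeros((2, 2))
A21, A22 = np.zeros((2, 2)), 2 * np.ones((2, 2))
B11, B12 = np.eye(2), np.eye(2)
B21, B22 = 3 * np.eye(2), np.eye(2)

A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])

# Summing corresponding blocks gives the same result as A + B.
blockwise = np.block([[A11 + B11, A12 + B12], [A21 + B21, A22 + B22]])
print(np.array_equal(A + B, blockwise))   # True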
It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however: it requires "conformable partitions" of the two matrices A and B, so that all submatrix products that will be used are defined by the usual row-column rule.
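A quick numerical check of the block product rule, sketched in numpy with an arbitrary 2+2 partition of random integer matrices:
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 4))
B = rng.integers(-5, 5, size=(4, 4))

# Conformable partition: split rows and columns of both factors after index 2.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# The block product follows the usual row-column rule applied to blocks.
product = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
print(np.array_equal(product, A @ B))   # True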
For example, multiplying a 4-by-5 matrix by a conformable column vector in Mathematica:
M = {{1, -2, 3, -1, 4}, {3, 1, -2, 4, -2}, {5, 4, -3, 1, 1}, {2, -3, 4, 2, -3}}
B = {{6}, {-3}, {1}, {4}, {-1}}
M.B

For a block matrix \(M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\) with invertible block A, the Schur complement of A in M is \(M/A = D - C\,A^{-1}B\); similarly, when D is invertible, the Schur complement of D in M is \(M/D = A - B\,D^{-1}C\). The Schur complement is named after Issai Schur (1875--1941), who introduced it in 1917 (I. Schur, Potenzreihen im Innern des Einheitskreises, J. Reine Angew. Math., 147, 1917, 205--232). The American mathematician Emilie Virginia Haynsworth (1916--1985) was the first, in a 1968 paper, to call it the Schur complement. The Schur complement is a key tool in the fields of numerical analysis, statistics, and matrix analysis.
Issai Schur was a Russian-born mathematician (born in Mogilev, now in Belarus) who worked in Germany for most of his life. He spoke German without a trace of an accent, and nobody even guessed that it was not his first language. He obtained his doctorate in 1901, became a lecturer in 1903 and, after a stay at the University of Bonn, a professor in 1919. As a student of Ferdinand Georg Frobenius (1849--1917), he worked on group representations. He is perhaps best known today for his result on the existence of the Schur decomposition, which we will discuss later.
In 1922 Schur was elected to the Prussian Academy, proposed by Planck, the secretary of the Academy. From 1933 onward, events in Germany made Schur's life increasingly difficult. Schur considered himself a German, not a Jew, but the Nazis had a different opinion. In 1935 Schur was dismissed from his chair in Berlin, but he continued to work there, suffering great hardship and difficulties. Schur left Germany for Palestine in 1939, broken in mind and body, having endured the final humiliation of being forced to find a sponsor to pay the 'Reich flight tax' that allowed him to leave Germany. Without sufficient funds to live in Palestine, he was forced to sell his beloved academic books to the Institute for Advanced Study in Princeton.
import numpy as np

# Partition M = [[A, B], [C, D]] with A of size 2x2 and D of size 3x3.
M = np.array([[3, -1, 2, -5, 1],
              [2, 1, -1, 3, 2],
              [1, 3, -2, 1, 3],
              [4, 1, -1, 1, 2],
              [3, -2, 1, 2, 4]])
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

Ainv = np.linalg.inv(A)
MA = D - C @ Ainv @ B          # Schur complement of A in M
MAinv = np.linalg.inv(MA)

print("Schur complement M/A = D - C A^{-1} B:")
print(MA)
print("Top-left 2x2 block of M^{-1}: A^{-1} + A^{-1} B (M/A)^{-1} C A^{-1}")
print(Ainv + Ainv @ B @ MAinv @ C @ Ainv)
print("Top-right 2x3 block of M^{-1}: -A^{-1} B (M/A)^{-1}")
print(-Ainv @ B @ MAinv)
print("Bottom-left 3x2 block of M^{-1}: -(M/A)^{-1} C A^{-1}")
print(-MAinv @ C @ Ainv)
print("Bottom-right 3x3 block of M^{-1}: (M/A)^{-1}")
print(MAinv)
print("Full inverse of M for comparison:")
print(np.linalg.inv(M))
A similar computation in Mathematica:
M = {{1, -2, 3, -1, 4}, {3, 1, -2, 4, -2}, {5, 4, -3, 1, 1}, {2, -3, 4, 2, -3}, {0, 1, 3, -1, 2}}
DD = {{-3, 1, 1}, {4, 2, -3}, {3, -1, 2}}
A = {{1, -2}, {3, 1}}
B = {{3, -1, 4}, {-2, 4, 2}}
CC = {{5, 4}, {2, -3}, {0, 1}}
MA = (DD - CC.Inverse[A].B)*7
MD = (A - B.Inverse[DD].CC)*3
Block Matrix Determinant
For a block matrix \(M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\) with square blocks A and D, we have
\[
\det M = \det A \cdot \det\!\left(D - C\,A^{-1}B\right) = \det D \cdot \det\!\left(A - B\,D^{-1}C\right),
\]
provided that A (respectively, D) is invertible.
A1 = {{3, 1}, {5, 2}}
A2 = {{1, 2, 3}, {2, 3, 4}}
A3 = {{-1, 1}, {2, 3}, {3, 4}}
A4 = {{2, 1, -1}, {1, 1, 1}, {2, 3, 4}}
M = ArrayFlatten[{{A1, A2}, {A3, A4}}]
MA = A4 - A3.Inverse[A1].A2
Det[MA]
MD = A1 - A2.Inverse[A4].A3
Det[MD]
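The identity can also be verified numerically; here is a minimal numpy sketch using the same blocks A1, A2, A3, A4 as above:
import numpy as np

A1 = np.array([[3, 1], [5, 2]])
A2 = np.array([[1, 2, 3], [2, 3, 4]])
A3 = np.array([[-1, 1], [2, 3], [3, 4]])
A4 = np.array([[2, 1, -1], [1, 1, 1], [2, 3, 4]])

M = np.block([[A1, A2], [A3, A4]])

# det M = det(A1) * det(A4 - A3 A1^{-1} A2); the two printed numbers agree.
MA = A4 - A3 @ np.linalg.inv(A1) @ A2
print(np.linalg.det(M))
print(np.linalg.det(A1) * np.linalg.det(MA))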
Block Matrix Inversion
To find the inverse of a block matrix \(M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\), we use the Schur complements \(M/A = D - C\,A^{-1}B\) and \(M/D = A - B\,D^{-1}C\):
\[
M^{-1} = \begin{bmatrix} (M/D)^{-1} & -(M/D)^{-1}\,B\,D^{-1} \\ -D^{-1}\,C\,(M/D)^{-1} & D^{-1} + D^{-1}\,C\,(M/D)^{-1}\,B\,D^{-1} \end{bmatrix}
= \begin{bmatrix} A^{-1} + A^{-1}B\,(M/A)^{-1}\,C\,A^{-1} & -A^{-1}B\,(M/A)^{-1} \\ -(M/A)^{-1}\,C\,A^{-1} & (M/A)^{-1} \end{bmatrix}.
\]
A = {{3, 5}, {1, 2}}
B = {{2, -1, 3}, {-1, 1, 2}}
CC = {{-3, 2}, {1, -3}, {2, 1}}
DD = {{2, 4, -6}, {-3, -11, 18}, {-2, -8, 13}}
MA = DD - CC.Inverse[A].B
MD = A - B.Inverse[DD].CC
Inverse[MD]
Inverse[MA]
-Inverse[MD].B.Inverse[DD]
-Inverse[DD].CC.Inverse[MD]
Inverse[DD] + Inverse[DD].CC.Inverse[MD].B.Inverse[DD]
Inverse[A] + Inverse[A].B.Inverse[MA].CC.Inverse[A]
-Inverse[MA].CC.Inverse[A]
-Inverse[A].B.Inverse[MA]
Out[11]= {{82/1089, 134/363, -(53/99)}, {7/363, 38/121, -(17/33)}, {2/33, 3/11, -(1/3)}}
Out[12]= {{46/363, -(2/363)}, {-(74/1089), 19/1089}}
Out[13]= {{395/1089, -(175/1089)}, {47/363, 140/363}, {4/33, 7/33}}
Out[14]= {{-(109/363), -(4/121), -(4/33)}, {128/1089, -(83/363), 38/99}}
M = {{3, 5, 2, -1, 3}, {1, 2, -1, 1, 2}, {-3, 2, 2, 4, -6}, {1, -3, 3, -11, 18}, {2, 1, -2, -8, 13}}
Inverse[M]*3501
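To confirm the block formulas numerically, here is a sketch assembling \(M^{-1}\) from its four blocks in numpy, with the same A, B, CC, DD as above:
import numpy as np

A = np.array([[3, 5], [1, 2]])
B = np.array([[2, -1, 3], [-1, 1, 2]])
CC = np.array([[-3, 2], [1, -3], [2, 1]])
DD = np.array([[2, 4, -6], [-3, -11, 18], [-2, -8, 13]])

M = np.block([[A, B], [CC, DD]])

Ainv = np.linalg.inv(A)
MA = DD - CC @ Ainv @ B      # Schur complement of A
MAinv = np.linalg.inv(MA)

# Assemble M^{-1} block by block and compare with the direct inverse.
Minv_blocks = np.block([
    [Ainv + Ainv @ B @ MAinv @ CC @ Ainv, -Ainv @ B @ MAinv],
    [-MAinv @ CC @ Ainv, MAinv],
])
print(np.allclose(Minv_blocks, np.linalg.inv(M)))   # True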
Block Diagonal Matrices
A block diagonal matrix is a block matrix that is a square matrix whose main-diagonal blocks are themselves square matrices and whose off-diagonal blocks are zero matrices. A block diagonal matrix M has the form
\[
M = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_k \end{bmatrix}.
\]
For the determinant and trace, the following properties hold:
\[
\det M = \det A_1 \cdot \det A_2 \cdots \det A_k, \qquad \operatorname{tr} M = \operatorname{tr} A_1 + \operatorname{tr} A_2 + \cdots + \operatorname{tr} A_k .
\]
Moreover, when all diagonal blocks are invertible, the inverse is again block diagonal, with blocks \(A_i^{-1}\).
A1 = {{3, 1}, {5, 2}}
A4 = {{2, 1, -1}, {1, 1, 1}, {2, 3, 4}}
zero23 = ConstantArray[0, {2, 3}]
zero32 = ConstantArray[0, {3, 2}]
A = ArrayFlatten[{{A1, zero23}, {zero32, A4}}]
Det[A]
Det[A1]
Det[A4]
Inverse[A]
Inverse[A1]
Inverse[A4]
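The same structure can be built in Python; here is a sketch using scipy.linalg.block_diag on the blocks A1 and A4 from the Mathematica example to verify the determinant and inverse properties:
import numpy as np
from scipy.linalg import block_diag

A1 = np.array([[3, 1], [5, 2]])
A4 = np.array([[2, 1, -1], [1, 1, 1], [2, 3, 4]])

# Assemble the block diagonal matrix with zero off-diagonal blocks.
A = block_diag(A1, A4)
print(A)

# det A = det(A1) * det(A4); the two printed numbers agree.
print(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(A4))

# The inverse is block diagonal with blocks A1^{-1} and A4^{-1}.
print(np.allclose(np.linalg.inv(A), block_diag(np.linalg.inv(A1), np.linalg.inv(A4))))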
Block Tridiagonal Matrices
A block tridiagonal matrix is another special block matrix which, just like a block diagonal matrix, is a square matrix, having square blocks on the lower diagonal, the main diagonal, and the upper diagonal, with all other blocks being zero matrices. It is essentially a tridiagonal matrix that has submatrices in place of scalars. A block tridiagonal matrix M has the form
\[
M = \begin{bmatrix}
B_1 & C_1 & & & \\
A_2 & B_2 & C_2 & & \\
& \ddots & \ddots & \ddots & \\
& & A_{k-1} & B_{k-1} & C_{k-1} \\
& & & A_k & B_k
\end{bmatrix}.
\]
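A small sketch constructing a block tridiagonal matrix in numpy from 2-by-2 blocks (the particular blocks are arbitrary examples of our own):
import numpy as np

# Diagonal blocks B1..B3, lower blocks A2, A3, upper blocks C1, C2.
B1, B2, B3 = np.eye(2), 2 * np.eye(2), 3 * np.eye(2)
A2 = A3 = np.ones((2, 2))
C1 = C2 = -np.ones((2, 2))
Z = np.zeros((2, 2))          # zero block

M = np.block([
    [B1, C1, Z],
    [A2, B2, C2],
    [Z, A3, B3],
])
print(M)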
Direct Sum
Direct Product