
Every m-by-n matrix A ∈ \( \mathbb{F}^{m\times n} \) defines a linear transformation \( \mathbb{F}^{n\times 1} \to \mathbb{F}^{m\times 1} \) by mapping every input x from the domain \( \mathbb{F}^{n\times 1} \) to the output y = A x in the codomain \( \mathbb{F}^{m\times 1} \). Therefore, a matrix can be regarded as a linear operator acting on a finite-dimensional space of column vectors. This transformation extends naturally to the Cartesian product vector spaces, \( T_A : \mathbb{F}^n \to \mathbb{F}^m . \) The set of all outputs is called the range or image of the transformation.
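For instance, here is a minimal Mathematica sketch of a matrix acting as a linear map from 𝔽³ to 𝔽² (the 2 × 3 matrix and the input vector are illustrative choices, not taken from the examples below):
A = {{1, 0, 2}, {0, 1, 3}}; (* an illustrative 2-by-3 matrix *)
x = {4, 5, 6}; (* an input vector from the domain *)
A . x
{16, 23}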

Let X and Y be vector spaces over the same field 𝔽, and let T : X → Y be a linear transformation. If there exists a linear map L : Y → X such that (L T)(x) = L(T(x)) = x for all x in the domain X, then L is called a left inverse of T.
If there exists a linear map R : Y → X such that (T R)(y) = T(R(y)) = y for all y ∈ Y, then R is called a right inverse of T.
Note the duality of right and left inverses: if L is a left inverse of T, then T is a right inverse of L. The finite-dimensional case of vector spaces leads to similar definitions for matrices. Namely, if for a matrix A ∈ \( \mathbb{F}^{m\times n} \) there exists a matrix L ∈ \( \mathbb{F}^{n\times m} \) such that L A = \( {\bf I}_n , \) the identity matrix of dimension n, then L is called a left inverse of A. If for a matrix R ∈ \( \mathbb{F}^{n\times m} \) we have A R = \( {\bf I}_m , \) then R is said to be a right inverse of A.

 

Left inverse


Recall that an m × n matrix A has full column rank if its columns are linearly independent, i.e., if its rank r = n. Such matrices have a "tall" shape when the number m of rows is greater than or equal to the number n of columns (m ≥ n). Tall full-rank matrices have only left inverses (a right inverse exists only in the square case m = n). In this case the nullspace of A contains just the zero vector, and the equation A x = b has either exactly one solution or no solution at all.
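A quick check of the trivial nullspace for the tall matrix used in Example 1 below:
A = {{1, 2}, {3, 4}, {5, 6}};
MatrixRank[A] (* equals the number of columns: full column rank *)
2
NullSpace[A] (* empty list: the nullspace contains only the zero vector *)
{}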

The Mathematica code below displays every matrix as a matrix rather than in the default form, which is a list of lists:

$Post := If[MatrixQ[#1], MatrixForm[#1], #1] &

Theorem 1: Let X and Y be vector spaces over the same field 𝔽, and let T : X → Y be a linear transformation. The transformation T has a left inverse if and only if it is one-to-one, i.e., injective.

Suppose there exists a left inverse L of T. To see that T is injective, let u, v ∈ X be such that T(u) = T(v). Then \[ {\bf u} = L \left( T({\bf u}) \right) = L \left( T({\bf v}) \right) = {\bf v} . \]

If T is not injective, then there exists v ≠ 0 such that T(v) = 0, so T maps the two distinct elements v and 0 into one (the zero vector). Then a left inverse of T does not exist, because L cannot recover v: \[ L \left( T({\bf v}) \right) = L \left( T({\bf 0}) \right) = {\bf 0} \ne {\bf v} . \]

Conversely, we give a constructive proof. Suppose that T is injective. Let α = {xᵢ : i ∈ I} be a basis for X. Then β = {T(xᵢ) : i ∈ I} is a linearly independent set that generates a subspace U ⊆ Y. There exists a basis γ of Y that contains β. We construct a left inverse map L for T by the rule \[ L({\bf u}) = \begin{cases} {\bf v} , & \quad \mbox{if} \quad {\bf u} \in \mbox{span}\{ \beta \}, \quad \mbox{with} \quad {\bf u} = T({\bf v}) , \\ {\bf 0} , & \quad \mbox{if} \quad {\bf u} \not\in \mbox{span}\{\beta \} , \end{cases} \] extended to all of Y by linearity (the vector v is unique because T is injective). Then for \( \displaystyle {\bf x} = \sum_i c_i {\bf x}_i \in X , \quad \) we have \[ L \left( T({\bf x}) \right) = L\,T \left( \sum_i c_i {\bf x}_i \right) = \sum_i c_i L\,T \left( {\bf x}_i \right) = \sum_i c_i {\bf x}_i = {\bf x} . \] Hence, L is a left inverse of T.

Example 1: We consider the 3 × 2 matrix \[ {\bf A} = \begin{bmatrix} 1&2 \\ 3&4 \\ 5&6 \end{bmatrix} . \]

 

To find all left inverse matrices, we need to solve the equation X A = I, i.e., \[ \begin{bmatrix} x_1 & x_2 & x_3 \\ x_4 & x_5 & x_6 \end{bmatrix} \cdot \begin{bmatrix} 1&2 \\ 3&4 \\ 5&6 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} \]

X = {{Subscript[x, 1], Subscript[x, 2], Subscript[x, 3]}, {Subscript[x, 4], Subscript[x, 5], Subscript[x, 6]}};
This leads to the matrix equation \[ \begin{bmatrix} x_1 + 3\, x_2 + 5\, x_3 & 2\, x_1 + 4\, x_2 + 6\, x_3 \\ x_4 + 3\, x_5 + 5\, x_6 & 2\, x_4 + 4\, x_5 + 6\, x_6 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} . \] So we have four equations with six unknowns: \[ \begin{split} x_1 + 3\, x_2 + 5\, x_3 &= 1 , \\ 2\, x_1 + 4\, x_2 + 6\, x_3 &= 0 , \\ x_4 + 3\, x_5 + 5\, x_6 &= 0, \\ 2\, x_4 + 4\, x_5 + 6\, x_6 &= 1 . \end{split} \] The corresponding augmented matrix is \[ \left[ {\bf A}^{\mathrm T} \mid {\bf I} \right] = \left[ \begin{array}{ccc|cc} 1&3&5&1&0 \\ 2&4&6&0&1 \end{array} \right] . \]
iMat = IdentityMatrix[2];
augMat = ArrayFlatten[{{Transpose[A], iMat}}]
\( \displaystyle \begin{pmatrix} 1&3&5&1&0 \\ 2&4&6&0&1 \end{pmatrix} \)
Actually, this system of four equations splits into two unrelated systems (one for the variables x₁, x₂, x₃ and another for the variables x₄, x₅, x₆), and we solve each of them separately. To solve the first system of equations, we build the augmented matrix \[ {\bf M} = \left[ \begin{array}{ccc|c} 1&3&5&1 \\ 2&4&6&0 \end{array} \right] . \] Then we convert this matrix to its reduced row echelon form:
augMat2 = ArrayFlatten[{{Transpose[A], {{1}, {0}}}}];
RowReduce[augMat2]
M = {{1, 3, 5, 1}, {2, 4, 6, 0}};
RowReduce[M] // MatrixForm
\( \displaystyle \begin{pmatrix} 1&0&-1&-2 \\ 0&1&2&1 \end{pmatrix} \)
Reading the rows of the reduced matrix, we obtain two equations with three unknowns (one of which cannot be determined, so it plays the role of a free variable): \[ x_1 - x_3 = -2 \qquad \mbox{and} \qquad x_2 + 2\, x_3 = 1 . \] This form tells us that the system has infinitely many solutions with one free variable, say x₃, which we denote by t. Then \[ x_1 = -2 +t, \quad x_2 = 1 - 2t , \quad x_3 = t, \qquad t \in \mathbb{R} . \] We check with another command:
soln1 = Solve[{x1 + 3*x2 + 5*t == 1, 2*x1 + 4*x2 + 6*t == 0}, {x1, x2}]
{{x1 -> -2 + t, x2 -> 1 - 2 t}}
soln1C = CoefficientList[soln1[[1, All, 2]], t]
\( \displaystyle \begin{pmatrix} -2&1 \\ 1&-2 \end{pmatrix} \)

We verify the answer:
soln1C . ({ {1}, {t} })
\( \displaystyle \begin{pmatrix} -2 +t \\ 1-2t \end{pmatrix} \)
and
({x1, x2} /. soln1)[[1]] == Flatten[soln1C . ({ {1}, {t} })]
True
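The pattern (t, −2t, t) of the free part is not accidental: the homogeneous equation x A = 0 for a row vector x means that xᵀ lies in the nullspace of Aᵀ, which Mathematica returns directly (a supplementary check, not part of the original computation):
A = {{1, 2}, {3, 4}, {5, 6}};
NullSpace[Transpose[A]]
{{1, -2, 1}}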

Similarly, we use Mathematica to find the general solution of another set of variables (x₄, x₅, x₆):

soln2 = Solve[{x4 + 3*x5 + 5*s == 0, 2*x4 + 4*x5 + 6*s == 1}, {x4, x5}]
{{x4 -> 1/2 (3 + 2 s), x5 -> 1/2 (-1 - 4 s)}}
\[ x_4 = \frac{3}{2} + s , \quad x_5 = - \frac{1}{2} -2s, \quad x_6 = s, \qquad s\in \mathbb{R} . \]
soln2C = CoefficientList[soln2[[1, All, 2]], s];
({x4, x5} /. soln2)[[1]] == Flatten[soln2C . {{1}, {s}}]
True
Therefore, the left inverse is not unique and depends on two real parameters: \[ {\bf P} = \begin{bmatrix} -2 & 1& 0 \\ \frac{3}{2} & -\frac{1}{2} &0 \end{bmatrix} + \begin{bmatrix} t & -2t & t \\ s &-2s& s \end{bmatrix} , \qquad t,s \in \mathbb{R} . \] We check our answer with Mathematica:
A = {{1, 2}, {3, 4}, {5, 6}}; P = {{-2, 1, 0}, {3/2, -1/2, 0}};
P.A
{{1, 0}, {0, 1}}
We also check that the matrix \( \displaystyle \quad {\bf C} = \begin{bmatrix} t&-2t&t \\ s&-2s&s \end{bmatrix} \quad \) annihilates A (that is, C A is the zero matrix):
A = {{1, 2}, {3, 4}, {5, 6}} ;
c = {{t, -2*t, t}, {s, -2*s, s}} ;
c.A
{{0, 0}, {0, 0}}
End of Example 1

For every full column rank real matrix A, the n × n matrix \( {\bf A}^{\mathrm T} {\bf A} \) is symmetric and non-singular. Hence, \( \left( {\bf A}^{\mathrm T} {\bf A} \right)^{-1} {\bf A}^{\mathrm T} {\bf A} = {\bf I}_n , \) the identity matrix. In this case, the matrix \( \left( {\bf A}^{\mathrm T} {\bf A} \right)^{-1} {\bf A}^{\mathrm T} \) is a left inverse, which we denote by \( \displaystyle {\bf A}_{left}^{-1} . \) Note that this left inverse is not unique: there could be many others.
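This particular left inverse is exactly the Moore--Penrose pseudoinverse of a full-column-rank real matrix, so the formula can be cross-checked against the built-in PseudoInverse (a sketch using the matrix of Example 1):
A = {{1, 2}, {3, 4}, {5, 6}};
Inverse[Transpose[A] . A] . Transpose[A] == PseudoInverse[A]
True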

Note that the left inverse is an n × m matrix, with m ≥ n. It can also be a right inverse only when m = n. For a given left inverse \( \displaystyle {\bf A}_{left}^{-1} , \) adding any matrix C such that C A = 0 produces another left inverse.
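A symbolic confirmation of this observation for the matrix of Example 1, with arbitrary parameters t and s:
A = {{1, 2}, {3, 4}, {5, 6}};
left = Inverse[Transpose[A] . A] . Transpose[A]; (* one particular left inverse *)
c = {{t, -2*t, t}, {s, -2*s, s}}; (* any matrix with c.A = 0 *)
Expand[(left + c) . A] == IdentityMatrix[2]
True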

 

Right inverse


If an m × n matrix A has full row rank, then r = m. We will call such matrices "wide." The nullspace of \( {\bf A}^{\mathrm T} \) contains only the zero vector because the rows of A are independent. The equation A x = b always has at least one solution; the nullspace of A has dimension n − m, so there are n − m free variables and (if n > m) infinitely many solutions!
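For instance, a particular solution of B x = b can always be produced from the right inverse formula discussed below (a sketch with the wide matrix of Example 2 and an arbitrarily chosen right-hand side b):
B = {{1, 2, 3}, {4, 5, 6}};
b = {7, 8}; (* an arbitrary right-hand side *)
xp = Transpose[B] . Inverse[B . Transpose[B]] . b; (* a particular solution *)
B . xp == b
True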

Theorem 2: Let X and Y be vector spaces over the same field 𝔽, and let T : X → Y be a linear transformation. The transformation T has a right inverse if and only if it is onto, i.e., surjective.

If T is not onto, then its image is a proper subset of the codomain Y, so there exists a vector y ∈ Y that is not in the range of T. Then no right inverse R can exist, because T(R(y)) always lies in the range of T and therefore can never equal y.

Now suppose that T is surjective. Let α = {xᵢ : i ∈ I} be a basis of X. Then T(α) generates Y, so there exists a subset β ⊆ T(α) that is a basis of Y. Now for every vector yᵢ in β, yᵢ = T(xᵢ) for some vector xᵢ ∈ X. The vector xᵢ may not be unique, but that does not matter. Let R : Y → X be the linear transformation defined on the basis β by R(yᵢ) = xᵢ. Then for each vector yᵢ in the basis β of Y, T R(yᵢ) = T(xᵢ) = yᵢ. Since T R agrees with the identity map on a basis of Y, we conclude that T R = I, and R is a right inverse of T.

On the other hand, suppose there is a linear transformation R : Y → X with T R = I. Then for any vector y ∈ Y, \[ {\bf y} = I({\bf y}) = T\,R({\bf y}) = T\left( R({\bf y}) \right) , \] so y is in the range of T, and T is surjective.

Example 2: We consider a wide 2-by-3 matrix \[ {\bf B} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \end{bmatrix} . \] Its product with its transpose is a symmetric square matrix, \[ {\bf B}\, {\bf B}^{\mathrm T} = \begin{bmatrix} 14 & 32 \\ 32&77 \end{bmatrix} \qquad \Longrightarrow \qquad \left( {\bf B}\, {\bf B}^{\mathrm T} \right)^{-1} = \frac{1}{54} \begin{bmatrix} \phantom{-}77 & -32 \\ -32 & \phantom{-}14 \end{bmatrix} . \]
Clear[A, B, t, x, P];
B = {{1, 2, 3}, {4, 5, 6}};
B.Transpose[B]
{{14, 32}, {32, 77}}
Inverse[%]
{{77/54, -(16/27)}, {-(16/27), 7/27}}
However, the other symmetric matrix \( {\bf B}^{\mathrm T} {\bf B} \) is not invertible because its determinant is zero: \[ {\bf B}^{\mathrm T} {\bf B} = \begin{bmatrix} 17&22&27 \\ 22& 29& 36 \\ 27& 36& 45 \end{bmatrix} . \]
B = {{1, 2, 3}, {4, 5, 6}};
BtB = Transpose[B] . B;
SymmetricMatrixQ[BtB]
Det[BtB]
True
0
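The vanishing determinant reflects the rank: rank(BᵀB) = rank(B) = 2 < 3, so the 3 × 3 matrix BᵀB must be singular. Mathematica confirms:
B = {{1, 2, 3}, {4, 5, 6}};
MatrixRank[Transpose[B] . B]
2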
Then we can determine a right inverse \[ {\bf Q} = {\bf B}^{\mathrm T} \left( {\bf B}\,{\bf B}^{\mathrm T} \right)^{-1} = \frac{1}{18} \begin{bmatrix} -17& \phantom{-}8 \\ -2& \phantom{-}2 \\ \phantom{-}13&-4 \end{bmatrix} . \]
Q = Transpose[B].Inverse[B.Transpose[B]]
{{-(17/18), 4/9}, {-(1/9), 1/9}, {13/18, -(2/9)}}
Finally, we verify that the 3 × 2 matrix Q is a right inverse of B:
B.Q
{{1, 0}, {0, 1}}

 

To find all right inverse matrices, we need to solve the equation B X = I, i.e., \[ \begin{bmatrix} 1&2&3 \\ 4&5&6 \end{bmatrix} \, \begin{bmatrix} x_1 & x_4 \\ x_2 & x_5 \\ x_3 & x_6 \end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} . \] This leads to the system of six linear equations \[ \begin{split} x_1 + 2\, x_2 + 3\, x_3 &= 1 , \\ 4\, x_1 + 5\, x_2 + 6\, x_3 &= 0 , \\ x_4 + 2\, x_5 + 3\, x_6 &= 0 , \\ 4\, x_4 + 5\, x_5 + 6\, x_6 &= 1 . \end{split} \] We ask Mathematica to solve this system:

Solve[{x1 + 2*x2 + 3*t == 1, 4*x1 + 5*x2 + 6*t == 0}, {x1, x2}] // FullSimplify
{{x1 -> 1/3 (-5 + 3 t), x2 -> -(2/3) (-2 + 3 t)}}
\[ x_1 = -\frac{5}{3} +t , \quad x_2 = \frac{4}{3} - 2t , \quad x_3 = t, \qquad t \in \mathbb{R} . \] We repeat a similar procedure for the other three variables (x₄, x₅, x₆).
Solve[{x4 + 2*x5 + 3*s == 0, 4*x4 + 5*x5 + 6*s == 1}, {x4, x5}] // FullSimplify
{{x4 -> 1/3 (2 + 3 s), x5 -> 1/3 (-1 - 6 s)}}
\[ x_4 = \frac{2}{3} +s , \quad x_5 = -\frac{1}{3} - 2s, \quad x_6 = s, \qquad s\in \mathbb{R}. \] Therefore, we obtain a two-parameter family of right inverse matrices: \[ {\bf B}_{right}^{-1} = \begin{bmatrix} - \frac{5}{3} & \phantom{-}\frac{2}{3} \\ \phantom{-}\frac{4}{3} & -\frac{1}{3} \\ \phantom{-}0&\phantom{-}0 \end{bmatrix} + \begin{bmatrix} \phantom{-}t & \phantom{-}s \\ -2t & -2s \\ \phantom{-}t&\phantom{-}s \end{bmatrix} , \qquad s,t \in \mathbb{R} . \] We check that matrix \( \displaystyle \quad {\bf R} = \frac{1}{3} \begin{bmatrix} -5&\phantom{-}2 \\ \phantom{-}4& -1 \\ \phantom{-}0&\phantom{-}0 \end{bmatrix} \quad \) is also a right inverse:
rr = (1/3)*{{-5, 2}, {4, -1}, {0, 0}};
B . rr
{{1, 0}, {0, 1}}
Also, the matrix \( \displaystyle \quad {\bf C} = \begin{bmatrix} \phantom{-}t&\phantom{-}s \\ -2t & -2s \\ \phantom{-}t&\phantom{-}s \end{bmatrix} \quad \) annihilates the matrix B from the right, so B C = 0. Hence, for any real values of s and t, the columns of C belong to the nullspace of B.
c2 = {{t, s}, {-2*t, -2*s}, {t, s}};
B . c2
\( \displaystyle \begin{pmatrix} 0&0 \\ 0&0 \end{pmatrix} \)
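Indeed, the columns of C are scalar multiples of the single basis vector of the nullspace of B, which can be computed directly (a supplementary check):
B = {{1, 2, 3}, {4, 5, 6}};
NullSpace[B]
{{1, -2, 1}}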
End of Example 2

Matrices with full row rank have right inverses \( \displaystyle {\bf A}_{right}^{-1} , \) with \( \displaystyle {\bf A}\,{\bf A}_{right}^{-1} = {\bf I} . \) We can choose as a right inverse the n × m matrix \( {\bf A}^{\mathrm T} \left( {\bf A}\,{\bf A}^{\mathrm T} \right)^{-1} . \)
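As with the left inverse, this particular right inverse coincides with the Moore--Penrose pseudoinverse of a full-row-rank real matrix; a quick cross-check on the matrix of Example 2:
B = {{1, 2, 3}, {4, 5, 6}};
Transpose[B] . Inverse[B . Transpose[B]] == PseudoInverse[B]
True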

Theorem 3: An n × m matrix X is a left inverse of an m × n matrix A if and only if its adjoint X* is a right inverse of A*.

This statement is just a reformulation of the two previous theorems exhibiting their duality: taking adjoints in X A = \( {\bf I}_n \) gives A* X* = \( {\bf I}_n . \)
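Since the matrices in our examples are real, the adjoint reduces to the transpose; here is a minimal check of Theorem 3 with the left inverse found in Example 1:
A = {{1, 2}, {3, 4}, {5, 6}};
X = {{-2, 1, 0}, {3/2, -1/2, 0}}; (* a left inverse of A from Example 1 *)
ConjugateTranspose[A] . ConjugateTranspose[X] == IdentityMatrix[2]
True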
Example 3: We consider the tall 3 × 2 matrix \[ {\bf A} = \begin{bmatrix} 1&2 \\ 3&4 \\ 5&6 \end{bmatrix} . \] First, we check that A is a full rank matrix:
Clear[A, B, t, x, P];
A = {{1, 2}, {3, 4}, {5, 6}} ;
MatrixRank[A]
2
Then we build two symmetric matrices: \[ {\bf A}^{\mathrm T} {\bf A} = \begin{bmatrix} 35&44 \\ 44&56 \end{bmatrix} \qquad\mbox{and} \qquad {\bf A}\,{\bf A}^{\mathrm T} = \begin{bmatrix} 5&11&17 \\ 11&25&39 \\ 17&39&61 \end{bmatrix} . \]
A = {{1, 2}, {3, 4}, {5, 6}} ;
AtA = Transpose[A] . A
{{35, 44}, {44, 56}}
A = {{1, 2}, {3, 4}, {5, 6}} ;
AAt = A.Transpose[A]
{{5, 11, 17}, {11, 25, 39}, {17, 39, 61}}
The matrix \( {\bf A}^{\mathrm T} {\bf A} \) is not singular, but \( {\bf A}\,{\bf A}^{\mathrm T} \) is not invertible, as Mathematica confirms:
Det[AtA]
24
Det[AAt]
0
We find one left inverse matrix \( \displaystyle {\bf A}_{left}^{-1} = \left( {\bf A}^{\mathrm T} {\bf A} \right)^{-1} {\bf A}^{\mathrm T} : \) \[ {\bf A}_{left}^{-1} = \left( {\bf A}^{\mathrm T} {\bf A} \right)^{-1} {\bf A}^{\mathrm T} = \begin{bmatrix} -\frac{4}{3} & - \frac{1}{3} & \phantom{-}\frac{2}{3} \\ \phantom{-}\frac{13}{12} & \phantom{-}\frac{1}{3} & -\frac{5}{12} \end{bmatrix} . \]
left = Inverse[AtA] . Transpose[A]
{{-(4/3), -(1/3), 2/3}, {13/12, 1/3, -(5/12)}}
We check with Mathematica:
left . A
{{1, 0}, {0, 1}}
Tall matrix A has no right inverse matrix, but its transpose does. We use a duality rule and apply transposition to A and its left inverse: \[ {\bf A}^{\mathrm T} \left( \left( {\bf A}^{\mathrm T} {\bf A} \right)^{-1} {\bf A}^{\mathrm T} \right)^{\mathrm T} = {\bf I} \qquad \iff \qquad \begin{bmatrix} 1&3&5 \\ 2&4&6 \end{bmatrix} \,\begin{bmatrix} -\frac{4}{3} & \phantom{-}\frac{13}{12} \\ -\frac{1}{3} & \phantom{-}\frac{1}{3} \\ \phantom{-}\frac{2}{3}& - \frac{5}{12}\end{bmatrix} = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} . \]
Transpose[A] . Transpose[left]
{{1, 0}, {0, 1}}
We can slightly simplify the transpose to the left inverse: \[ \left( {\bf A}^{\mathrm T} \right)_{right}^{-1} = \left( \left( {\bf A}^{\mathrm T} {\bf A} \right)^{-1} {\bf A}^{\mathrm T} \right)^{\mathrm T} = {\bf A} \left( {\bf A}^{\mathrm T} {\bf A} \right)^{-1} . \]
A . Inverse[Transpose[A] . A] == Transpose[Inverse[Transpose[A] . A] . Transpose[A]]
True
End of Example 3

  1. Determine whether the following matrices are invertible \[ \mbox{(a)}\quad \begin{bmatrix} 1 & 2 & -3 \\ 0 & 5 & -1 \\ -1 & 2 & 2 \\ \end{bmatrix}, \qquad \mbox{(b)}\quad \begin{bmatrix} -3 & 3 & 2 \\ -1 & -2 & -1 \\ 0 & 2 & 1 \\ \end{bmatrix}, \qquad \mbox{(c)}\quad \begin{bmatrix} 2 & 1 & 3 \\ -2 & 1 & -2 \\ 3 & 2 & 5 \\ \end{bmatrix}, \qquad \mbox{(d)}\quad \begin{bmatrix} -2 & 1 & -1 \\ -3 & -3 & 3 \\ -1 & 3 & -3 \\ \end{bmatrix} . \]
  2. Use Gauss--Jordan elimination procedure to find the inverse of the matrix. \[ \mbox{(a)}\quad \begin{bmatrix} 1 & 2 & -3 \\ 0 & 5 & -1 \\ -1 & 2 & 2 \\ \end{bmatrix} , \qquad \mbox{(b)}\quad \begin{bmatrix} -3 & 3 & 2 \\ -1 & -2 & -1 \\ 0 & 2 & 1 \end{bmatrix} . \]
  3. Use the adjoint (adjugate) method to find the inverse of the matrix. \[ \mbox{(a)}\quad \begin{bmatrix} 3 & 3 & -1 \\ -2 & 0 & 3 \\ 2 & 1 & -2 \end{bmatrix} , \qquad \mbox{(b)}\quad \begin{bmatrix} -3 & -1 & -3 \\ 2 & 1 & 1 \\ 2 & 1 & 0 \end{bmatrix} . \]
  4. The n-by-n Hilbert matrix Hₙ = [hᵢⱼ] is given by \( \displaystyle h_{i,j} = \int_0^1 x^{i+j-2} \,{\text d} x = 1/(i + j - 1). \) Thus, \[ H_2 = \begin{bmatrix} 1 & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{3} \end{bmatrix} , \qquad H_3 = \begin{bmatrix} 1 & \frac{1}{2} & \frac{1}{3} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \end{bmatrix} . \] Find the inverses of the matrices above (for n = 2 and 3).
  5. Let A = [𝑎i,j] be the n-by-n matrix with 𝑎i,j = min{i, j} for 1 ≤ i, jn. Find A−1.
  6. Prove the following identities for partitioned matrices.
    1. \( \displaystyle \begin{bmatrix} {\bf I} & {\bf B} \\ {\bf 0} & {\bf I} \end{bmatrix}^{-1} = \begin{bmatrix} {\bf I} & -{\bf B} \\ {\bf 0} & {\bf I} \end{bmatrix} \quad \) and \( \displaystyle \quad \begin{bmatrix} {\bf I} & {\bf 0} \\ {\bf C} & {\bf I} \end{bmatrix}^{-1} = \begin{bmatrix} {\bf I} & {\bf 0} \\ -{\bf C} & {\bf I} \end{bmatrix} . \)
    2. \( \displaystyle \begin{bmatrix} {\bf A} & {\bf 0} \\ {\bf 0} & \Lambda \end{bmatrix}^{-1} = \begin{bmatrix} {\bf A}^{-1} & {\bf 0} \\ {\bf 0} & \Lambda^{-1} \end{bmatrix} , \) where Λ is a diagonal matrix.
  7. Prove the identity for three invertible matrices: \[ \left( {\bf A}\,{\bf B}\,{\bf C} \right)^{-1} = {\bf C}^{-1} {\bf B}^{-1} {\bf A}^{-1} . \]
