\[
{\bf C} = \begin{bmatrix}
C_{11} & C_{12} & \cdots & C_{1n} \\
C_{21} & C_{22} & \cdots & C_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
C_{n1} & C_{n2} & \cdots & C_{nn}
\end{bmatrix} .
\]
Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:
\begin{equation} \label{EqDet.3}
{\bf A}^{-1} = \frac{1}{\det ({\bf A})} \, {\bf C}^{\textrm T} .
\end{equation}
The transpose of the cofactor matrix is called the adjugate matrix of A.
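As a quick illustration of formula \eqref{EqDet.3}, here is a minimal Mathematica sketch (cofactor and adjugate are user-defined helpers for this illustration, not built-in functions) that assembles the adjugate from cofactors and compares it with the built-in Inverse; the test matrix is the one from Example 1 below.
cofactor[m_, i_, j_] := (-1)^(i + j) Det[Drop[m, {i}, {j}]];  (* Drop deletes row i and column j *)
adjugate[m_] := Transpose[Table[cofactor[m, i, j], {i, Length[m]}, {j, Length[m]}]];
A = {{1, 4, 3}, {2, -1, 2}, {1, 2, 2}};
adjugate[A]/Det[A] == Inverse[A]
True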
An n × n square matrix A is called invertible if there exists an n × n matrix B such that
\[
{\bf A}\, {\bf B} = {\bf B}\,{\bf A} = {\bf I} ,
\]
where I is the identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by \( {\bf A}^{-1} . \) If det(A) ≠ 0, then the matrix is invertible. A square matrix that is its own inverse, i.e., \( {\bf A} = {\bf A}^{-1} \) and \( {\bf A}^{2} = {\bf I}, \) is called an involution or involutory matrix.
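A quick Mathematica check (an illustrative sketch) confirms that the coordinate-swap matrix is an involution:
S = {{0, 1}, {1, 0}};  (* swaps the two coordinates *)
S.S == IdentityMatrix[2]
True
Inverse[S] == S
True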
We list the main properties of determinants (a short numerical check with Mathematica follows the list):
1. \( \det ({\bf I} ) = 1 ,\) where I is the identity matrix (all entries are zeroes except diagonal terms, which all are ones).
2. \( \det \left( {\bf A}^{\mathrm T} \right) = \det \left( {\bf A} \right) . \)
3. \( \det \left( {\bf A}^{-1} \right) = 1/\det \left( {\bf A} \right) = \left( \det {\bf A} \right)^{-1} . \)
4. \( \det \left( {\bf A}\, {\bf B} \right) = \det \left( {\bf A} \right) \det \left( {\bf B} \right) \) for square matrices A and B of the same size.
5. \( \det \left( c\,{\bf A} \right) = c^n \,\det \left( {\bf A} \right) \) for \( n\times n \) matrix
A and a scalar c.
6. If \( {\bf A} = [a_{i,j}] \) is a triangular matrix, i.e. \( a_{i,j} = 0 \) whenever i > j or, alternatively, whenever i < j, then its determinant equals the product of the diagonal entries:
\[
\det \left( {\bf A} \right) = a_{1,1} a_{2,2} \cdots a_{n,n} = \prod_{i=1}^n a_{i,i} .
\]
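Here is a spot-check of properties 2, 4, and 5 in Mathematica (an illustrative numerical sketch, using the matrix from Example 1 below together with an arbitrarily chosen second matrix):
A = {{1, 4, 3}, {2, -1, 2}, {1, 2, 2}}; B = {{2, 0, 1}, {1, 1, 0}, {0, 3, 1}};
Det[Transpose[A]] == Det[A]
True
Det[A.B] == Det[A] Det[B]
True
Det[3 A] == 3^3 Det[A]   (* property 5 with c = 3 and n = 3 *)
True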
Example 1:
Consider the 3×3 matrix
\[
{\bf A} = \begin{bmatrix} 1&\phantom{-}4&3 \\ 2&-1&2 \\ 1&\phantom{-}2&2 \end{bmatrix}
\]
First, we check that the matrix is not singular:
A = {{1, 4, 3}, {2, -1, 2}, {1, 2, 2}};
Det[A]
1
Then we use Mathematica to find its inverse:
Inverse[A]
{{-6, -2, 11}, {-2, -1, 4}, {5, 2, -9}}
\[
{\bf A}^{-1} = \begin{bmatrix} 1&\phantom{-}4&3 \\ 2&-1&2 \\ 1&\phantom{-}2&2 \end{bmatrix}^{-1} = \begin{bmatrix} -6&-2&11 \\ -2&-1&\phantom{-}4 \\ \phantom{-}5& \phantom{-}2&-9 \end{bmatrix} .
\]
Now we find its inverse manually. First, we calculate the minor matrix:
\[
{\bf M} = \begin{bmatrix} -6& \phantom{-}2& \phantom{-}5 \\ \phantom{-}2& -1&-2 \\ 11 & -4& -9
\end{bmatrix}
\]
because
\[
m_{11} = \begin{vmatrix} -1&2 \\ 2&2 \end{vmatrix} = -6, \quad m_{12} = \begin{vmatrix} 2&2 \\ 1&2 \end{vmatrix} =2, \quad m_{13} = \begin{vmatrix} 2&-1 \\ 1&2 \end{vmatrix} = 5, \quad m_{21} = \begin{vmatrix} 4&3 \\ 2&2 \end{vmatrix} = 2,
\]
and so on. Since the determinant of matrix A is 1, its inverse is the transpose of the cofactor matrix: each entry of M is multiplied by \( (-1)^{i+j} \) and the result is transposed.
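We can reproduce the minor matrix in Mathematica (a short sketch assuming the session above, where A is still defined); Drop[A, {i}, {j}] deletes row i and column j:
minors = Table[Det[Drop[A, {i}, {j}]], {i, 3}, {j, 3}]
{{-6, 2, 5}, {2, -1, -2}, {11, -4, -9}}
Transpose[minors Table[(-1)^(i + j), {i, 3}, {j, 3}]] == Inverse[A]   (* checkerboard signs, then transpose *)
True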
▣
Example 2:
Consider the 2×2 matrix
\[
{\bf B} = \begin{bmatrix} \phantom{-}0&1 \\ -1&0 \end{bmatrix} ,
\]
which we enter into a Mathematica notebook as
B = {{0, 1}, {-1, 0}}
Out[1]= {{0, 1}, {-1, 0}}
Its inverse is
\( {\bf B}^{-1} = -{\bf B} =
\begin{bmatrix} 0&-1 \\ 1&\phantom{-}0 \end{bmatrix} . \)
Inverse[B]
Out[2]= {{0, -1}, {1, 0}}
Its second power
\( {\bf B}\,{\bf B} = {\bf B}^2 = -
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = -{\bf I} =
\begin{bmatrix} -1 & \phantom{-}0 \\ \phantom{-}0 & -1 \end{bmatrix} \) is
the negative identity matrix. Next, we calculate its third power
\( {\bf B}\,{\bf B}\,{\bf B} = {\bf B}^3 = - {\bf B} ,
\) and finally the fourth power of the matrix
B,
which is the identity matrix:
\( {\bf B}^4 = {\bf I} . \)
B.B.B.B
Out[3]= {{1, 0}, {0, 1}}
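Equivalently (a small supplementary check), MatrixPower confirms the second and third powers:
MatrixPower[B, 2] == -IdentityMatrix[2]
Out[4]= True
MatrixPower[B, 3] == -B
Out[5]= True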
▣
A matrix A is called singular if and only if its determinant is zero. Otherwise, the matrix is nonsingular or invertible (because an inverse matrix exists for such a matrix).
The Cayley--Hamilton method for a 2 × 2 matrix gives
\[
{\bf A}^{-1} = \frac{1}{\det {\bf A}} \left[ \left( \mbox{tr} {\bf A} \right) {\bf I} - {\bf A} \right] .
\]
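A symbolic Mathematica check of this formula (an illustrative sketch with a generic 2 × 2 matrix):
A = {{a, b}, {c, d}};
Simplify[Inverse[A] - (Tr[A] IdentityMatrix[2] - A)/Det[A]]   (* should vanish identically *)
{{0, 0}, {0, 0}}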
We list some basic properties of the inverse operation (a quick numerical check follows the list):
1. \( \left( {\bf A}^{-1} \right)^{-1} = {\bf A} . \)
2. \( \left( c\,{\bf A} \right)^{-1} = c^{-1} \,{\bf A}^{-1} \) for nonzero scalar c.
3. \( \left( {\bf A}^{\mathrm T} \right)^{-1} = \left( {\bf A}^{-1} \right)^{\mathrm T} . \)
4. \( \left( {\bf A}\, {\bf B} \right)^{-1} = {\bf B}^{-1} {\bf A}^{-1} \) for invertible matrices A and B of the same size.
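A spot-check of properties 3 and 4 in Mathematica (an illustrative sketch with arbitrarily chosen invertible integer matrices):
A = {{1, 2}, {3, 5}}; B = {{2, 1}, {7, 4}};
Inverse[Transpose[A]] == Transpose[Inverse[A]]
True
Inverse[A.B] == Inverse[B].Inverse[A]
True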
Theorem:
For a square matrix A, the homogeneous equation A x = 0 has a nontrivial (nonzero) solution if and only if the matrix A is singular, that is, if and only if its determinant is zero.
If A is an invertible square matrix (its determinant is not zero), then we can multiply both sides of the equation A x = 0 by the inverse matrix \( {\bf A}^{-1} \) to obtain
\[
{\bf A}^{-1} {\bf A}\,{\bf x} = {\bf x} = {\bf A}^{-1} {\bf 0} \qquad \Longrightarrow \qquad {\bf x} = {\bf 0} .
\]
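Conversely (an illustrative sketch), a singular matrix has a nontrivial null space, which Mathematica returns as a basis:
A = {{1, 2}, {2, 4}};  (* rows are proportional, so the matrix is singular *)
Det[A]
0
NullSpace[A]
{{-2, 1}}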
A square matrix whose transpose is equal to its inverse is called an
orthogonal matrix;
that is,
A is orthogonal if
\( {\bf A}^{\mathrm T} = {\bf A}^{-1} . \)
Example 3:
In three-dimensional space, consider the rotational matrix
\[
{\bf A} = \begin{bmatrix}
\cos\theta & 0& -\sin\theta \\
0&1& 0 \\
\sin\theta & 0& \phantom{-}\cos\theta \end{bmatrix} ,
\]
where θ is any angle. This matrix describes a rotation in the \( (x_1 , x_3) \) plane in ℝ³. We check its properties with Mathematica:
A = {{Cos[theta], 0,-Sin[theta]}, {0,1,0},{Sin[theta],0,Cos[theta]}};
Simplify[Inverse[A]]
{{Cos[theta], 0, Sin[theta]}, {0, 1, 0}, {-Sin[theta], 0, Cos[theta]}}
Simplify[Transpose[A] - Inverse[A]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
\[
{\bf A}^{-1} = {\bf A}^{\textrm T} = \begin{bmatrix}
\phantom{-}\cos\theta & 0& \sin\theta \\
0&1& 0 \\
-\sin\theta & 0 & \cos\theta \end{bmatrix} .
\]
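As a supplementary check (using the matrix A entered above), the product of A with its transpose simplifies to the identity, and the determinant is 1, as expected for a rotation:
Simplify[A.Transpose[A]] == IdentityMatrix[3]
True
Simplify[Det[A]]
1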
For any unit vector
v, we define the
reflection matrix:
\[
{\bf R} = {\bf I} - 2 {\bf v}\,{\bf v}^{\textrm T} .
\]
Upon choosing
v = [1, 2, −2]/3, we get
\[
{\bf v}\,{\bf v}^{\textrm T} = \frac{1}{9} \begin{bmatrix}
\phantom{-}1& \phantom{-}2&-2 \\ \phantom{-}2& \phantom{-}4 &-4 \\ -2&-4& \phantom{-}4 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf R} = \frac{1}{9} \begin{bmatrix}
\phantom{-}7&-4&4 \\ -4&\phantom{-}1& 8 \\ \phantom{-}4&\phantom{-}8& 1
\end{bmatrix} .
\]
Its inverse is
\[
{\bf R}^{-1} = {\bf R}^{\textrm T} = \frac{1}{9} \begin{bmatrix}
\phantom{-}7& -4& \phantom{-}4 \\ -4& \phantom{-}1 &\phantom{-}8 \\ \phantom{-}4& \phantom{-}8& \phantom{-}1 \end{bmatrix} .
\]
R = {{7, -4, 4}, {-4, 1, 8}, {4, 8, 1}}/9 ;
Inverse[R]
{{7/9, -(4/9), 4/9}, {-(4/9), 1/9, 8/9}, {4/9, 8/9, 1/9}}
Inverse[R] - Transpose[R]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
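Since reflecting twice returns every vector to itself, R is also an involution, which we can confirm:
R.R == IdentityMatrix[3]
True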
▣