Matrix Norms

The set ℳm,n of all m × n matrices under the field of either real or complex numbers is a vector space of dimension m  · n. In order to determine how close two matrices are, and in order to define the convergence of sequences of matrices, a special concept of matrix norm is employed, with notation \( \| {\bf A} \| . \) A norm is a function from a real or complex vector space to the nonnegative real numbers that satisfies the following conditions:

  • Positivity:     ‖A‖ ≥ 0,     ‖A‖ = 0 iff A = 0.
  • Homogeneity:     ‖kA‖ = |k| ‖A‖ for arbitrary scalar k.
  • Triangle inequality:     ‖A + B‖ ≤ ‖A‖ + ‖B‖.
Since the set of all matrices admits the operation of multiplication in addition to the basic operation of addition (which is included in the definition of vector spaces), it is natural to require that matrix norm satisfies the special property:
  • Submultiplicativity:     ‖A · B‖ ≤ ‖A‖ · ‖B‖.
Once a norm is defined, the most natural way to measure the distance between two matrices A and B is d(A, B) = ‖A − B‖ = ‖B − A‖. However, not every distance function has a corresponding norm. For example, a trivial distance that has no equivalent norm is d(A, A) = 0 and d(A, B) = 1 if A ≠ B. The norm of a matrix may be thought of as its magnitude or length because it is a nonnegative number. Their definitions are summarized below for an \( m \times n \) matrix A, to which corresponds a self-adjoint (m+n)×(m+n) matrix B:

\[ {\bf A} = \left[ \begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{array} \right] \qquad \Longrightarrow \qquad {\bf B} = \begin{bmatrix} {\bf 0} & {\bf A}^{\ast} \\ {\bf A} & {\bf 0} \end{bmatrix} . \]
Here A* denotes the adjoint matrix: \( {\bf A}^{\ast} = \overline{{\bf A}^{\mathrm T}} = \overline{\bf A}^{\mathrm T} . \)
For a rectangular m-by-n matrix A and given norms \( \| \ \| \) in \( \mathbb{R}^n \mbox{ and } \mathbb{R}^m , \) the norm of A is defined as follows:
\begin{equation} \label{EqBasic.2} \| {\bf A} \| = \sup_{{\bf x} \ne {\bf 0}} \ \dfrac{\| {\bf A}\,{\bf x} \|_m}{\| {\bf x} \|_n} = \sup_{\| {\bf x} \| = 1} \ \| {\bf A}\,{\bf x} \| . \end{equation}
This matrix norm is called the operator norm or induced norm.
The term "induced" refers to the fact that a norm for vectors such as A x and x is what enables the definition above of a matrix norm. This definition is not computationally friendly, so we use other options. The most important norms are as follows.

The operator norm corresponding to the p-norm for vectors, p ≥ 1, is:

\begin{equation} \label{EqBasic.3} \| {\bf A} \|_{p,q} = \sup_{{\bf x} \ne 0} \, \frac{\| {\bf A}\,{\bf x} \|_q}{\| {\bf x} \|_p} = \sup_{\| {\bf x} \|_p =1} \, \| {\bf A}\,{\bf x} \|_q , \end{equation}
where \( \| {\bf x} \|_p = \left( |x_1|^p + |x_2|^p + \cdots + |x_n|^p \right)^{1/p} .\)
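The supremum in the induced-norm definition can be approximated numerically by sweeping over unit vectors. The following Python sketch (independent of Mathematica) estimates the induced 2-norm of an arbitrary illustrative 2×2 matrix this way and compares it with the exact largest singular value:

```python
import math

# Sketch: approximate the induced 2-norm of a sample 2-by-2 matrix by sweeping
# over unit vectors x = (cos t, sin t) and maximizing ||A x||_2.
# The matrix below is an arbitrary illustrative choice, not from the text.
A = [[1.0, 2.0],
     [3.0, 4.0]]

def induced_2norm_estimate(A, steps=20000):
    best = 0.0
    for k in range(steps):
        t = math.pi * k / steps          # half the circle suffices by symmetry
        x = (math.cos(t), math.sin(t))
        ax = (A[0][0]*x[0] + A[0][1]*x[1],
              A[1][0]*x[0] + A[1][1]*x[1])
        best = max(best, math.hypot(ax[0], ax[1]))
    return best

# Exact value: the largest singular value, i.e. the square root of the largest
# eigenvalue of A^T A; here A^T A = [[10, 14], [14, 20]], eigenvalues 15 ± sqrt(221).
exact = math.sqrt(15 + math.sqrt(221))
approx = induced_2norm_estimate(A)
print(approx, exact)   # both about 5.46499
```

The sweep always produces a lower bound for the supremum, so the estimate approaches the exact norm from below as the grid is refined.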

The 1-norm (commonly known as the maximum absolute column sum norm) of a matrix A may be computed as

\begin{equation} \label{EqBasic.4} \| {\bf A} \|_1 = \max_{1 \le j \le n} \,\sum_{i=1}^m | a_{i,j} | . \end{equation}

The infinity norm (∞-norm) of a matrix A may be computed as

\begin{equation} \label{EqBasic.5} \| {\bf A} \|_{\infty} = \| {\bf A}^{\ast} \|_{1} = \max_{1 \le i \le m} \,\sum_{j=1}^n | a_{i,j} | , \end{equation}
which is simply the maximum absolute row sum of the matrix.
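Both formulas are one-liners in code. As a quick cross-check in Python (independent of Mathematica), applied to the 2×3 matrix worked out by hand in Example 3 below:

```python
# Sketch: the maximum-absolute-column-sum (1-norm) and maximum-absolute-row-sum
# (infinity-norm) formulas, applied to the 2-by-3 matrix of Example 3 below.
A = [[1, -7, 4],
     [-2, -3, 1]]

def norm_1(A):
    # maximum over columns j of sum_i |a_ij|
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norm_inf(A):
    # maximum over rows i of sum_j |a_ij|
    return max(sum(abs(x) for x in row) for row in A)

print(norm_1(A), norm_inf(A))   # 10 12
```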

In the special case of p = 2 we get the Euclidean norm (which is equal to the largest singular value of a matrix)

\begin{equation} \label{EqBasic.6} \| {\bf A} \|_2 = \sup_{\bf x} \left\{ \| {\bf A}\, {\bf x} \|_2 \, : \quad \mbox{with} \quad \| {\bf x} \|_2 =1 \right\} = \sigma_{\max} \left( {\bf A} \right) = \sqrt{\rho \left( {\bf A}^{\ast} {\bf A} \right)} , \end{equation}
where σmax(A) represents the largest singular value of matrix A.

The Frobenius norm (a non-induced norm):

\begin{equation} \label{EqBasic.7} \| {\bf A} \|_F = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{i,j} |^2 \right)^{1/2} = \left( \mbox{tr}\, {\bf A} \,{\bf A}^{\ast} \right)^{1/2} = \left( \mbox{tr}\, {\bf A}^{\ast} {\bf A} \right)^{1/2} . \end{equation}
The Euclidean norm and the Frobenius norm are related via the inequality:
\[ \| {\bf A} \|_2 = \sigma_{\max}\left( {\bf A} \right) \le \| {\bf A} \|_F = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{i,j} |^2 \right)^{1/2} = \left( \mbox{tr}\, {\bf A} \,{\bf A}^{\ast} \right)^{1/2} . \]
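The two expressions for the Frobenius norm (entry sum versus trace) are easy to check numerically. A Python sketch for the 2×3 matrix worked out in Example 3 below, where the text gives \( \| {\bf A} \|_F = 4\sqrt{5} \) and \( \| {\bf A} \|_2 = \sqrt{40 + \sqrt{1205}} \):

```python
import math

# Sketch: the Frobenius norm computed two equivalent ways for the 2-by-3
# matrix of Example 3 below: directly from the entries, and as sqrt(tr(A A^T)).
A = [[1, -7, 4],
     [-2, -3, 1]]

def frobenius_entries(A):
    return math.sqrt(sum(x * x for row in A for x in row))

def frobenius_trace(A):
    m, n = len(A), len(A[0])
    # M = A A^T (A is real, so A^* = A^T); the trace is the sum of M's diagonal
    M = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(m)]
         for i in range(m)]
    return math.sqrt(sum(M[i][i] for i in range(m)))

print(frobenius_entries(A), frobenius_trace(A))   # both 4*sqrt(5) ≈ 8.9443
```

Comparing with the exact Euclidean norm \( \sqrt{40+\sqrt{1205}} \approx 8.6437 \) confirms the displayed inequality \( \| {\bf A} \|_2 \le \| {\bf A} \|_F \) for this matrix.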

There is also another function that provides the infimum of all matrix norms of a square matrix: \( \rho ({\bf A}) \le \|{\bf A}\| . \)

The spectral radius of a square matrix A is
\begin{equation} \label{EqBasic.8} \rho ({\bf A}) = \lim_{k\to \infty} \| {\bf A}^k \|^{1/k} = \max \left\{ |\lambda | : \ \lambda \mbox{ is eigenvalue of }\ {\bf A} \right\} . \end{equation}
Theorem 2: For any matrix norm ‖·‖ on the space of square matrices and for any square matrix A, we have
\[ \rho\left( {\bf A} \right) \le \| {\bf A} \| . \]
For any positive integer k, we have
\begin{equation} \label{EqBasic.9} \rho ({\bf A}) \le \| {\bf A}^k \|^{1/k} . \end{equation}
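The limit characterization of the spectral radius suggests a simple numerical scheme: repeatedly apply A to a vector and track the growth factor (power iteration). A Python sketch for the 3×3 matrix used in Example 4 below, whose nonzero eigenvalues are \( \tfrac{3}{2}(5 \pm \sqrt{33}) \):

```python
import math

# Sketch: estimating the spectral radius of a sample square matrix by power
# iteration: repeatedly apply A to a unit vector and track the growth factor.
# For this matrix rho(A) = (3/2)(5 + sqrt(33)) ≈ 16.1168.
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

def spectral_radius_estimate(A, iters=200):
    x = [1.0, 1.0, 1.0]
    s = math.sqrt(sum(v * v for v in x))
    x = [v / s for v in x]                 # start from a unit vector
    r = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
        r = math.sqrt(sum(v * v for v in y))   # ||A x|| with ||x|| = 1
        x = [v / r for v in y]                 # renormalize
    return r

rho = spectral_radius_estimate(A)
print(rho)   # ≈ 16.1168
```

Note that the estimate stays below ‖A‖∞ = 24 for this matrix, illustrating ρ(A) ≤ ‖A‖.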

Some properties of matrix norms are presented in the following theorem.

Theorem 3: Let A and B be \( m \times n \) matrices and let \( k \) be a scalar.

  • \( \| {\bf A} \| \ge 0 \) for any square matrix A.
  • \( \| {\bf A} \| =0 \) if and only if the matrix A is zero: \( {\bf A} = {\bf 0}. \)
  • \( \| k\,{\bf A} \| = |k| \, \| {\bf A} \| \) for any scalar \( k. \)
  • \( \| {\bf A} + {\bf B}\| \le \| {\bf A} \| + \| {\bf B} \| .\)
  • \( \left\vert \| {\bf A} \| - \| {\bf B}\| \right\vert \le \| {\bf A} - {\bf B} \| .\)
  • \( \| {\bf A} \, {\bf B}\| \le \| {\bf A} \| \, \| {\bf B} \| . \)

All these norms are equivalent; for example,

\[ \| {\bf A} \|_2^2 \le \| {\bf A}^{\ast} \|_{\infty} \cdot \| {\bf A} \|_{\infty} = \| {\bf A} \|_{1} \cdot \| {\bf A} \|_{\infty} , \]
where A* is the adjoint matrix to A (transposed and complex conjugate).

Theorem 4: Let ‖ ‖ be any matrix norm and let B be a matrix such that ‖B‖ < 1. Then the matrix I + B is invertible and
\[ \| \left( {\bf I} + {\bf B} \right)^{-1} \| \le \frac{1}{1 - \| {\bf B} \|} . \]
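Theorem 4 can be checked numerically with any matrix norm. The following Python sketch uses the infinity norm and an arbitrary sample 2×2 matrix B with ‖B‖∞ = 0.5, inverting I + B explicitly:

```python
# Sketch: numerically checking Theorem 4 with the infinity norm and a sample
# 2-by-2 matrix B with ||B|| < 1. The entries of B are an arbitrary choice.
B = [[0.2, 0.3],
     [0.1, 0.4]]

def norm_inf(M):
    return max(sum(abs(x) for x in row) for row in M)

# I + B and its explicit 2-by-2 inverse
M = [[1 + B[0][0], B[0][1]],
     [B[1][0], 1 + B[1][1]]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[ M[1][1] / det, -M[0][1] / det],
        [-M[1][0] / det,  M[0][0] / det]]

b = norm_inf(B)        # 0.5 < 1, so I + B is invertible
lhs = norm_inf(Minv)   # about 1.0303
rhs = 1 / (1 - b)      # 2.0
print(lhs <= rhs)      # True
```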
Theorem 5: Let ‖ ‖ be any matrix norm, and suppose the matrix I + B is singular, where I is the identity matrix. Then ‖B‖ ≥ 1.

Mathematica has a special command for evaluating norms:
Norm[A] = Norm[A,2] for evaluating the Euclidean norm of the matrix A;
Norm[A,1] for evaluating the 1-norm;
Norm[A, Infinity] for evaluating the ∞-norm;
Norm[A, "Frobenius"] for evaluating the Frobenius norm.

A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
Norm[A]
Sqrt[3/2 (95 + Sqrt[8881])]
N[%]
16.8481

 

Example 3: Evaluate the norms of the matrix \( {\bf A} = \left[ \begin{array}{ccc} \phantom{-}1 & -7 & 4 \\ -2 & -3 & 1\end{array} \right] . \)

The absolute column sums of A are \( 1 + | -2 | =3 \) , \( |-7| + | -3 | =10 , \) and \( 4+1 =5 . \) The largest of these is 10 and therefore \( \| {\bf A} \|_1 = 10 . \)

Norm[A, 1]
10

The absolute row sums of A are \( 1 + | -7 | + 4 =12 \) and \( | -2 | + |-3| + 1 = 6 ; \) therefore, \( \| {\bf A} \|_{\infty} = 12 . \)

Norm[Transpose[A], 1]
12

The Euclidean norm of A is the largest singular value. So we calculate

\[ {\bf S} = {\bf A}^{\ast} {\bf A} = \begin{bmatrix} 5&-1&2 \\ -1&58&-31 \\ 2&-31&17 \end{bmatrix} , \qquad \mbox{tr} \left( {\bf S} \right) = 80. \]
Its eigenvalues are
Eigenvalues[Transpose[A].A]
{40 + Sqrt[1205], 40 - Sqrt[1205], 0}
Taking the square root of the largest one, we obtain the Euclidean norm of matrix A:
N[Sqrt[40 + Sqrt[1205]]]
8.64367
Mathematica also knows how to find the Euclidean norm:
Norm[A, 2]
Sqrt[40 + Sqrt[1205]]
We compare it with the Frobenius norm:
Norm[A, "Frobenius"]
4 Sqrt[5]
N[%]
8.94427
Norm[A]
Sqrt[40 + Sqrt[1205]]
N[%]
8.64367
As a further check, we evaluate the product
\[ {\bf M} = {\bf A}\,{\bf A}^{\ast} = \left[ \begin{array}{ccc} \phantom{-}1 & -7 & 4\\ -2 & -3 & 1 \end{array} \right] \, \left[ \begin{array}{cc} 1 & -2 \\ -7 & -3 \\ 4&-1 \end{array} \right] = \left[ \begin{array}{cc} 66 & 23 \\ 23& 14 \end{array} \right] , \qquad \mbox{tr} \left( {\bf M} \right) = 80. \]

This matrix \( {\bf A}\,{\bf A}^{\ast} \) has two eigenvalues \( 40 \pm \sqrt{1205} . \) Hence, the Euclidean norm of the matrix A is \( \sqrt{40 + \sqrt{1205}} \approx 8.64367 . \)

Therefore,
\[ \| {\bf A} \|_2 = 8.64367 < \| {\bf A} \|_F = 8.94427 < \| {\bf A} \|_1 = 10 < \| {\bf A} \|_{\infty} = 12 . \]
   ■
Example 4: Let us consider the matrix
\[ {\bf A} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} . \]
Its conjugate transpose (adjoint) matrix is
\[ {\bf A}^{\ast} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix}^{\mathrm T} = \begin{bmatrix} 1&4&7 \\ 2&5&8 \\ 3&6&9 \end{bmatrix} . \]
So
\[ {\bf S} = {\bf A}^{\ast} {\bf A} = \begin{bmatrix} 66&78&90 \\ 78&93&108 \\ 90&108&126 \end{bmatrix} \]
We check with Mathematica:
A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
S = Transpose[A].A
Their eigenvalues are
Eigenvalues[A]
{3/2 (5 + Sqrt[33]), 3/2 (5 - Sqrt[33]), 0}
Eigenvalues[S]
{3/2 (95 + Sqrt[8881]), 3/2 (95 - Sqrt[8881]), 0}
N[%]
{283.859, 1.14141, 0.}
Therefore, the largest singular value of A is
\[ \sigma = \sqrt{\frac{3}{2} \left( 95 + \sqrt{8881} \right)} \approx 16.8481. \]
We also check the opposite product
Eigenvalues[A.Transpose[A]]
{3/2 (95 + Sqrt[8881]), 3/2 (95 - Sqrt[8881]), 0}
\[ {\bf M} = {\bf A}\, {\bf A}^{\ast} = \begin{bmatrix} 14&32&50 \\ 32&77&122 \\ 50&122&194 \end{bmatrix} \]
These matrices S and M have the same eigenvalues. Therefore, we found the Euclidean (operator) norm of A to be approximately 16.8481. Mathematica knows this norm:
Norm[A]
Sqrt[3/2 (95 + Sqrt[8881])]
The spectral radius of A is the largest eigenvalue:
\[ \rho ({\bf A}) = \frac{3}{2} \left( 5 + \sqrt{33} \right) \approx 16.1168 , \]
which is slightly less than its operator (Euclidean) norm.

The Frobenius norm of matrix \( {\bf A} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} \) is

\[ \| {\bf A} \|_F = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{i,j} |^2 \right)^{1/2} = \left( 1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2 + 7^2 +8^2 +9^2 \right)^{1/2} = \sqrt{285} = \left( \mbox{tr}\, {\bf A} \,{\bf A}^{\ast} \right)^{1/2} . \]
A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
Tr[A.Transpose[A]]
285
Sum[k^2, {k, 1, 9}]
285
N[Sqrt[285]]
16.8819
Mathematica has a dedicated command:
Norm[A, "Frobenius"]
16.8819

To find the 1-norm of A, we add the absolute values of the entries in every column; the last column has the largest sum, so

\[ \| {\bf A} \|_1 = 3+6+9=18. \]
If we add the entries in every row, then the last row gives the largest sum and we get
\[ \| {\bf A} \|_{\infty} = 7+8+9=24. \]
   ■

Three kinds of matrix norms are in common use:
  • The operator norms are norms induced by a matrix considered as a linear operator from ℝm into ℝn for real scalars or ℂm into ℂn for complex scalars.
  • The entrywise norms treat an m-by-n matrix as a vector of length m · n. Therefore, these norms are directly related to norms in a vector space.
  • The Schatten norms are based on the singular values σi or eigenvalues of the given matrix.
The norm notation ‖ · ‖ is heavily overloaded, so readers must disambiguate norms by paying close attention to the type and context of each norm's argument. Therefore, the main norm notation ‖ · ‖ usually carries indices, depending on the author's preference. Moreover, the reader should be aware that different kinds of norms may lead to the same definition (for example, the Euclidean norm is actually the spectral norm in Schatten's sense).

For a rectangular m×n matrix A, the following operator norms are standard.
  • ‖A‖₁ is the maximum absolute column sum of the matrix:
    \begin{equation} \label{EqMatrix.4} \| {\bf A} \|_1 = \max_{1 \le j \le n} \ \sum_{i=1}^m |a_{ij}| . \end{equation}
  • ‖A‖₂ is the Euclidean norm, the greatest singular value of A, which is the square root of the greatest eigenvalue of \( {\bf A}^{\ast} {\bf A} : \)
    \begin{equation} \label{EqMatrix.5} \| {\bf A} \|_2 = \sigma_{\max} \left( {\bf A} \right) = \sqrt{\lambda_{\max} \left( {\bf A}^{\ast} {\bf A} \right)} \end{equation}
    where \( \lambda_{\max} \left( {\bf A}^{\ast} {\bf A} \right) \) is the maximal eigenvalue (the spectral radius) of \( {\bf A}^{\ast} {\bf A} , \) and \( \sigma_{\max} \left( {\bf A} \right) \) is the maximal singular value of A.
  • ‖A‖∞ is the maximum absolute row sum:
    \begin{equation} \label{EqMatrix.6} \| {\bf A} \|_{\infty} = \| {\bf A}^{\ast} \|_{1} = \max_{1 \le i \le m} \, \sum_{j=1}^n |a_{ij} | . \end{equation}
Mathematica has dedicated commands for evaluating the operator norms:
  • Norm[A, 1] for evaluating ‖ · ‖₁;
  • Norm[A] = Norm[A, 2] for evaluating the Euclidean norm;
  • Norm[A, Infinity] for evaluating ‖ · ‖∞.
Note that to a rectangular m×n matrix A corresponds a self-adjoint square (m+n)×(m+n) matrix
\[ {\bf B} = \begin{bmatrix} {\bf 0} & {\bf A}^{\ast} \\ {\bf A} & {\bf 0} \end{bmatrix} . \]

Theorem 1: For arbitrary square n × n matrices A and B, any norm ‖ · ‖, and a scalar k , we have

\[ \begin{split} \| {\bf A}\,{\bf x} \| &\le \| {\bf A} \| \,\| {\bf x} \| , \\ \| {\bf A}\,{\bf B} \| &\le \| {\bf A} \|\,\|{\bf B} \| , \\ \| {\bf A} + {\bf B} \| &\le \| {\bf A}\| + \| {\bf B} \| , \\ \left\vert \| {\bf A} \| - \| {\bf B}\| \right\vert &\le \| {\bf A} - {\bf B} \| , \\ \| k\,{\bf A} \| &= |k|\, \| {\bf A} \| . \end{split} \]

The induced matrix norms constitute a large and important part of possible matrix norms, but many non-induced norms are also known. The following very important non-induced norm is named after Ferdinand Georg Frobenius (1849--1917).

The Frobenius norm \( \| \cdot \|_F : \mathbb{C}^{m\times n} \to \mathbb{R}_{+} \) is defined for a rectangular m-by-n matrix A by
\begin{equation} \label{EqMatrix.7} \| {\bf A} \|_F = \left( \sum_{i=1}^m \,\sum_{j=1}^n |a_{ij} |^2 \right)^{1/2} = \left( \mbox{tr}\, {\bf A} \,{\bf A}^{\ast} \right)^{1/2} = \left( \mbox{tr}\, {\bf A}^{\ast} {\bf A} \right)^{1/2} , \end{equation}
where A* is the adjoint matrix to A. Recall that the trace function returns the sum of diagonal entries of a square matrix.
Mathematica has a dedicated command for evaluating the Frobenius norm:
Norm[A, "Frobenius"]

One can think of the Frobenius norm as taking the columns of the matrix, stacking them on top of each other to create a vector of length m · n, and then taking the vector 2-norm of the result. The Frobenius norm is unitarily invariant, i.e., it is conserved or invariant under a unitary transformation (such as a rotation). For a norm to be unitarily invariant, it should depend solely upon the singular values of the matrix. So if B = R*A R with a unitary (orthogonal if real) matrix R satisfying R* = R-1, then

\[ \| {\bf B} \|_F^2 = \mbox{tr} \left( {\bf B}^{\ast} {\bf B} \right) = \mbox{tr} \left[ \left( {\bf R}^{\ast} {\bf A\,R} \right)^{\ast} \left( {\bf R}^{\ast} {\bf A\,R} \right) \right] = \mbox{tr} \left( {\bf R}^{\ast} {\bf A}^{\ast} {\bf A}\,{\bf R} \right) = \mbox{tr} \left( {\bf A}^{\ast} {\bf A} \right) = \| {\bf A} \|_F^2 , \]
because \( {\bf R}\,{\bf R}^{\ast} = {\bf I} \) and the trace is invariant under cyclic permutations.
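This invariance is easy to observe numerically. A Python sketch with an arbitrary sample 2×2 matrix A and a plane rotation R (a real unitary matrix):

```python
import math

# Sketch: the Frobenius norm is unchanged under B = R^T A R for a rotation
# matrix R. Both the matrix A and the angle are arbitrary illustrative choices.
theta = 0.7
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
A = [[1.0, 2.0],
     [3.0, 4.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def frobenius(X):
    return math.sqrt(sum(x * x for row in X for x in row))

B = matmul(matmul(transpose(R), A), R)
print(frobenius(A), frobenius(B))   # both sqrt(30) ≈ 5.4772
```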

Besides the Frobenius norm, other non-induced norms treat an m-by-n matrix as a vector of length m · n. For example, the following "entrywise" norms are also widely used; note the unfortunate but unavoidable overloading of the notation.

  • \( \| {\bf A} \|_1 \) is the absolute sum of all elements of A:
    \begin{equation} \label{EqMatrix.7a} \| {\bf A} \|_1 = \sum_{i=1}^m \,\sum_{j=1}^n |a_{ij} | . \end{equation}
  • \( \| {\bf A} \|_{\max} \) is the maximum norm, the maximum absolute value among all mn elements of A:
    \begin{equation} \label{EqMatrix.9a} \| {\bf A} \|_{\max} = \max_{i,j} \,|a_{ij} | . \end{equation}
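Both entrywise norms reduce to one line of code. A Python sketch for the 3×4 matrix worked out by hand in Example 5 below:

```python
# Sketch: the entrywise 1-norm and the max norm for the sample 3-by-4 matrix
# that is worked out by hand in Example 5 below.
A = [[4, 1, 4, 3],
     [0, -1, 3, 2],
     [1, 5, 4, -1]]

entrywise_1 = sum(abs(x) for row in A for x in row)   # 29
max_norm = max(abs(x) for row in A for x in row)      # 5
print(entrywise_1, max_norm)
```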

Robert Schatten (1911--1977) suggested defining a matrix norm based on the singular values σi or eigenvalues; therefore, these norms are named after him. We present three of them, with notation that overlaps the previously used symbols. We denote by r the rank of the rectangular m×n matrix A.

  • \( \| {\bf A} \|_1 \) is the trace norm:
    \[ \| {\bf A} \|_1 = \sum_{i=1}^r \sigma_i = \sum_{i=1}^r \sqrt{\lambda_i} \left[ = \mbox{trace} \left( \sqrt{{\bf A}^{\ast} {\bf A}} \right) \right] . \]
    The bracketed formula is included for exposition only, because a square root of a matrix may not exist, and even when it exists, it need not be unique.
  • \( \| {\bf A} \|_2 = \| {\bf A} \|_F \) is the Frobenius norm:
    \[ \| {\bf A} \|_2 = \sqrt{ \sum_{i=1}^r \sigma_i^2 } = \sqrt{ \sum_{i=1}^r \lambda_i } = \| {\bf A} \|_F . \]
  • \( \| {\bf A} \|_{\infty} \) is the spectral norm (the square root of the spectral radius of A*A):
    \[ \| {\bf A} \|_{\infty} = \max \left\{ \sigma_1 , \ldots , \sigma_r \right\} = \sigma_{\max} \left( {\bf A} \right) = \sqrt{\lambda_{\max} \left( {\bf A}^{\ast} {\bf A} \right)} . \]
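For a 2×2 matrix all three Schatten norms can be computed in closed form, since the eigenvalues of the symmetric 2×2 matrix A*A follow from the quadratic formula. A Python sketch with an arbitrary sample matrix:

```python
import math

# Sketch: the three Schatten norms of a sample 2-by-2 matrix, using the
# closed-form eigenvalues of the symmetric matrix A^T A.
A = [[1.0, 2.0],
     [3.0, 4.0]]

# S = A^T A = [[a, b], [b, c]]
a = A[0][0]**2 + A[1][0]**2             # 10
c = A[0][1]**2 + A[1][1]**2             # 20
b = A[0][0]*A[0][1] + A[1][0]*A[1][1]   # 14

# eigenvalues of S, hence squared singular values of A
disc = math.sqrt((a - c)**2 + 4*b*b)
lam1, lam2 = (a + c + disc) / 2, (a + c - disc) / 2
s1, s2 = math.sqrt(lam1), math.sqrt(lam2)

trace_norm = s1 + s2                     # Schatten 1-norm (trace norm)
frob_norm = math.sqrt(s1**2 + s2**2)     # Schatten 2-norm = Frobenius norm
spectral_norm = s1                       # Schatten infinity-norm
print(trace_norm, frob_norm, spectral_norm)
```

As a sanity check, s1·s2 equals |det A| = 2 for this matrix, and the Frobenius value agrees with the entrywise formula √30.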

Theorem 2: For m×n matrix A of rank r, the following inequalities hold

  • \[ \| {\bf A} \|_2^2 \le \| {\bf A} \|_1 \| {\bf A} \|_{\infty} . \]
  • \[ \| {\bf A} \|_2 \le \| {\bf A} \|_F \le \sqrt{r} \, \| {\bf A} \|_{2} . \]
  • \[ \| {\bf A} \|_{\max} \le \| {\bf A} \|_2 \le \sqrt{mn} \, \| {\bf A} \|_{\max} .\]
  • \[ \frac{1}{\sqrt{n}} \,\| {\bf A} \|_{\infty} \le \| {\bf A} \|_2 \le \sqrt{m} \, \| {\bf A} \|_{\infty} . \]
  • \[ \frac{1}{\sqrt{m}} \,\| {\bf A} \|_{1} \le \| {\bf A} \|_2 \le \sqrt{n} \, \| {\bf A} \|_{1} . \]
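These inequalities can be checked against the norms computed for the 3×4 rank-3 matrix of Example 5 below; the numeric values in this Python sketch are taken from that example:

```python
import math

# Sketch: checking the inequalities of this theorem against the norms computed
# for the 3-by-4 rank-3 matrix of Example 5 below:
#   ||A||_1 = 11, ||A||_inf = 12, ||A||_max = 5,
#   ||A||_F = 3*sqrt(11), ||A||_2 ≈ 8.30532.
m, n, r = 3, 4, 3
n1, ninf, nmax = 11.0, 12.0, 5.0
nF = 3 * math.sqrt(11)
n2 = 8.30532

assert n2**2 <= n1 * ninf
assert n2 <= nF <= math.sqrt(r) * n2
assert nmax <= n2 <= math.sqrt(m * n) * nmax
assert ninf / math.sqrt(n) <= n2 <= math.sqrt(m) * ninf
assert n1 / math.sqrt(m) <= n2 <= math.sqrt(n) * n1
print("all inequalities hold")
```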


Example 5: Consider the matrix from the previous example:
\[ {\bf A} = \begin{bmatrix} 4&\phantom{-}1&4&\phantom{-}3 \\ 0&-1&3&\phantom{-}2 \\ 1&\phantom{-}5&4&-1 \end{bmatrix} . \]
Summing down the columns of A, we find that
\begin{align*} \| {\bf A} \|_{1} &= \max_{1 \le j \le 4}\, \sum_{i=1}^3 |a_{ij}| \\ &= \max \left\{ 4+0+1, \ 1+1+5 , \ 4+3+4 , \ 3+2+1 \right\} \\ &= \max \left\{ 5, 7, 11, 6 \right\} = 11 . \end{align*}
This answer is checked with Mathematica:
A = {{4, 1, 4, 3}, {0, -1, 3, 2}, {1, 5, 4, -1}};
Norm[A, 1]
11
Now we find its infinity-norm. Summing along the rows of A, we find that
\begin{align*} \| {\bf A} \|_{\infty} &= \max_{1 \le i \le 3}\, \sum_{j=1}^4 |a_{ij}| \\ &= \max \left\{ 4+1+4+3, \ 0+1+3+2, \ 1+5+4+1 \right\} = \max \left\{ 12, 6, 11 \right\} \\ &= 12 . \end{align*}
Norm[A, Infinity]
12
The same answer is obtained when the transposed matrix is used:
A = {{4, 1, 4, 3}, {0, -1, 3, 2}, {1, 5, 4, -1}};
Norm[Transpose[A], 1]
12
For the Euclidean norm, we need to calculate the eigenvalues of the products A*A and AA*:
A4 = Transpose[A].A
A3 = A.Transpose[A]
\[ {\bf A}_4 = {\bf A}^{\ast} {\bf A} = \begin{bmatrix} 17&9&20&11 \\ 9&27&21&-4 \\ 20&21&41&14 \\ 11&-4&14&14 \end{bmatrix} , \qquad {\bf A}_3 = {\bf A}\,{\bf A}^{\ast} = \begin{bmatrix} 42&17 & 22 \\ 17&14& 5 \\ 22& 5 & 43 \end{bmatrix} . \]
Both matrices A₃ and A₄ have the same trace (sum of diagonal entries) 99, and the same largest eigenvalue:
Max[N[Eigenvalues[A4]]]
Max[N[Eigenvalues[A3]]]
68.9783
Taking the square root, we obtain the Euclidean norm:
\[ \| {\bf A} \|_2 = \sigma_{\max} \left( {\bf A} \right) = \sqrt{\lambda_{\max} \left( {\bf A}^{\ast} {\bf A} \right)} \approx 8.30532 . \]
euclid = N[Sqrt[Max[Eigenvalues[A3]]]]
8.30532017177966
We double check our answer with Mathematica by evaluating the maximum singular value first,
N[Max[SingularValueList[A]]]
8.30532
and then by finding the norm of matrix A:
N[Norm[A]]
8.30532

 

Now we turn to non-induced norms. To determine the Frobenius norm, we use its definition.
square = 0;
For[i = 1, i <= 3, i++, For[j = 1, j <= 4, j++, square = square + A[[i, j]]^2]]
Then we extract the square root:
Sqrt[square]
3 Sqrt[11]
N[%]
9.9498743710662
To calculate the Frobenius norm, we again ask Mathematica.
Norm[A, "Frobenius"]
3 Sqrt[11]
N[%]
9.94987

To determine the "entrywise" 1-norm, we add absolute values of all matrix entries:

\[ \| {\bf A} \|_1 = \sum_{i,j} \left\vert a_{i,j} \right\vert = 4+1+4+3 + 0+1+3+2 + 1+5+4+1 = 12+6+11 = 29 . \]
Sum[ Sum[Abs[A[[i, j]]], {j, 1, 4}], {i, 1, 3}]
29
Its maximum norm is
\[ \| {\bf A} \|_{\max} = \max_{i,j} \left\vert a_{i,j} \right\vert = 5 . \]
We find the spectral norm with the aid of Mathematica:
N[SingularValueList[A]]
{8.30532, 4.99188, 2.25894}
Therefore, the spectral norm is 8.30532. To check the answer, we calculate the self-adjoint matrix
\[ {\bf A}^{\ast} {\bf A} = \begin{bmatrix} 17&9&20&11 \\ 9&27&21&-4 \\ 20&21&41&14 \\ 11&-4&14&14 \end{bmatrix} = {\bf A}_4 . \]
Its eigenvalues are obtained with Mathematica:
A4 = Transpose[A].A
{{17, 9, 20, 11}, {9, 27, 21, -4}, {20, 21, 41, 14}, {11, -4, 14, 14}}
N[Eigenvalues[A4]]
{68.9783, 24.9189, 5.1028, 0.}
Sqrt[%]
{8.30532, 4.99188, 2.25894, 0.}

 

Now we calculate the Schatten norms. We start with the first one
\[ \| {\bf A} \|_1 = \sum_{i=1}^r \sigma_i = \sum_{i=1}^r \sqrt{\lambda_i} \left[ = \mbox{trace} \left( \sqrt{{\bf A}^{\ast} {\bf A}} \right) \right] \approx 15.5561 . \]
We verify this value with Mathematica:
N[Sum[SingularValueList[A][[i]], {i, 1, 3}]]
15.556136510657433
Again, we double check this answer with two approaches. First, we calculate the sum of the square roots of the eigenvalues of matrix A₄:
N[Sum[Sqrt[Eigenvalues[A4][[i]]], {i, 1, 4}]]
15.556136510657433
and then repeat with matrix A₃:
N[Sum[Sqrt[Eigenvalues[A3][[i]]], {i, 1, 3}]]
15.556136510657433
Since the sum of the eigenvalues of a square matrix is its trace, we can instead find a square root of matrix A₃ and take its trace. To determine the square root, we use Sylvester's method:
\[ \sqrt{{\bf A}_3} = \sqrt{{\bf A}\,{\bf A}^{\ast}} = \sqrt{\lambda_1}\,\frac{\left( {\bf A}_3 - \lambda_2 {\bf I} \right) \left( {\bf A}_3 - \lambda_3 {\bf I} \right)}{\left( \lambda_1 - \lambda_2 \right)\left( \lambda_1 - \lambda_3 \right)} + \sqrt{\lambda_2}\,\frac{\left( {\bf A}_3 - \lambda_1 {\bf I} \right) \left( {\bf A}_3 - \lambda_3 {\bf I} \right)}{\left( \lambda_2 - \lambda_1 \right)\left( \lambda_2 - \lambda_3 \right)} + \sqrt{\lambda_3}\,\frac{\left( {\bf A}_3 - \lambda_1 {\bf I} \right) \left( {\bf A}_3 - \lambda_2 {\bf I} \right)}{\left( \lambda_3 - \lambda_1 \right)\left( \lambda_3 - \lambda_2 \right)} , \]
where λ₁, λ₂, λ₃ are the (distinct) eigenvalues of A₃.
Then find its eigenvalues and add them. You may want to verify that the sum of absolute values of eigenvalues of 3×3 matrix A₃ is exactly the same as the corresponding sum for 4×4 matrix A₄.
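Sylvester's method is easiest to see on a 2×2 example, where the formula has only two terms. The following Python sketch uses an arbitrary symmetric sample matrix with eigenvalues 9 and 1, so the square root can be verified by squaring:

```python
import math

# Sketch: Sylvester's formula for the square root of a 2-by-2 matrix with
# distinct positive eigenvalues:
#   sqrt(M) = sqrt(l1) (M - l2 I)/(l1 - l2) + sqrt(l2) (M - l1 I)/(l2 - l1).
# The sample matrix M has eigenvalues 9 and 1.
M = [[5.0, 4.0],
     [4.0, 5.0]]
l1, l2 = 9.0, 1.0   # eigenvalues of M

def mat_add(X, Y): return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def mat_scale(s, X): return [[s * X[i][j] for j in range(2)] for i in range(2)]
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
term1 = mat_scale(math.sqrt(l1) / (l1 - l2), mat_add(M, mat_scale(-l2, I)))
term2 = mat_scale(math.sqrt(l2) / (l2 - l1), mat_add(M, mat_scale(-l1, I)))
root = mat_add(term1, term2)
print(root)                    # [[2.0, 1.0], [1.0, 2.0]]
check = mat_mul(root, root)    # squares back to M
```

The trace of the root, 4, equals the sum of the square roots of the eigenvalues (3 + 1), which is exactly how the trace norm is recovered from √A₃ above.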

The square of the Schatten 2-norm is the square of the Frobenius norm

\[ \| {\bf A} \|_2^2 = \sum_{i=1}^3 \sigma_i^2 \left( {\bf A} \right) = \sum_{i=1}^3 \lambda_i \left( {\bf A}^{\ast}{\bf A} \right) = 99 \]
because
N[Sum[Eigenvalues[A3][[i]], {i, 1, 3}]]
99
Then taking a square root, we obtain ‖A‖₂ to be
\[ \| {\bf A} \|_2 = \sqrt{99} = 3\sqrt{11} = \| {\bf A} \|_F , \]
which is the Frobenius norm. The last Schatten norm to consider is the infinity norm:
\[ \| {\bf A} \|_{\infty} = \max \left\{ \sigma_1, \sigma_2 , \sigma_3 \right\} \approx 8.30532 \]
N[SingularValueList[A][[1]]]
8.30532017177966
which is the Euclidean norm.