Numerical Analysis/Matrix norm

Definitions
The term norm is often used without additional qualification to refer to a particular type of norm, such as a matrix norm or a vector norm. Most commonly, the unqualified term refers to the flavor of vector norm technically known as the L2 norm. This norm is variously denoted $$ \left \| x \right \|_2$$, $$\left \| x \right \|$$, or $$|x| $$, and gives the length of an n-vector $$ x=(x_1, x_2, \ldots, x_n).$$ Norms provide vector spaces and their linear operators with measures of size, length, and distance that generalize the ones we use routinely in everyday life.

Induced Norm
If vector norms on $$K^m$$ and $$K^n$$ are given (where $$K$$ is the field of real or complex numbers), then one defines the corresponding induced norm or operator norm on the space of m-by-n matrices as the following maximum:
 * $$ \begin{align} \|A\| &= \max\{\|Ax\| : x\in K^n \mbox{ with }\|x\|= 1\} \\ &= \max\left\{\frac{\|Ax\|}{\|x\|} : x\in K^n \mbox{ with }x\ne 0\right\}. \end{align} $$

If m = n and one uses the same norm on the domain and the range, then the induced operator norm is a sub-multiplicative matrix norm.
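Because the induced norm is defined as a maximum over unit vectors, it can be approximated directly from the definition. The sketch below (assuming NumPy is available; the matrix is arbitrary) samples random unit vectors and compares the resulting estimate with the exact induced 2-norm, which equals the largest singular value:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # arbitrary test matrix

# Estimate ||A|| = max ||Ax|| over unit vectors x by random sampling.
best = 0.0
for _ in range(20000):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)        # normalize so that ||x|| = 1
    best = max(best, np.linalg.norm(A @ x))

exact = np.linalg.norm(A, 2)      # exact induced 2-norm (largest singular value)
print(best, exact)
```

Every sampled value of $$\|Ax\|$$ is a lower bound on $$\|A\|_2$$, and with enough samples the estimate comes close to the exact value.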

The operator norm corresponding to the p-norm for vectors is:
 * $$ \left \| A \right \| _p = \max \limits _{x \ne 0} \frac{\left \| A x\right \| _p}{\left \| x\right \| _p}. $$

In the case of $$p=1$$ and $$p=\infty$$, the norms can be computed as:
 * $$ \left \| A \right \| _1 = \max \limits _{1 \leq j \leq n} \sum _{i=1} ^m | a_{ij} |, $$    which is simply the maximum absolute column sum of the matrix.
 * $$ \left \| A \right \| _\infty = \max \limits _{1 \leq i \leq m} \sum _{j=1} ^n | a_{ij} |, $$    which is simply the maximum absolute row sum of the matrix.
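Both formulas are easy to check numerically. A minimal sketch, assuming NumPy is available (its `np.linalg.norm` implements exactly these induced norms for `ord=1` and `ord=np.inf`):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.integers(-9, 10, size=(3, 4)).astype(float)  # arbitrary test matrix

norm_1 = np.abs(A).sum(axis=0).max()    # maximum absolute column sum
norm_inf = np.abs(A).sum(axis=1).max()  # maximum absolute row sum

# These agree with NumPy's built-in induced matrix norms
assert norm_1 == np.linalg.norm(A, 1)
assert norm_inf == np.linalg.norm(A, np.inf)
```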

Theorem: Induced Norms are really norms
If $$\left \| \cdot \right \| $$ is a vector norm on $$ \mathbb R^n, $$ then $$ \left \| A \right \|= \max_{ \left \| x \right \|=1} \left \| Ax \right \| $$ is a matrix norm.

Proof:

To show $$\| A\|= \max_{\| x \|=1}  \|Ax\| $$ is a matrix norm we need to show several things.

$$\| A\|=0 $$ if and only if $$ A = 0 $$.

If $$A=0$$ then $$ \left \| Ax \right \|=0 $$ for all vectors x with $$ \left \| x \right \|=1$$ and so $$\| A\|=0 $$.

If $$\| A\|=0 $$ then $$\| Ax\|=0 $$ for every unit vector $$x$$, and hence, by scaling, for all $$x$$. Using $$x=(1,0,\ldots,0)^t$$, $$x=(0,1,0,\ldots,0)^t,\ldots,$$ and $$x=(0,\ldots,0,1)^t$$ successively shows that each column of $$A$$ is zero. Thus, $$ \left \| A \right \|=0 $$ if and only if $$ A = 0. $$

$$\| \alpha A \| =|\alpha| \| A\|$$ for all scalars $$ \alpha $$.

Using the definition of Induced norms and the properties of the vector norm we have,
 * $$\| \alpha A \| = \max \limits _{\left \| x \right \| =1} \| \alpha Ax \|= |\alpha|\max \limits _{\left \| x \right \| =1}\left \| Ax \right \| = |\alpha| \left \| A \right \|\,.$$

$$\| A + B \| \leq \| A \| +\| B \|$$

Again using the definition of Induced norms and the triangle inequality for the vector norm, we have
 * $$\| A + B \| =\max \limits _{ \| x \| =1} \| (A + B)x \| \leq \max \limits _{ \| x \| =1} ( \| Ax \| + \|  Bx\|) \leq \max \limits _{\| x\| =1} \| Ax \| + \max \limits _{ \| x \|=1}\| Bx \| = \| A \| + \| B \|\,. $$

Theorem: Induced norms are submultiplicative
All induced norms are sub-multiplicative.

Proof:

We want to show $$\| AB\|\le \| A \| \|  B \|$$. We have
 * $$ \| AB \|= \max \limits _{\| x \| =1}\|  (AB)x \|= \max \limits _{\| x \| =1} \|  A(Bx) \| \leq \max \limits _{\| x \|=1}\| A\| \| Bx\|= \| A \| \max \limits _{ \| x \|=1} \| Bx \|= \|  A\| \| B \|\,.$$
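A quick numerical spot-check of sub-multiplicativity (a sketch assuming NumPy; the matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# ||AB|| <= ||A|| ||B|| for each induced p-norm
for p in (1, 2, np.inf):
    lhs = np.linalg.norm(A @ B, p)
    rhs = np.linalg.norm(A, p) * np.linalg.norm(B, p)
    assert lhs <= rhs + 1e-12   # small slack for floating-point roundoff
```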

Derivation of A∞ formula
If $$ A= (a_{ij})$$ is an $$ {n\times n} $$ matrix, then $$ \left \| A \right \| _\infty = \max \limits _{1 \leq i \leq n} \sum _{j=1} ^n | a_{ij} |.$$

Proof:

First we show that $$ \left \| A \right \| _\infty \leq \max \limits _{1 \leq i \leq n} \sum _{j=1} ^n | a_{ij} |.$$

Let $$x$$ be an n-dimensional vector with $$ 1= \left \| x \right \| _\infty = \max \limits _{1 \leq i \leq n} | {x_i} |. $$ Since $$ Ax $$ is also an n-dimensional vector,
 * $$ \left \| Ax \right \| _\infty = \max \limits _{1 \leq i \leq n}| ({Ax})_i | = \max \limits _{1 \leq i \leq n} \left| \sum _{j=1} ^n a_{ij}x_j \right| \leq \left( \max \limits _{1 \leq i \leq n}\sum _{j=1} ^n |a_{ij}| \right) \max \limits _{1 \leq j \leq n} | {x_j} |. $$ But $$\max \limits _{1 \leq j \leq n} | x_j | = \left \| x \right \|_\infty = 1, $$ so
 * $$ \left \| Ax \right \| _\infty \leq \max \limits _{1 \leq i \leq n} \sum _{j=1} ^n | a_{ij} |$$

and we have shown the first inequality.

Now we will show the opposite inequality, that $$ \left \| A \right \| _\infty \geq \max \limits _{1 \leq i \leq n} \sum _{j=1} ^n | a_{ij} |.$$

Let $$p$$ be an index with
 * $$\sum _{j=1} ^n | a_{pj} | = \max \limits _{1 \leq i \leq n} \sum _{j=1} ^n | a_{ij} |,$$

and $$x$$ be the vector with components
 * $$ x_j = \begin{cases} 1, &  \text{if} \qquad a_{pj}\geq 0, \\ -1, &  \text{if} \qquad a_{pj}< 0.\end{cases}$$

Then $$ \| x \| _\infty = 1 $$ and $$ a_{pj}x_j = | a_{pj} |, $$ for all $$ j = 1, 2, ..., n, $$ so
 * $$ \left \| Ax \right \| _\infty = \max \limits _{1 \leq i \leq n}\left| \sum _{j=1} ^n a_{ij}x_j  \right| \geq \left|  \sum _{j=1} ^n  a_{pj}x_j \right| = \sum _{j=1} ^n  |a_{pj}| = \max \limits _{1 \leq i \leq n} \sum _{j=1} ^n | a_{ij} |$$

and we have shown the reverse inequality. Together, the two inequalities yield
 * $$\left \| A \right \| _\infty = \max \limits _{1 \leq i \leq n} \sum _{j=1} ^n | a_{ij} |\,.$$
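The proof is constructive: the sign vector $$x$$ actually attains the maximum. A sketch of the construction (assuming NumPy; the matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))       # arbitrary test matrix

row_sums = np.abs(A).sum(axis=1)
p = row_sums.argmax()                 # row with the maximum absolute row sum

# x_j = +1 if a_pj >= 0, else -1, as in the proof
x = np.where(A[p] >= 0, 1.0, -1.0)

assert np.abs(x).max() == 1.0         # ||x||_inf = 1
# ||Ax||_inf attains the maximum row sum, i.e. ||A||_inf
assert np.isclose(np.abs(A @ x).max(), row_sums.max())
```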

Example computing A∞
If
 * $$\mathbf{A} = \begin{bmatrix} 1 & 2 & -1\\ 0 &  3 & -1 \\ 5  &-1 &  1\\ \end{bmatrix},$$ find $$\|A\|_\infty$$.

Solution:


 * $$ \sum _{j=1} ^3 | a_{1j} | = |1| + |2| + |-1| =4, \qquad \sum _{j=1} ^3 | a_{2j} | = |0| + |3| + |-1| =4,$$

and
 * $$ \sum _{j=1} ^3 | a_{3j} | = |5| + |-1| + |1| =7$$

so, $$ \left \| A \right \| _\infty = \max \{ 4, 4, 7 \}= 7.$$
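The same computation in code (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1, 2, -1],
              [0, 3, -1],
              [5, -1, 1]])

row_sums = np.abs(A).sum(axis=1)      # absolute row sums: 4, 4, 7
assert list(row_sums) == [4, 4, 7]
assert row_sums.max() == 7            # ||A||_inf = max{4, 4, 7} = 7
```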

Equivalence Of Norms
Equivalence of norms is defined as follows:

For any two norms $$\|\cdot\|_\alpha$$ and $$\|\cdot\|_\beta$$ on $$K^{m \times n}$$, there exist positive numbers r and s such that $$r\left\|A\right\|_\alpha\leq\left\|A\right\|_\beta\leq s\left\|A\right\|_\alpha$$ for all matrices A in $$K^{m \times n}$$.

This is true because the vector space $$K^{m \times n}$$ has the finite dimension $$mn$$, and all norms on a finite-dimensional vector space are equivalent.

Examples of matrix norm equivalence
For matrix $$A\in\mathbb{R}^{m\times n}$$ the following inequalities hold


 * $$\|A\|_2\le\|A\|_F\le\sqrt{r}\|A\|_2$$, where $$r$$ is the rank of $$A$$
 * $$\|A\|_F \le \|A\|_{*} \le \sqrt{r} \|A\|_F$$, where $$r$$ is the rank of $$A$$
 * $$\|A\|_{\text{max}} \le \|A\|_2 \le \sqrt{mn}\|A\|_{\text{max}}$$
 * $$\frac{1}{\sqrt{n}}\|A\|_\infty\le\|A\|_2\le\sqrt{m}\|A\|_\infty$$
 * $$\frac{1}{\sqrt{m}}\|A\|_1\le\|A\|_2\le\sqrt{n}\|A\|_1.$$

Here, $$\|\cdot\|_p$$ refers to the matrix norm induced by the vector p-norm.
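These inequalities can be spot-checked numerically. A sketch assuming NumPy, where `'nuc'` gives the nuclear norm $$\|A\|_*$$ and $$\|A\|_{\max}$$ is just the largest absolute entry:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3
A = rng.standard_normal((m, n))         # arbitrary test matrix
r = np.linalg.matrix_rank(A)

two = np.linalg.norm(A, 2)              # induced 2-norm (largest singular value)
fro = np.linalg.norm(A, 'fro')          # Frobenius norm
nuc = np.linalg.norm(A, 'nuc')          # nuclear norm ||A||_*
mx = np.abs(A).max()                    # ||A||_max
norm1 = np.linalg.norm(A, 1)            # max absolute column sum
norminf = np.linalg.norm(A, np.inf)     # max absolute row sum

eps = 1e-9  # slack for floating-point roundoff
assert two <= fro + eps and fro <= np.sqrt(r) * two + eps
assert fro <= nuc + eps and nuc <= np.sqrt(r) * fro + eps
assert mx <= two + eps and two <= np.sqrt(m * n) * mx + eps
assert norminf / np.sqrt(n) <= two + eps and two <= np.sqrt(m) * norminf + eps
assert norm1 / np.sqrt(m) <= two + eps and two <= np.sqrt(n) * norm1 + eps
```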

Example
We will show some of these norm equivalences for the matrix

$$ \mathbf{A} = \begin{bmatrix} 1 & -2 & 3\\ -4 &  5 & -6 \\ 7  &-8 &  9\\ \end{bmatrix} $$

Solution:

First we compute several norms:


 * $$ \rho(A) = \max \limits _{i} |\lambda_i| \approx 16.1168, $$ where the eigenvalues of $$A$$ are approximately $$16.1168$$, $$-1.1168$$, and $$0$$,
 * $$ \| A \|_\infty= \max\{|1| + |-2|+|3|, |-4|+ |5|+ |-6|, |7| + |-8| + |9|\} = 24, $$
 * $$ \| A \|_1= \max\{|1| + |-4|+|7|, |-2|+ |5|+ |-8|, |3| + |-6| + |9|\} = 18, $$
 * $$ \| A \|_2 = \sqrt{\rho(AA^*)} \approx \sqrt{283.8585} \approx 16.8481, $$
 * $$\sqrt{r}\|A\|_2 = \sqrt{r} \sqrt{\rho(AA^*)} \approx \sqrt{2} \, (16.8481) \approx 23.8268, $$ where $$r$$ is the rank of the matrix. (This matrix is singular, since its determinant is zero, so $$r = 2$$.)
 * $$\| A \|_F = \sqrt{|1|^2 +|-2|^2+|3|^2+|-4|^2+|5|^2+|-6|^2+|7|^2+|-8|^2+|9|^2}= \sqrt{285} \approx 16.8819,$$
 * $$\| A \|_{\max} = \max\{|1|,|-2|,|3|,|-4|,|5|,|-6|,|7|,|-8|,|9|\} = 9.$$

We can then verify the ordering of these quantities for this particular matrix,
 * $$ \left \| A \right \|_{\max} < \rho(A) < \left \| A \right \|_2 < \left \| A \right \|_F < \left \| A \right \|_1 < \sqrt{r}\|A\|_2 < \left \| A \right \|_\infty, $$

with our numbers
 * $$9 < 16.1168 < 16.8481 < 16.8819 < 18 < 23.8268 < 24,$$

and the norm equivalence
 * $$\|A\|_2\le\|A\|_F\le\sqrt{r}\|A\|_2$$

with our numbers
 * $$16.8481 \le 16.8819 \le 23.8268.$$
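The worked numbers above can be reproduced numerically. A sketch assuming NumPy; note that `np.linalg.matrix_rank` reports rank 2 for this matrix, since its determinant is zero:

```python
import numpy as np

A = np.array([[1., -2., 3.],
              [-4., 5., -6.],
              [7., -8., 9.]])

rho = max(abs(np.linalg.eigvals(A)))    # spectral radius, ~16.1168
two = np.linalg.norm(A, 2)              # induced 2-norm, ~16.8481
fro = np.linalg.norm(A, 'fro')          # sqrt(285) ~ 16.8819
one = np.linalg.norm(A, 1)              # max absolute column sum = 18
inf_norm = np.linalg.norm(A, np.inf)    # max absolute row sum = 24
mx = np.abs(A).max()                    # largest absolute entry = 9
r = np.linalg.matrix_rank(A)            # 2: the matrix is singular

assert r == 2
assert (mx, one, inf_norm) == (9, 18, 24)
assert two <= fro <= np.sqrt(r) * two   # ||A||_2 <= ||A||_F <= sqrt(r)||A||_2
```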
