Numerical Analysis/Topics/Power iteration examples

The power method is an eigenvalue algorithm that can be used to find the eigenvalue of largest absolute value, together with a corresponding eigenvector. In some exceptional cases, however, it may fail to converge numerically to the dominant eigenvalue and dominant eigenvector. Before looking at such exceptional examples, we recall the definitions of dominant eigenvalue and dominant eigenvector.

Definitions
Let $$\lambda_1$$, $$\lambda_2$$, ..., $$\lambda_n$$ be the eigenvalues of an n×n matrix A. $$\lambda_1$$ is called the dominant eigenvalue of A if $$|\lambda_1| > |\lambda_i|$$ for $$i = 2, 3, \ldots, n$$. The eigenvectors corresponding to $$\lambda_1$$ are called dominant eigenvectors of A.

Next we show some examples in which the power method does not converge to the dominant eigenpair of the given matrix.
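The iteration used in the examples below (multiply by A, then scale by the entry $$c_k$$ of largest magnitude) can be sketched in plain Python. The function name `power_iteration` is our own choice for illustration, not from any library:

```python
def power_iteration(A, x0, iters=50):
    """Power method: repeatedly multiply by A and rescale.

    A  : square matrix given as a list of row lists
    x0 : starting guess vector (a list)
    Returns the last scale factor c_k (the eigenvalue estimate)
    and the final scaled vector X_k.
    Assumes the scale factor never becomes exactly zero.
    """
    x = list(x0)
    c = 0.0
    for _ in range(iters):
        # y = A x
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        # scale factor: the (signed) entry of largest magnitude
        c = max(y, key=abs)
        x = [yi / c for yi in y]
    return c, x
```

When the matrix has a unique dominant eigenvalue and the starting vector has a nonzero component along the dominant eigenvector, $$c_k$$ converges to that eigenvalue; the examples below show what can happen otherwise.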

Example 1
Use the power method to find an eigenvalue and a corresponding eigenvector of the matrix A.

$$ A=\left[\begin{array}{c c c}1 & 2 & 1 \\-4 & 7 & 1 \\-1 & -2 & -1 \end{array} \right] $$.

Solving the characteristic polynomial, we find that the eigenvalues of A are $$\lambda_1=0$$, $$\lambda_2=2$$, $$\lambda_3=5$$, so the dominant eigenvalue is 5. We choose the initial guess vector $$X_0=\left[\begin{array}{c}1 \\1 \\ 1 \\\end{array} \right]$$ and apply the power method.

$$Y_1$$ = $$A$$$$X_0$$ = $$\left[\begin{array}{c}4 \\4 \\ -4 \\\end{array} \right]$$, $$c_1=4$$, and it implies $$X_1$$ = $$\left[\begin{array}{c}1 \\1 \\ -1 \\\end{array} \right]$$. In this way,

$$Y_2$$ = $$A$$$$X_1$$ = $$\left[\begin{array}{c}2 \\2 \\ -2 \\\end{array} \right]$$, so $$c_2=2$$, and it implies $$X_2$$ = $$\left[\begin{array}{c}1 \\1 \\ -1 \\\end{array} \right]$$. As we can see, the sequence $$\left( c_{k} \right)$$ converges to 2, which is not the dominant eigenvalue. The reason is that the initial guess $$X_0$$ has no component in the direction of the dominant eigenvector: expanding $$X_0$$ in the eigenvectors of A gives a zero coefficient on the eigenvector for $$\lambda_3=5$$, so in exact arithmetic the iteration can only see the remaining eigenvalues. Indeed, $$X_1$$ is exactly an eigenvector for $$\lambda_2=2$$, and the iteration stays there.
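The iterations above can be reproduced with a short script (a plain-Python sketch using the matrix and starting vector of Example 1):

```python
A = [[1.0, 2.0, 1.0],
     [-4.0, 7.0, 1.0],
     [-1.0, -2.0, -1.0]]
x = [1.0, 1.0, 1.0]           # X_0

for k in range(1, 6):
    y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # Y_k = A X_{k-1}
    c = max(y, key=abs)       # scale factor c_k
    x = [yi / c for yi in y]  # X_k
    print(f"c_{k} = {c}")
```

The output shows $$c_1 = 4$$ and $$c_k = 2$$ for every $$k \geq 2$$: the iteration is stuck at the eigenpair for $$\lambda = 2$$ and never finds the dominant eigenvalue 5.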

Example 2
Consider the matrix $$A=\left[\begin{array}{c c c}3 & 2 & -2 \\-1 & 1 & 4 \\3 & 2 & -5 \end{array} \right] $$.

Apply the power method to estimate the dominant eigenvalue of the matrix, with starting guess

$$X_0$$ = $$\left[\begin{array}{c}1 \\1 \\ 1 \\\end{array} \right]$$.

$$Y_1$$ = $$A$$$$X_0$$ = $$\left[\begin{array}{c}3 \\4 \\ 0 \\\end{array} \right]$$,

thus $$c_1=4$$, and it implies

$$X_1$$ = $$\left[\begin{array}{c}0.75 \\1 \\ 0 \\\end{array} \right]$$.

We continue with a few more iterations:

$$Y_2$$ = $$A$$$$X_1$$ = $$\left[\begin{array}{c}4.25 \\0.25 \\ 4.25 \\\end{array} \right]$$

so $$c_2=4.25$$, and it implies $$X_2$$ = $$\left[\begin{array}{c}1 \\0.0588 \\ 1 \\\end{array} \right]$$.

$$Y_3$$ = $$A$$$$X_2$$ = $$\left[\begin{array}{c}1.1176 \\3.0588 \\ -1.8824 \\\end{array} \right]$$

so $$c_3=3.0588$$, and it implies

$$X_3$$ = $$\left[\begin{array}{c}0.36537 \\1 \\ -0.61541 \\\end{array} \right]$$.

We can see that the sequences $$\left( c_{k} \right)$$ and $$\left( X_{k} \right)$$ appear not to converge. One might suspect that the dominant eigenvalues are a complex conjugate pair, which is a classic cause of such oscillation, but that is not the case here: the characteristic polynomial of A is $$\lambda^3+\lambda^2-17\lambda+15=(\lambda-1)(\lambda-3)(\lambda+5)$$, so the eigenvalues are 1, 3 and -5, all real. The early iterates oscillate because the dominant eigenvalue -5 is negative, which flips the sign of the unscaled iterates, and because the convergence ratio $$\left|3/(-5)\right| = 0.6$$ is fairly slow. If the iteration is continued, with each step scaled by the (signed) entry of largest magnitude, $$c_k$$ does settle down to -5.
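Continuing the iteration in code makes this visible (a plain-Python sketch; each step scales by the signed entry of largest magnitude, matching the hand computation above):

```python
A = [[3.0, 2.0, -2.0],
     [-1.0, 1.0, 4.0],
     [3.0, 2.0, -5.0]]
x = [1.0, 1.0, 1.0]           # X_0

for k in range(1, 31):
    y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # Y_k = A X_{k-1}
    c = max(y, key=abs)       # signed entry of largest magnitude
    x = [yi / c for yi in y]  # X_k

print(c)   # approaches -5, the true dominant eigenvalue
print(x)   # approaches the eigenvector [0.4, -0.6, 1]
```

After about thirty iterations the estimate is close to -5 and the scaled vector is close to $$[0.4, -0.6, 1]^T$$, an eigenvector for -5, so the apparent divergence was only a slow, sign-flipping transient.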