Applying The Power Method to Find a Dominating Eigenvalue
Recall from The Power Method page that if $A$ is an $n \times n$ matrix with $n$ distinct eigenvalues $\lambda_1, \lambda_2, ..., \lambda_n$, one of which is dominating, say $\lambda_1$ with $\mid \lambda _1 \mid \: > \: \mid \lambda_2 \mid \: > \: ... \: > \: \mid \lambda_n \mid$, then we can approximate the eigenvalue $\lambda_1$ iteratively.
The scratch work on the page linked above was rather messy, so we will develop the method more generally here and then look at an example. Let $x^{(0)}$ be a nonzero vector such that $\| x^{(0)} \|_{\infty} = \max_{1 \leq i \leq n} \mid x_i^{(0)} \mid = 1$, so that each component of the vector $x^{(0)}$ is between $-1$ and $1$. Then define successive approximations by:
(1)
\begin{align} \quad x^{(k+1)} = Ax^{(k)} \quad , \quad k = 0, 1, 2, ... \end{align}
The vectors $x^{(k)}$ are approximations to a corresponding eigenvector of the dominating eigenvalue when appropriately scaled. However, we're looking for the dominating eigenvalue itself. The following theorem will give us a way to find this eigenvalue from the eigenvector approximations.
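As a concrete illustration, here is a minimal Python sketch of the iteration above, rescaling by the infinity norm at each step so the entries of each iterate stay between $-1$ and $1$. The function and variable names are only illustrative, not part of the development above.

```python
import numpy as np

def power_iteration(A, x0, num_steps=20):
    # Repeatedly apply x^(k+1) = A x^(k), rescaling by the infinity norm
    # so that the largest entry of each iterate has absolute value 1.
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        x = A @ x
        x = x / np.max(np.abs(x))
    return x  # approximate eigenvector for the dominating eigenvalue
```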
Theorem 1: Let $x$ be an eigenvector of $A$. Then the eigenvalue $\lambda$ corresponding to this eigenvector is given by $\lambda = \frac{Ax \cdot x}{x \cdot x}$.
The formula above is called the Rayleigh Quotient for the eigenvalue $\lambda$.
- Proof: Suppose that $x$ is an eigenvector of $A$. Then $Ax = \lambda x$ for the corresponding eigenvalue $\lambda$ of $A$. Substituting this into the Rayleigh quotient, we have that:
(2)
\begin{align} \quad \frac{Ax \cdot x}{x \cdot x} = \frac{\lambda x \cdot x}{x \cdot x} = \lambda \frac{x \cdot x}{x \cdot x} = \lambda \quad \blacksquare \end{align}
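As a quick sketch, the Rayleigh quotient of Theorem 1 can be computed in a couple of lines of Python; the function name here is just for illustration.

```python
import numpy as np

def rayleigh_quotient(A, x):
    # lambda = (Ax . x) / (x . x); exact when x is an eigenvector of A,
    # and an approximation when x only approximates an eigenvector.
    Ax = A @ x
    return np.dot(Ax, x) / np.dot(x, x)
```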
Let's now look at an example of finding an eigenvalue using this method.
Example 1
Find the dominating eigenvalue of the matrix $A = \begin{bmatrix} 3 & 5\\ 3 & 1 \end{bmatrix}$.
You should verify using the characteristic polynomial that the eigenvalues of $A$ are $-2$ and $6$. In particular, $6$ is the dominating eigenvalue.
We will now use the power method to further show that $6$ is an eigenvalue (though for a small matrix like this one, working directly with the characteristic polynomial is much easier). Let $x^{(0)} = \begin{bmatrix} 0.5\\ 0.5 \end{bmatrix}$ be our initial approximation. Then:
(3)
\begin{align} \quad x^{(1)} = Ax^{(0)} = \begin{bmatrix} 4\\ 2 \end{bmatrix} \quad , \quad x^{(2)} = Ax^{(1)} = \begin{bmatrix} 22\\ 14 \end{bmatrix} \quad , \quad x^{(3)} = Ax^{(2)} = \begin{bmatrix} 136\\ 80 \end{bmatrix} \quad , \quad x^{(4)} = Ax^{(3)} = \begin{bmatrix} 808\\ 488 \end{bmatrix} \quad , \quad x^{(5)} = Ax^{(4)} = \begin{bmatrix} 4864\\ 2912 \end{bmatrix} \end{align}
We will now scale our eigenvector approximations:
(4)
\begin{align} \quad \tilde{x^{(1)}} = \frac{x^{(1)}}{\| x^{(1)} \|_{\infty}} = \begin{bmatrix} 1\\ 0.5 \end{bmatrix} \quad , \quad \tilde{x^{(2)}} = \begin{bmatrix} 1\\ 0.6364 \end{bmatrix} \quad , \quad \tilde{x^{(3)}} = \begin{bmatrix} 1\\ 0.5882 \end{bmatrix} \quad , \quad \tilde{x^{(4)}} = \begin{bmatrix} 1\\ 0.6040 \end{bmatrix} \quad , \quad \tilde{x^{(5)}} = \begin{bmatrix} 1\\ 0.5987 \end{bmatrix} \end{align}
Let's now compute $A\tilde{x^{(5)}}$ where $\tilde{x^{(5)}}$ is our scaled eigenvector approximation of $x^{(5)}$. We have that:
(5)
\begin{align} \quad A\tilde{x^{(5)}} = \begin{bmatrix} 3 & 5\\ 3 & 1 \end{bmatrix} \begin{bmatrix} 1\\ 0.5987 \end{bmatrix} = \begin{bmatrix} 5.9934\\ 3.5987 \end{bmatrix} \end{align}
We now apply the formula given in Theorem 1:
(6)
\begin{align} \quad \lambda_1 \approx \frac{A\tilde{x^{(5)}} \cdot \tilde{x^{(5)}}}{\tilde{x^{(5)}} \cdot \tilde{x^{(5)}}} = \frac{(5.9934)(1) + (3.5987)(0.5987)}{(1)^2 + (0.5987)^2} = \frac{8.1479}{1.3584} \approx 5.998 \end{align}
This is very close to the true dominating eigenvalue $\lambda_1 = 6$, and further iterations would improve the approximation.
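For completeness, here is a short Python sketch that reproduces the computation in Example 1: five unscaled multiplications by $A$, a single scaling by the infinity norm, and then the Rayleigh quotient from Theorem 1. The variable names are illustrative, and the printed value should come out near $5.998$, in agreement with the computation above.

```python
import numpy as np

A = np.array([[3.0, 5.0],
              [3.0, 1.0]])
x = np.array([0.5, 0.5])           # x^(0), with infinity norm 1

# Five power method steps: x^(k+1) = A x^(k)
for _ in range(5):
    x = A @ x                      # ends with x^(5) = [4864, 2912]

x_scaled = x / np.max(np.abs(x))   # scale so the infinity norm is 1

# Rayleigh quotient from Theorem 1
Ax = A @ x_scaled
lam = np.dot(Ax, x_scaled) / np.dot(x_scaled, x_scaled)
print(lam)                         # approximately 5.998, close to 6
```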