# Eigenvalues and Eigenvectors Examples 1

Recall from the Eigenvalues and Eigenvectors page that the number $\lambda \in \mathbb{F}$ is said to be an eigenvalue of the linear operator $T \in \mathcal L (V)$ if $T(u) = \lambda u$ for some nonzero vector $u \in V$. The nonzero vectors $u$ such that $T(u) = \lambda u$ are called eigenvectors corresponding to the eigenvalue $\lambda$.
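For concreteness, the defining condition $T(u) = \lambda u$ can be checked numerically. The following sketch (a hypothetical illustration, not part of the original text) represents a linear operator on $\mathbb{R}^2$ by a matrix and verifies that a chosen vector is an eigenvector:

```python
# Represent a linear operator T on R^2 by the matrix A = [[2, 1], [1, 2]].
# The vector u = (1, 1) satisfies A u = (3, 3) = 3 u, so u is an
# eigenvector of T with eigenvalue lambda = 3.

def apply(A, u):
    """Apply the operator represented by matrix A to the vector u."""
    return [sum(A[i][j] * u[j] for j in range(len(u))) for i in range(len(A))]

A = [[2, 1], [1, 2]]
u = [1, 1]
lam = 3

# Check the eigenvalue equation T(u) = lambda * u componentwise.
assert apply(A, u) == [lam * x for x in u]
```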

We will now look at some examples regarding eigenvalues of linear operators and eigenvectors corresponding to eigenvalues.

## Example 1

Suppose that $V$ is a finite-dimensional vector space over $\mathbb{F}$, $T$ is a linear operator on $V$, and $\mathrm{dim} (\mathrm{null} (T)) = m > 0$. Prove that $T$ has at most $\mathrm{dim} (V) - m + 1$ distinct eigenvalues.

Suppose that $\mathrm{dim} (\mathrm{null} (T)) = m > 0$. Then $\mathrm{null} (T) \neq \{0 \}$ and so $\lambda = 0$ is an eigenvalue of the linear operator $T$. If $\lambda = 0$ is the only eigenvalue of $T$, then we are done. If not, then suppose that $T$ has $k$ nonzero eigenvalues, call them $\lambda_1, \lambda_2, ..., \lambda_k \in \mathbb{F}$. We will show that $k \leq \mathrm{dim} (V) - m$.

Now let $v_i \in V$ be an eigenvector associated with the eigenvalue $\lambda_i$ for $i = 1, 2, ..., k$. This set of eigenvectors $\{ v_1, v_2, ..., v_k \}$ is linearly independent by an earlier theorem, so if we let $W = \mathrm{span} (v_1, v_2, ..., v_k)$ then $\mathrm{dim} (W) = k$.

Now notice that $W \cap \mathrm{null}(T) = \{ 0 \}$. To see this, suppose that $w \in W \cap \mathrm{null}(T)$ and write $w = a_1v_1 + a_2v_2 + ... + a_kv_k$. Then:

(1)
\begin{align} \quad 0 = T(w) = T(a_1v_1 + a_2v_2 + ... + a_kv_k) = a_1T(v_1) + a_2T(v_2) + ... + a_kT(v_k) = a_1\lambda_1 v_1 + a_2 \lambda_2 v_2 + ... + a_k \lambda_k v_k \end{align}

Since $\{ v_1, v_2, ..., v_k \}$ is a linearly independent set, we have that $a_1\lambda_1 = 0$, $a_2 \lambda_2 = 0$, …, $a_k \lambda_k = 0$. Since $\lambda_i \neq 0$ for $i = 1, 2, ..., k$ (from the hypothesis made earlier), we have that $a_i = 0$ for $i = 1, 2, ..., k$, and so $w = 0$. Thus $W \cap \mathrm{null}(T) = \{ 0 \}$, and by the dimension formula for the sum of two subspaces:

(2)
\begin{align} \quad 0 = \mathrm{dim} (W \cap \mathrm{null}(T)) = \mathrm{dim} (W) + \mathrm{dim} (\mathrm{null}(T)) - \mathrm{dim} (W + \mathrm{null}(T)) \end{align}

Hence we have that:

(3)
\begin{align} \quad \mathrm{dim} (W) + \mathrm{dim} ( \mathrm{null}(T)) = \mathrm{dim} (W + \mathrm{null} (T)) \end{align}

Since both $W$ and $\mathrm{null}(T)$ are subspaces of $V$ we have that $\mathrm{dim} (W + \mathrm{null} (T)) \leq \mathrm{dim} (V)$ and so:

(4)
\begin{align} \quad \mathrm{dim} (W) + \mathrm{dim} ( \mathrm{null}(T)) \leq \mathrm{dim} (V) \\ \quad k + m \leq \mathrm{dim}(V) \end{align}

Thus we have that $k \leq \mathrm{dim} (V) - m$, and so, counting $\lambda = 0$ as well, $T$ has at most $k + 1 \leq \mathrm{dim} (V) - m + 1$ distinct eigenvalues.
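The bound from Example 1 can be checked on a concrete operator. In this hypothetical sketch, $T$ on $\mathbb{R}^3$ is represented by a diagonal matrix, so its eigenvalues are just the diagonal entries and $\mathrm{dim} (\mathrm{null} (T))$ is the number of zero entries:

```python
# Diagonal operator on R^3: the eigenvalues are the diagonal entries,
# and dim(null(T)) is the number of zero diagonal entries.
diag = [0, 0, 5]

dim_V = len(diag)
m = sum(1 for d in diag if d == 0)    # dim(null(T)) = 2 here
distinct_eigenvalues = set(diag)      # {0, 5}

# The bound from Example 1: at most dim(V) - m + 1 distinct eigenvalues.
assert m > 0
assert len(distinct_eigenvalues) <= dim_V - m + 1   # 2 <= 3 - 2 + 1
```

Here the bound is attained: $\mathrm{dim}(V) - m + 1 = 2$ and $T$ has exactly the two distinct eigenvalues $0$ and $5$.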

## Example 2

Let $T$ be a linear operator on $V$ and let $\mathrm{dim} ( \mathrm{range} (T)) = k$. Prove that $T$ has at most $k + 1$ distinct eigenvalues.

Suppose that $\lambda_1, \lambda_2, ..., \lambda_m$ are the distinct eigenvalues of $T$, and let $v_1, v_2, ..., v_m$ be corresponding eigenvectors to these eigenvalues. Since these eigenvalues are distinct, at most one of them can be zero. So for each $\lambda_j \neq 0$, $j = 1, 2, ..., m$ we have that:

(5)
\begin{align} \quad T(v_j) = \lambda_j v_j \\ \quad \frac{1}{\lambda_j} T(v_j) = v_j \\ \quad T \left ( \frac{v_j}{\lambda_j} \right ) = v_j \end{align}

So we see that since at most one of $\lambda_1, \lambda_2, ..., \lambda_m$ equals zero, at least $m - 1$ of the vectors $v_1, v_2, ..., v_m$ are contained in $\mathrm{range} (T)$. However, these vectors are linearly independent, and so:

(6)
\begin{align} m - 1 \leq \mathrm{dim} ( \mathrm{range} (T)) = k \\ m \leq k + 1 \end{align}

Therefore we see that $m$, the number of distinct eigenvalues of $T$, is at most $k + 1$.
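As with Example 1, this bound is easy to illustrate on a diagonal operator, where the rank is the number of nonzero diagonal entries (a hypothetical sketch, not from the original text):

```python
# Diagonal operator on R^3: rank = number of nonzero diagonal entries,
# eigenvalues = diagonal entries.
diag = [0, 2, 3]

k = sum(1 for d in diag if d != 0)    # dim(range(T)) = 2 here
distinct_eigenvalues = set(diag)      # {0, 2, 3}

# The bound from Example 2: at most k + 1 distinct eigenvalues.
assert len(distinct_eigenvalues) <= k + 1   # 3 <= 2 + 1, attained
```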

## Example 3

Suppose that $T$ is an invertible linear operator on $V$. Prove that $\lambda \neq 0$ is an eigenvalue of $T$ if and only if $\frac{1}{\lambda}$ is an eigenvalue of $T^{-1}$.

$\Rightarrow$ First suppose that $\lambda \neq 0$ is an eigenvalue of $T$. Then $T(u) = \lambda u$ for some nonzero vector $u \in V$, and since $\lambda \neq 0$ we have $\frac{1}{\lambda} T(u) = u$, so $T \left ( \frac{1}{\lambda} u \right ) = u$.

Since $T$ is invertible, $T^{-1}$ exists, and applying $T^{-1}$ to both sides of the equation above we get that:

(7)
\begin{align} \quad T^{-1} \left ( T \left ( \frac{1}{\lambda} u \right ) \right ) = T^{-1} (u) \\ \quad I \left ( \frac{1}{\lambda} u \right ) = T^{-1} (u) \\ \quad \frac{1}{\lambda} u = T^{-1}(u) \end{align}

So for the nonzero vector $u \in V$ we have that $T^{-1}(u) = \frac{1}{\lambda} u$, and so $\frac{1}{\lambda}$ is an eigenvalue of $T^{-1}$.

$\Leftarrow$ Now suppose that $\frac{1}{\lambda}$ is an eigenvalue of $T^{-1}$. Then $T^{-1} (u) = \frac{1}{\lambda} u$ for some nonzero vector $u \in V$, and by linearity $T^{-1} (\lambda u) = u$. Since $T^{-1}$ is also invertible with $T$ as its inverse, applying $T$ to both sides of this equation we have that:

(8)
\begin{align} \quad T(T^{-1} (\lambda u)) = T(u) \\ \quad I(\lambda u) = T(u) \\ \quad \lambda u = T(u) \end{align}

So for the nonzero vector $u \in V$ we have that $T(u) = \lambda u$, and so $\lambda \neq 0$ is an eigenvalue of $T$.
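The reciprocal relationship in Example 3 can also be verified numerically. In this hypothetical sketch, a $2 \times 2$ invertible matrix has eigenvector $u = (1, 1)$ with eigenvalue $3$, and its inverse (computed by the standard $2 \times 2$ adjugate formula) sends $u$ to $\frac{1}{3} u$:

```python
# A = [[2, 1], [1, 2]] is invertible, and u = (1, 1) is an eigenvector
# of A with eigenvalue 3; its inverse should then send u to (1/3) u.

def apply(A, u):
    """Apply the operator represented by matrix A to the vector u."""
    return [sum(A[i][j] * u[j] for j in range(len(u))) for i in range(len(A))]

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

A = [[2, 1], [1, 2]]
u = [1, 1]
lam = 3

assert apply(A, u) == [lam * x for x in u]   # T(u) = 3u

# T^{-1}(u) should equal (1/3) u, up to floating-point tolerance.
expected = [x / lam for x in u]
result = apply(inverse_2x2(A), u)
assert all(abs(a - b) < 1e-12 for a, b in zip(result, expected))
```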