Eigenvalues and Eigenvectors

Recall from the Invariant Subspaces page that a subspace $U$ of $V$ is said to be invariant under the linear operator $T \in \mathcal L (V)$ if $u \in U$ implies that $T(u) \in U$. Now suppose that the vector space $V$ is the direct sum of the nontrivial subspaces $U_1$, $U_2$, …, $U_m$, that is:

(1)
\begin{align} V = U_1 \oplus U_2 \oplus ... \oplus U_m \end{align}

We can understand how a linear operator $T \in \mathcal L (V)$ behaves by looking at how $T$ acts on each subspace $U_j$, provided that these subspaces are invariant under $T$. The easiest such subspaces to analyze are those of dimension $1$. If $u \in V$ is a nonzero vector, then the set of all scalar multiples of $u$ is a $1$-dimensional subspace $U$ of $V$ (and every $1$-dimensional subspace of $V$ arises this way), that is:

(2)
\begin{align} U = \{ au : a \in \mathbb{F} \} \end{align}

Suppose that the subspace $U$ defined above is invariant under $T$. Then we have that $u \in U$ implies that $T(u) \in U$, and so there must exist a scalar $\lambda \in \mathbb{F}$ such that:

(3)
\begin{align} T(u) = \lambda u \end{align}

These scalars $\lambda$ are important, and we define them as follows:

 Definition: Let $T \in \mathcal L (V)$. Then $\lambda$ is called an Eigenvalue or Characteristic Value of $T$ if there exists a nonzero vector $u$ such that $T(u) = \lambda u$, and $u$ is called a corresponding Eigenvector.
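The defining equation $T(u) = \lambda u$ can be checked numerically. The sketch below uses numpy and an illustrative matrix (chosen here for the example, not taken from the text) to verify that each eigenvalue/eigenvector pair returned by `numpy.linalg.eig` satisfies the definition.

```python
import numpy as np

# Illustrative matrix of a linear operator T on R^2 (an assumption
# for this sketch). It is upper triangular, so its eigenvalues are
# the diagonal entries 3 and 2.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# numpy.linalg.eig returns the eigenvalues and a matrix whose
# columns are corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining equation T(u) = lambda * u for each pair.
for lam, u in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ u, lam * u)
```

Note that `eig` normalizes each eigenvector to unit length, but any nonzero scalar multiple of an eigenvector is again an eigenvector for the same eigenvalue.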

We will now look at some important theorems on eigenvalues and eigenvectors.

 Theorem 1: Let $V$ be a finite-dimensional vector space and let $T \in \mathcal L (V)$. The following statements are equivalent: a) $\lambda$ is an eigenvalue of $T$. b) The linear operator $(T -\lambda I)$ is not injective. c) The linear operator $(T - \lambda I)$ is not surjective. d) The linear operator $(T - \lambda I)$ is not invertible.
• Proof: $a \implies b$. Suppose that $\lambda$ is an eigenvalue of $T$. Then there exists a nonzero vector $v \in V$ such that $T(v) = \lambda v$, that is, $T(v) = (\lambda I)(v)$, and so $(T - \lambda I)(v) = 0$. Since the nonzero vector $v$ lies in the null space of this operator, we have that $\mathrm{null} (T - \lambda I) \neq \{ 0 \}$, and so $(T - \lambda I)$ is not injective.
• $b \implies c$. Suppose that $(T - \lambda I)$ is not injective. From the Linear Operators page, injectivity, surjectivity, and invertibility are equivalent for operators on a finite-dimensional vector space, so $(T - \lambda I)$ is not surjective.
• $c \implies d$. Suppose that $(T - \lambda I)$ is not surjective. By the same equivalence, $(T - \lambda I)$ is not invertible.
• $d \implies a$. Suppose that $(T - \lambda I)$ is not invertible. By the same equivalence, $(T - \lambda I)$ is not injective, and so there exists a nonzero vector $v \in V$ such that $(T - \lambda I)(v) = 0$, that is, $T(v) = \lambda v$, so $\lambda$ is an eigenvalue of $T$. $\blacksquare$
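Theorem 1(d) can be observed concretely for matrices: $\lambda$ is an eigenvalue of $A$ exactly when $A - \lambda I$ is singular, i.e. has determinant $0$. A minimal sketch, using an illustrative matrix of our own choosing:

```python
import numpy as np

# Illustrative 2x2 matrix (an assumption for this sketch); its
# eigenvalues are 3 and 2 since it is upper triangular.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
I = np.eye(2)

# For each eigenvalue lam, A - lam*I is singular (det == 0).
for lam in np.linalg.eigvals(A):
    assert abs(np.linalg.det(A - lam * I)) < 1e-9

# For a value that is not an eigenvalue, A - lam*I is invertible.
assert abs(np.linalg.det(A - 5.0 * I)) > 1e-9
```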
 Theorem 2: Let $T \in \mathcal L (V)$. If $\lambda_1, \lambda_2, ..., \lambda_m$ are distinct eigenvalues of $T$ and $v_1, v_2, ..., v_m$ are the corresponding eigenvectors of $T$, then $\{ v_1, v_2, ..., v_m \}$ is a linearly independent set.
• Proof: Suppose instead that $\{ v_1, v_2, ..., v_m \}$ is actually a linearly dependent set of vectors. We will prove that this will result in a contradiction.
• Since the set of vectors $\{ v_1, v_2, ..., v_m \}$ is linearly dependent, the Linear Dependence Lemma guarantees a smallest natural number $k$ for which $v_k \in \mathrm{span} (v_1, v_2, ..., v_{k-1})$, and by the minimality of $k$, the set $\{ v_1, v_2, ..., v_{k-1} \}$ is linearly independent. Then there exist scalars $a_1, a_2, ..., a_{k-1}$ such that $v_k = a_1v_1 + a_2v_2 + ... + a_{k-1}v_{k-1}$. Applying the linear operator $T$ to both sides of this equation and using $T(v_j) = \lambda_j v_j$, we have that:
(4)
\begin{align} \quad T(v_k) = T(a_1v_1 + a_2v_2 + ... + a_{k-1}v_{k-1}) \\ \quad \lambda_k v_k = a_1 \lambda_1 v_1 + a_2 \lambda_2 v_2 + ... + a_{k-1} \lambda_{k-1} v_{k-1} \end{align}
• Now since $v_k = a_1v_1 + a_2v_2 + ... + a_{k-1}v_{k-1}$, multiplying both sides by $\lambda_k$ gives $\lambda_k v_k = \lambda_k a_1v_1 + \lambda_k a_2v_2 + ... + \lambda_k a_{k-1}v_{k-1}$ as well. Subtracting the earlier equation for $\lambda_k v_k$ from this one, we have that:
(5)
\begin{align} \quad 0 = a_1 (\lambda_k - \lambda_1) v_1 + a_2 (\lambda_k - \lambda_2) v_2 + ... + a_{k-1} (\lambda_k - \lambda_{k-1})v_{k-1} \end{align}
• Since $\{ v_1, v_2, ..., v_{k-1} \}$ is a linearly independent set, each coefficient $a_j (\lambda_k - \lambda_j)$ must equal $0$, and since the eigenvalues $\lambda_1, \lambda_2, ..., \lambda_k$ are distinct, $\lambda_k - \lambda_j \neq 0$ for each $j$, so $a_1 = a_2 = ... = a_{k-1} = 0$. But then this implies that $v_k = 0$, which is a contradiction since eigenvectors are nonzero. Hence our assumption that $\{ v_1, v_2, ..., v_m \}$ is linearly dependent is false. $\blacksquare$
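Theorem 2 can be illustrated numerically: stacking eigenvectors for distinct eigenvalues as the columns of a matrix should give full column rank. The matrix below is our own illustrative choice, not one from the text.

```python
import numpy as np

# Illustrative diagonal matrix (an assumption for this sketch) with
# three distinct eigenvalues: 2, 3, and 5.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

vals, vecs = np.linalg.eig(A)

# The eigenvalues are distinct...
assert len(set(np.round(vals, 8))) == 3

# ...so the eigenvectors (the columns of vecs) are linearly
# independent, i.e. the matrix of eigenvectors has full rank.
assert np.linalg.matrix_rank(vecs) == 3
```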
 Corollary 1: If $T \in \mathcal L (V)$ then $T$ has at most $\mathrm{dim} (V)$ distinct eigenvalues.
• Proof: Suppose that $T \in \mathcal L (V)$ and that $\mathrm{dim} (V) = n$. Then $V$ has a basis of length $n$. However, no linearly independent list of vectors from $V$ has length greater than $n$.
• By Theorem 2, if $\lambda_1, \lambda_2, ..., \lambda_m$ are distinct eigenvalues of $T$, then the corresponding set of eigenvectors $\{ v_1, v_2, ..., v_m \}$ is linearly independent. Therefore $m \leq n = \mathrm{dim} (V)$. $\blacksquare$
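Corollary 1 is easy to spot-check numerically: a matrix acting on an $n$-dimensional space has exactly $n$ eigenvalues counted with multiplicity, so it can have at most $n$ distinct ones. A quick sketch with a randomly generated matrix (an illustrative choice of ours):

```python
import numpy as np

# A random 4x4 real matrix; its eigenvalues may be complex, but
# there are exactly 4 of them counted with multiplicity.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

vals = np.linalg.eigvals(A)
assert len(vals) == 4

# The number of *distinct* eigenvalues is at most dim(V) = 4.
distinct = len(set(np.round(vals, 8)))
assert distinct <= 4
```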

Let's now look at some examples of finding eigenvalues.

Example 1: The Eigenvalues and Eigenvectors of The Identity Operator

Let $V$ be a vector space and suppose that $I \in \mathcal L (V)$ is the identity operator on $V$, that is for all $v \in V$ we have that $I(v) = v$. Suppose now that:

(6)
\begin{align} I(v) = \lambda v \end{align}

Since $I(v) = v$, we then have that $v = \lambda v$ for every nonzero $v \in V$, and so $\lambda = 1$ is the only eigenvalue of the identity operator. Additionally, every nonzero vector $v \in V$ is an eigenvector of the identity operator corresponding to $\lambda = 1$.

More generally, if $aI \in \mathcal L (V)$ then $(aI)(v) = av$, and $(aI)(v) = \lambda v$ for nonzero $v$ implies that $av = \lambda v$, and so $\lambda = a$ is the only eigenvalue of $aI$; every nonzero vector $v \in V$ is an eigenvector of $aI$.
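The scalar-operator case is simple enough to confirm directly. A minimal sketch, taking $a = 7$ on $\mathbb{R}^3$ as an illustrative choice:

```python
import numpy as np

# The scalar operator aI sends every vector v to a*v, so its only
# eigenvalue is a, and every nonzero vector is an eigenvector.
a = 7.0
A = a * np.eye(3)

vals, _ = np.linalg.eig(A)
assert np.allclose(vals, a)   # the eigenvalue a with multiplicity 3

# An arbitrary nonzero vector is an eigenvector for lambda = a.
v = np.array([1.0, -2.0, 0.5])
assert np.allclose(A @ v, a * v)
```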

Example 2

Suppose that $T \in \mathcal L (\mathbb{R}^2)$ is defined by $T(x, y) = (2x, y)$. Find all eigenvalues (if they exist) of $T$.

Let $(x, y) \in V$. We want to find $\lambda$ such that:

(7)
\begin{align} \quad (2x, y) = T(x, y) = \lambda ( x, y) = (\lambda x, \lambda y) \end{align}

Therefore $2x = \lambda x$ and $y = \lambda y$, where $(x, y) \neq (0, 0)$. If $x \neq 0$, the first equation gives $\lambda = 2$, and the second equation then forces $y = 0$; so $\lambda = 2$ is an eigenvalue with eigenvectors $(x, 0)$, $x \neq 0$. Similarly, if $y \neq 0$, the second equation gives $\lambda = 1$, and the first equation then forces $x = 0$; so $\lambda = 1$ is an eigenvalue with eigenvectors $(0, y)$, $y \neq 0$. There are no other eigenvalues, since both equations cannot hold for a single $\lambda$ when $x$ and $y$ are both nonzero. This should intuitively make sense. We note that $T$ takes each vector $(x, y)$ and maps it to $(2x, y)$, that is, $T$ is the linear transformation which stretches the $x$-coordinate by a factor of $2$ while the $y$-coordinate stays the same. Hence, the only vectors mapped to a multiple $\lambda$ of themselves are those lying on the coordinate axes.
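With respect to the standard basis, $T(x, y) = (2x, y)$ has the diagonal matrix $\mathrm{diag}(2, 1)$, so the two eigenvalues found above can be confirmed numerically:

```python
import numpy as np

# Matrix of T(x, y) = (2x, y) in the standard basis.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

vals, vecs = np.linalg.eig(A)
assert np.allclose(sorted(vals), [1.0, 2.0])   # eigenvalues 1 and 2

# (1, 0) is an eigenvector for lambda = 2; (0, 1) for lambda = 1.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
assert np.allclose(A @ e1, 2.0 * e1)
assert np.allclose(A @ e2, 1.0 * e2)
```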