Eigenvalues Review

We are now going to review some of the content we've seen recently regarding eigenvalues.

  • First recall that if $V$ is a vector space over the field $\mathbb{F}$ then an Invariant Subspace $U$ under the linear operator $T \in \mathcal L (V)$ is a subspace such that for every vector $u \in U$ we also have that $T(u) \in U$. Alternatively, we can say that $U$ is an invariant subspace (under $T$) if the restriction of $T$ to $U$, denoted $T_U$, is a linear operator on $U$, i.e., $T_U \in \mathcal L(U)$.
  • For example, the zero subspace $\{ 0 \}$ of any vector space $V$ is invariant under any linear operator $T$ since $u \in \{ 0\}$ implies $u = 0$ and $T(u) = T(0) = 0 \in \{ 0 \}$. Furthermore, if $U_1$ and $U_2$ are invariant under $T$ then $U_1 \cap U_2$ is invariant under $T$ and if $U_1$, $U_2$, …, $U_m$ are invariant under $T$ then the sum $U_1 + U_2 + ... + U_m$ is invariant under $T$.
  • Two more important examples of invariant subspaces are $\mathrm{null}(T)$ and $\mathrm{range} (T)$. Note that $u \in \mathrm{null}(T)$ implies that $T(u) = 0 \in \mathrm{null}(T)$ (since $\mathrm{null}(T)$ is a subspace of $V$ and so contains the zero vector), and $u \in \mathrm{range}(T)$ implies that $T(u) \in \mathrm{range}(T)$ (since $T(v) \in \mathrm{range}(T)$ for every $v \in V$). A concrete example is given after this list.
  • We also looked at an important theorem which said that if $V$ is a finite-dimensional vector space and $U$ is a proper nonzero subspace of $V$ (i.e., $U \neq \{ 0 \}$ and $U \neq V$) then there exists a linear operator $T \in \mathcal L(V)$ such that $U$ is NOT invariant under $T$.
  • On the Eigenvalues and Eigenvectors page we defined an Eigenvalue to be a scalar $\lambda \in \mathbb{F}$ such that there exists a nonzero vector $u \in V$ with $T(u) = \lambda u$, i.e., $\lambda$ is an eigenvalue of $T$ if some nonzero vector in $V$ is mapped by $T$ to $\lambda$ times itself. The corresponding nonzero vectors $u$ are called Eigenvectors corresponding to $\lambda$.
  • One simple example is the identity operator $I$ defined by $I(v) = v$ for every $v \in V$. Since $I(v) = v = 1 \cdot v$ for every nonzero $v \in V$, we see that $\lambda = 1$ is an eigenvalue of $I$ (and in fact its only eigenvalue, since $I(v) = \lambda v$ with $v \neq 0$ forces $\lambda = 1$).
  • Furthermore we noted that $\lambda$ is an eigenvalue of $T$ if and only if the operator $(T - \lambda I)$ is not injective. To quickly prove this again, note that if $\lambda$ is an eigenvalue of $T$ then $T(u) = \lambda u$ for some nonzero $u$, so $T(u) - \lambda u = 0$ and $(T - \lambda I)(u) = 0$; since $u$ is nonzero this implies that $\mathrm{null} (T - \lambda I) \neq \{ 0 \}$, so $(T - \lambda I)$ is not injective. Conversely, if $(T - \lambda I)$ is not injective then there exists a nonzero vector $u$ such that $(T - \lambda I)(u) = 0$, which implies that $T(u) = \lambda u$. From this, and the fact that injectivity, surjectivity, and invertibility are equivalent for operators on a finite-dimensional space, we can also note that $\lambda$ is an eigenvalue of $T$ if and only if $(T - \lambda I)$ is not surjective, if and only if $(T - \lambda I)$ is not invertible. A worked $2 \times 2$ example is given after this list.
  • We then saw that if $\lambda_1$, $\lambda_2$, …, $\lambda_m$ are distinct eigenvalues of $T$ then corresponding nonzero eigenvectors $v_1$, $v_2$, …, $v_m$ form a linearly independent set in $V$.
  • From this, since the size of a set of linearly independent vectors cannot exceed $\mathrm{dim} (V)$, it follows that $T$ has at most $\mathrm{dim} (V)$ distinct eigenvalues.
  • We then looked at the importance of Upper Triangular Matrices of Linear Operators. We noted that if $V$ is a finite-dimensional nonzero vector space and if $B_V = \{ v_1, v_2, ..., v_n \}$ is a basis of $V$ then the matrix $\mathcal M (T, B_V)$ being upper triangular is equivalent to $T(v_k) \in \mathrm{span} (v_1, v_2, ..., v_k)$ for each $k = 1, 2, ..., n$, which is equivalent to $\mathrm{span} (v_1, v_2, ..., v_k)$ being invariant under $T$ for each $k = 1, 2, ..., n$ (see the upper triangular example after this list).
  • Of course, finding a diagonal matrix (diagonal matrices are also upper triangular, so the previous theorems apply) that represents a linear operator $T$ is even more useful, as we saw on the Diagonal Matrices of Linear Operators page. We said that $T \in \mathcal L(V)$ was Diagonalizable if there exists a basis $B_V$ of $V$ such that $\mathcal M (T, B_V)$ is a diagonal matrix.
  • Furthermore, if $T \in \mathcal L (V)$ has $n = \mathrm{dim} (V)$ distinct eigenvalues then there exists a basis $B_V$ such that $\mathcal M (T, B_V)$ is a diagonal matrix.
  • We then saw that if $V$ is a finite-dimensional vector space, $T \in \mathcal L (V)$, and $\lambda_1$, $\lambda_2$, …, $\lambda_m$ are the distinct eigenvalues of $T$, then the following are equivalent: there exists a basis $B_V$ such that $\mathcal M (T, B_V)$ is diagonal; $V$ has a basis consisting of eigenvectors of $T$; there exist one-dimensional subspaces $U_1, U_2, ..., U_n$ of $V$, each invariant under $T$, such that $V = \bigoplus_{i=1}^n U_i$; $V = \bigoplus_{i=1}^m \mathrm{null} (T - \lambda_i I)$; and $\mathrm{dim} (V) = \sum_{i=1}^{m} \mathrm{dim} ( \mathrm{null} (T - \lambda_i I))$. (See the diagonalization example after this list.)
  • On the Projection Operators page, if $U$ and $W$ are subspaces of $V$ such that $V = U \oplus W$ then each $v \in V$ can be written uniquely as $v = u + w$ where $u \in U$ and $w \in W$, and we defined the Projection Operator onto $U$ along $W$ to be the linear operator given by $P_{U, W}(v) = u$ for all $v \in V$. It wasn't hard to then see that $\mathrm{range} (P_{U,W}) = U$ and $\mathrm{null} (P_{U,W}) = W$ (a small example is given below).
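
Some Worked Examples

To make the review above more concrete, here are a few small worked examples; the particular operators below are chosen only for illustration.

For invariant subspaces, define $T \in \mathcal L (\mathbb{R}^3)$ by $T(x, y, z) = (x, y, 0)$. Then $\mathrm{null}(T) = \{ (0, 0, z) : z \in \mathbb{R} \}$ and $\mathrm{range}(T) = \{ (x, y, 0) : x, y \in \mathbb{R} \}$, and both are invariant under $T$: every vector in $\mathrm{null}(T)$ is sent to $0 \in \mathrm{null}(T)$, and every vector in $\mathrm{range}(T)$ is sent to itself. On the other hand, the subspace $U = \mathrm{span} ((1, 0, 1))$ is not invariant under this $T$, since $T(1, 0, 1) = (1, 0, 0) \notin U$; this is the kind of behaviour the theorem above guarantees can always be arranged for a proper nonzero subspace.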
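
For the eigenvalue definitions, define $T \in \mathcal L (\mathbb{R}^2)$ by $T(x, y) = (3x, x + 2y)$. Then $(T - 2I)(x, y) = (x, x)$, so $\mathrm{null}(T - 2I) = \mathrm{span} ((0, 1)) \neq \{ 0 \}$, and $(T - 3I)(x, y) = (0, x - y)$, so $\mathrm{null}(T - 3I) = \mathrm{span} ((1, 1)) \neq \{ 0 \}$. Hence $T - 2I$ and $T - 3I$ are not injective, and $\lambda = 2$ and $\lambda = 3$ are eigenvalues of $T$ with eigenvectors $(0, 1)$ and $(1, 1)$ respectively (indeed $T(0, 1) = (0, 2)$ and $T(1, 1) = (3, 3)$). Note that $(0, 1)$ and $(1, 1)$ are linearly independent, and since a linearly independent set in $\mathbb{R}^2$ contains at most $2$ vectors, $T$ has no other eigenvalues.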
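
For the upper triangular criterion, let $S \in \mathcal L (\mathbb{R}^2)$ be defined by $S(x, y) = (x + y, y)$ and let $B_V = \{ e_1, e_2 \}$ be the standard basis. Since $S(e_1) = e_1 \in \mathrm{span} (e_1)$ and $S(e_2) = e_1 + e_2 \in \mathrm{span} (e_1, e_2)$, the subspaces $\mathrm{span} (e_1)$ and $\mathrm{span} (e_1, e_2)$ are invariant under $S$ and the matrix $\mathcal M (S, B_V) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is upper triangular, exactly as the equivalences above say. This $S$ is not diagonalizable, however: its only eigenvalue is $1$, and $\mathrm{null}(S - I) = \mathrm{span} (e_1)$ is only one-dimensional.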
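
For diagonalization, return to the operator $T(x, y) = (3x, x + 2y)$ above. It has $2 = \mathrm{dim} (\mathbb{R}^2)$ distinct eigenvalues, so it is diagonalizable, and with respect to the eigenvector basis $B_V = \{ (0, 1), (1, 1) \}$ we get $\mathcal M (T, B_V) = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The equivalent conditions above are also easy to check directly here: $\mathbb{R}^2 = \mathrm{null}(T - 2I) \oplus \mathrm{null}(T - 3I) = \mathrm{span} ((0, 1)) \oplus \mathrm{span} ((1, 1))$ and $\mathrm{dim} (\mathbb{R}^2) = 2 = 1 + 1 = \mathrm{dim} (\mathrm{null}(T - 2I)) + \mathrm{dim} (\mathrm{null}(T - 3I))$.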
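
Finally, for projection operators, take $V = \mathbb{R}^2$, $U = \mathrm{span} ((1, 0))$, and $W = \mathrm{span} ((1, 1))$, so that $V = U \oplus W$. Each $(x, y) \in \mathbb{R}^2$ decomposes uniquely as $(x, y) = (x - y, 0) + (y, y)$ with $(x - y, 0) \in U$ and $(y, y) \in W$, so $P_{U, W}(x, y) = (x - y, 0)$. From this formula it is clear that $\mathrm{range} (P_{U,W}) = U$, that $\mathrm{null} (P_{U,W}) = W$ (the vectors with $x = y$), and also that $P_{U,W}(P_{U,W}(x, y)) = P_{U,W}(x, y)$.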