# Basic Theorems Regarding the Coefficients of a Fourier Series

Let $\mathcal S = \{ \varphi_0(x), \varphi_1(x), ... \}$ be an orthonormal system of functions on $I$, let $f \in L^2(I)$, and consider the Fourier series of $f$ relative to $\mathcal S$:

(1)
\begin{align} \quad f(x) \sim \sum_{n=0}^{\infty} c_n \varphi_n(x) \end{align}

where for each $n \in \{ 0, 1, 2, ... \}$, $\displaystyle{c_n = (f(x), \varphi_n(x)) = \int_I f(x) \overline{\varphi_n(x)} \: dx}$.
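As a concrete numerical sketch of this definition, the coefficients $c_n = (f, \varphi_n)$ can be approximated by quadrature. The particular orthonormal system (the trigonometric system on $[-\pi, \pi]$) and the test function $f(x) = x$ below are illustrative choices, not part of the general discussion above:

```python
import numpy as np

# Midpoint-rule approximation of the inner product (f, g) = ∫_I f(x) conj(g(x)) dx
# on I = [-pi, pi]. The trigonometric system and f(x) = x are illustrative choices.
a, b, N = -np.pi, np.pi, 100_000
dx = (b - a) / N
x = a + (np.arange(N) + 0.5) * dx          # midpoint sample grid on [a, b]

def inner(f_vals, g_vals):
    return np.sum(f_vals * np.conj(g_vals)) * dx

# Orthonormal trigonometric system on [-pi, pi]:
# phi_0 = 1/sqrt(2*pi), phi_{2k-1} = cos(kx)/sqrt(pi), phi_{2k} = sin(kx)/sqrt(pi)
def phi(n):
    if n == 0:
        return np.full_like(x, 1.0 / np.sqrt(2 * np.pi))
    k = (n + 1) // 2
    trig = np.cos if n % 2 == 1 else np.sin
    return trig(k * x) / np.sqrt(np.pi)

f = x                                       # f(x) = x, a convenient odd test function
c = np.array([inner(f, phi(n)) for n in range(11)])

# Since f is odd, the constant and cosine coefficients vanish, and the
# coefficient against sin(kx)/sqrt(pi) works out to 2*sqrt(pi)*(-1)**(k+1)/k.
```

For example, $c_2 = (x, \sin(x)/\sqrt{\pi}) = 2\sqrt{\pi}$, which the quadrature reproduces to high accuracy.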

So far we have seen two extremely useful and important results regarding the coefficients $c_0, c_1, ...$ of the Fourier series.

• On the Bessel's Inequality for the Sum of Coefficients of a Fourier Series page we saw that the sum of the squares of the absolute values of the coefficients of the Fourier series is bounded by the square of the norm of $f$, i.e., $\displaystyle{\sum_{n=0}^{\infty} |c_n|^2 \leq \| f(x) \|^2}$. Since this series of nonnegative terms converges, $\displaystyle{\lim_{n \to \infty} |c_n|^2 = 0}$, which further implies that $c_n \to 0$ as $n \to \infty$.
• On the Parseval's Formula for the Sum of Coefficients of a Fourier Series page, for $\displaystyle{s_n(x) = \sum_{k=0}^{n} c_k\varphi_k(x)}$ (the $n^{\mathrm{th}}$ partial sum of the Fourier series of $f$ relative to $\mathcal S$) we saw that $\displaystyle{\sum_{n=0}^{\infty} |c_n|^2 = \| f(x) \|^2}$ if and only if $\displaystyle{\lim_{n \to \infty} \| f(x) - s_n(x) \| = 0}$. In other words, the sum of the squares of the absolute values of the coefficients of the Fourier series equals the square of the norm of $f$ if and only if the norm of the difference between $f$ and the partial sums of the Fourier series of $f$ relative to $\mathcal S$ goes to $0$ as $n \to \infty$.
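Both results above can be checked numerically. The sketch below (again using the illustrative trigonometric system on $[-\pi, \pi]$ and $f(x) = x$, which are not part of the statements themselves) verifies that a partial sum of $|c_n|^2$ stays below $\| f \|^2$, and that the defect $\| f - s_n \|^2$ equals $\| f \|^2 - \sum_{k=0}^{n} |c_k|^2$, the identity behind Parseval's formula:

```python
import numpy as np

# Numerical check of Bessel's inequality and the Parseval defect identity for
# f(x) = x on [-pi, pi] against the orthonormal trigonometric system.
# The system and f are illustrative choices.
a, b, N = -np.pi, np.pi, 100_000
dx = (b - a) / N
x = a + (np.arange(N) + 0.5) * dx
inner = lambda u, v: np.sum(u * np.conj(v)) * dx

def phi(n):
    if n == 0:
        return np.full_like(x, 1.0 / np.sqrt(2 * np.pi))
    k = (n + 1) // 2
    trig = np.cos if n % 2 == 1 else np.sin
    return trig(k * x) / np.sqrt(np.pi)

f = x
norm_f_sq = inner(f, f).real                 # ||f||^2 = ∫ x^2 dx = 2*pi^3/3
c = np.array([inner(f, phi(n)) for n in range(201)])
bessel_sum = np.sum(np.abs(c) ** 2)          # partial sum of |c_n|^2

# Partial sum s_n and the squared error ||f - s_n||^2, which should equal
# ||f||^2 - sum |c_k|^2 by orthonormality.
s_n = sum(cn * phi(n) for n, cn in enumerate(c))
err_sq = inner(f - s_n, f - s_n).real
```

Because the trigonometric system is complete on $[-\pi, \pi]$, taking more coefficients drives `err_sq` toward $0$, consistent with Parseval's formula.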

We will now look at some more properties of the coefficients of a Fourier series.

 Theorem 1: Let $\{ \varphi_0, \varphi_1, \varphi_2, ... \}$ be an orthonormal system of continuous functions on the interval $I = [a, b]$ and let $f$ and $g$ be continuous on $[a, b]$. Then the following statements are equivalent:
 a) If $(f, \varphi_n) = (g, \varphi_n)$ for all $n \in \{ 0, 1, 2, ... \}$ then $f = g$ (i.e., if $f$ and $g$ are continuous on $[a, b]$ and distinct, then $f$ and $g$ have different Fourier coefficients).
 b) If $(f, \varphi_n) = 0$ for all $n \in \{0, 1, 2, ... \}$ then $f = 0$ on $I$ (i.e., the only continuous function on $[a, b]$ whose Fourier coefficients are all $0$ is the constant function $f(x) = 0$).
 c) If $T$ is an orthonormal system of functions on $[a, b]$ for which $\{ \varphi_0, \varphi_1, \varphi_2, ... \} \subseteq T$ then $T = \{ \varphi_0, \varphi_1, \varphi_2, ... \}$ (i.e., the orthonormal system $\{ \varphi_0, \varphi_1, \varphi_2, ... \}$ cannot be enlarged by adding more functions).

It is vitally important to recognize that this theorem applies only to systems of CONTINUOUS functions on the closed and bounded interval $I = [a, b]$ and to CONTINUOUS functions $f$ and $g$ on $I$.

• **Proof of $a) \implies b)$:** Suppose that a) holds, i.e., $(f, \varphi_n) = (g, \varphi_n)$ for all $n \in \{0, 1, 2, ... \}$ implies $f = g$. Now suppose that $(f, \varphi_n) = 0$ for all $n \in \{0, 1, 2, ... \}$. If $g = 0$ then $(g, \varphi_n) = 0$ for all $n$ as well, i.e., $(f, \varphi_n) = (g, \varphi_n)$ for all $n$. So by a), $f = 0$. $\blacksquare$
• **Proof of $b) \implies c)$:** Suppose that $(f, \varphi_n) = 0$ for all $n \in \{0, 1, 2, ... \}$ implies $f = 0$. Let $T$ be an orthonormal system of functions on $I$ such that $\{ \varphi_0, \varphi_1, \varphi_2, ... \} \subseteq T$. We want to show that $T \subseteq \{ \varphi_0, \varphi_1, \varphi_2, ... \}$. Suppose not. Then there exists a $\phi \in T$ such that $\phi \not \in \{ \varphi_0, \varphi_1, \varphi_2, ... \}$. Since $T$ is an orthonormal system we have that $(\phi, \varphi_n) = 0$ for all $n \in \{0, 1, 2, ... \}$, which implies (by the hypothesis) that $\phi = 0$. But then $(\phi, \phi) = (0, 0) = 0$, contradicting the orthonormality of $T$, which requires $(\phi, \phi) = 1$. So our earlier assumption was wrong and we must have that $T = \{ \varphi_0, \varphi_1, \varphi_2, ... \}$. $\blacksquare$
• **Proof of $c) \implies a)$:** Suppose that if $T$ is an orthonormal system of functions for which $\{ \varphi_0, \varphi_1, \varphi_2, ... \} \subseteq T$ then $T = \{ \varphi_0, \varphi_1, \varphi_2, ... \}$.
• Suppose that $(f, \varphi_n) = (g, \varphi_n)$ for all $n \in \{0, 1, 2, ... \}$. Then $(f - g, \varphi_n) = 0$ for all $n$. Assume that $f \neq g$. Since $f$ and $g$ are continuous and distinct, $\| f - g \| > 0$, so we may define a new function $\phi$ on $I$ by:
(2)
\begin{align} \quad \phi = \frac{f - g}{\| f - g \|} \end{align}
• Let $T = \{ \varphi_0, \varphi_1, \varphi_2, ... \} \cup \{ \phi \}$. Then $\| \phi \| = 1$ and $\displaystyle{(\phi, \varphi_n) = \frac{(f - g, \varphi_n)}{\| f - g \|} = 0}$ for all $n$, so $T$ is an orthonormal system of continuous functions on $I$. By hypothesis, $T = \{ \varphi_0, \varphi_1, \varphi_2, ... \}$, so $\phi = \varphi_k$ for some $k \in \{0, 1, 2, ... \}$. But this is a contradiction: $(\phi, \varphi_k) = 0$ from above, while $(\phi, \varphi_k) = (\varphi_k, \varphi_k) = 1$ by the orthonormality of $T$. So the assumption that $f \neq g$ was false, i.e., $f = g$. $\blacksquare$
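The normalization step in equation (2) can be illustrated numerically: whenever $f$ and $g$ share their coefficients against $\varphi_0, ..., \varphi_m$, the function $\phi = (f - g)/\| f - g \|$ is a unit vector orthogonal to each $\varphi_n$, so adjoining it would enlarge the orthonormal system. The choices of $f$, $g$, and the trigonometric system below are illustrative assumptions, not part of the proof:

```python
import numpy as np

# Illustration of the normalization step in the proof of c) => a): if f and g
# agree in their coefficients against phi_0, ..., phi_7, then
# phi = (f - g)/||f - g|| has unit norm and is orthogonal to each of them.
# f, g, and the trigonometric system are illustrative choices.
a, b, N = -np.pi, np.pi, 100_000
dx = (b - a) / N
x = a + (np.arange(N) + 0.5) * dx
inner = lambda u, v: np.sum(u * np.conj(v)) * dx
norm = lambda u: np.sqrt(inner(u, u).real)

def phi_basis(n):
    if n == 0:
        return np.full_like(x, 1.0 / np.sqrt(2 * np.pi))
    k = (n + 1) // 2
    trig = np.cos if n % 2 == 1 else np.sin
    return trig(k * x) / np.sqrt(np.pi)

f = x
g = x + np.sin(5 * x)   # differs from f by a function orthogonal to phi_0, ..., phi_7

phi = (f - g) / norm(f - g)   # the unit vector built in equation (2)
# phi has norm 1 and (phi, phi_n) = 0 for n = 0, ..., 7, so adjoining it to
# {phi_0, ..., phi_7} would again yield an orthonormal set.
```

This is exactly why the theorem hinges on maximality: a nonzero continuous function with all coefficients zero would let the system grow, contradicting c).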