# Taylor and Maclaurin Series

So far we have looked at power series, that is, series in the form $\sum_{n=0}^{\infty} a_n(x - c)^n$. Suppose that this series converges to $f(x)$ on the interval $(c - R, c + R)$ where $R > 0$ is the radius of convergence of this power series, and $c$ is the center of convergence. The following Theorem tells us that the value of the coefficient $a_n$ is determined by taking the $n^{\mathrm{th}}$ derivative of $f$, evaluating it at $c$, and dividing by $n!$.

Theorem 1: Let $\sum_{n=0}^{\infty} a_n(x - c)^n$ be a power series that converges to $f(x)$ on the interval $(c - R, c + R)$ for $R > 0$. Then $a_n = \frac{f^{(n)}(c)}{n!}$ for $n = 0, 1, 2, ...$.

**Proof:** Consider the power series $f(x) = \sum_{n=0}^{\infty} a_n(x - c)^n = a_0 + a_1(x - c) + a_2(x - c)^2 + a_3(x - c)^3 + ...$ that converges to $f(x)$ on the interval $(c - R, c + R)$.

- We first note that $\frac{f^{(0)}(c)}{0!} = f(c) = a_0$. We will now differentiate the series above. Note that in doing so, the interval of convergence is still $(c - R, c + R)$ (since differentiating a power series preserves the radius of convergence and can at most lose convergence at one or both endpoints, at which the original series is not known to converge anyway). Thus $f'(x) = \sum_{n=1}^{\infty} n a_n (x - c)^{n-1} = a_1 + 2a_2(x - c) + 3a_3(x - c)^2 + ...$.

- Note that $\frac{f^{(1)}(c)}{1!} = f'(c) = a_1$. If we differentiate the power series again, we get that $f''(x) = \sum_{n=2}^{\infty} n(n - 1) a_n (x - c)^{n-2} = 2a_2 + 6a_3(x - c) + 12a_4(x - c)^2 + ...$.

- Note that $\frac{f^{(2)}(c)}{2!} = \frac{f''(c)}{2} = \frac{2a_2}{2} = a_2$. If we carry this process through inductively, we see that for $n = 0, 1, 2, ...$ we have that $f^{(n)}(c) = n! \, a_n$, and hence $a_n = \frac{f^{(n)}(c)}{n!}$.
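As a quick sanity check of Theorem 1 (not part of the original text), the sketch below uses the geometric series $\sum_{n=0}^{\infty} x^n$, which converges to $f(x) = \frac{1}{1 - x}$ on $(-1, 1)$ with every coefficient $a_n = 1$. The closed form $f^{(n)}(x) = \frac{n!}{(1 - x)^{n+1}}$ is a standard fact assumed here (provable by induction, not computed by the code):

```python
import math

def nth_derivative_at_0(n: int) -> float:
    """f^{(n)}(0) for f(x) = 1/(1 - x), using the standard closed form
    f^{(n)}(x) = n! / (1 - x)^(n+1)."""
    return math.factorial(n) / (1 - 0) ** (n + 1)

# Theorem 1 predicts a_n = f^{(n)}(0) / n!, which should equal the
# geometric coefficient 1 for every n.
for n in range(6):
    a_n = nth_derivative_at_0(n) / math.factorial(n)
    print(n, a_n)
```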

We will now define what a Taylor/Maclaurin series is.

Definition: Suppose that $f(x)$ has derivatives of all orders at the point $x = c$. Then the Taylor Series of $f$ about $x = c$ is given by $\sum_{n=0}^{\infty} f^{(n)}(c) \frac{(x - c)^n}{n!} = f(c) + f'(c)\frac{(x - c)}{1!} + f''(c)\frac{(x - c)^2}{2!} + f'''(c)\frac{(x - c)^3}{3!} + ...$. If $c = 0$, then we call this series a Maclaurin Series.

Before we look at some examples of Taylor and Maclaurin series, it is important to mention that not every Taylor series of $f$ about $x = c$ converges to $f$. From the definition above, a Taylor series is only guaranteed to converge at its center of convergence $x = c$. In fact, it is possible that the Taylor or Maclaurin series of $f$ about $x = c$ converges only at $x = c$, or that it converges but not to $f(x)$. We will state an important definition to classify the two types of functions $f$: ones whose Taylor series converges to $f(x)$ on an open interval containing $x = c$, and ones whose Taylor series does not.

Definition: A function $f$ is said to be an Analytic Function at $c$ if the Taylor series of $f$ at $x = c$ converges to $f(x)$ over an open interval containing $c$.

If the Taylor series of a function $f(x)$ is given by $\sum_{n=0}^{\infty} f^{(n)}(c) \frac{(x - c)^n}{n!} = f(c) + f'(c)\frac{(x - c)}{1!} + f''(c)\frac{(x - c)^2}{2!} + f'''(c)\frac{(x - c)^3}{3!} + ...$, we will write $f(x) \sim \sum_{n=0}^{\infty} f^{(n)}(c) \frac{(x - c)^n}{n!}$ until we know for sure that $f$ is an analytic function. For now, we will look at some examples of important Taylor series (specifically Maclaurin series) and assume the functions converge to their respective power series.

- $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + ...$, $\forall x \in \mathbb{R}$.

- $\sin x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n + 1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + ...$, $\forall x \in \mathbb{R}$.

- $\cos x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - ...$, $\forall x \in \mathbb{R}$.
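The three Maclaurin series above are easy to test numerically. The sketch below (an illustration, not a derivation) sums the first several terms of each series and compares the result against Python's built-in `math.exp`, `math.sin`, and `math.cos`; the sample point $x = 1.3$ and the term counts are arbitrary choices:

```python
import math

def maclaurin_exp(x: float, n_terms: int) -> float:
    """Partial sum of sum_{n>=0} x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(n_terms))

def maclaurin_sin(x: float, n_terms: int) -> float:
    """Partial sum of sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(n_terms))

def maclaurin_cos(x: float, n_terms: int) -> float:
    """Partial sum of sum_{n>=0} (-1)^n x^(2n) / (2n)!."""
    return sum((-1)**n * x**(2*n) / math.factorial(2*n)
               for n in range(n_terms))

x = 1.3
# Each partial sum should agree closely with the library function.
print(maclaurin_exp(x, 15), math.exp(x))
print(maclaurin_sin(x, 10), math.sin(x))
print(maclaurin_cos(x, 10), math.cos(x))
```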

We will eventually derive all of the Maclaurin series given above and more; however, let's first address the analytic nature of a function. First, define $T_n(x) = \sum_{k=0}^{n} f^{(k)}(c)\frac{(x - c)^k}{k!}$ to be the **$n^{\mathrm{th}}$ degree Taylor polynomial** of $f$ about $x = c$ (or the **$n^{\mathrm{th}}$ partial sum of the Taylor series** of $f$ about $x = c$). We define $R_n(x) = f(x) - T_n(x)$, called the **Lagrange remainder** or the **error between $f(x)$ and $T_n(x)$** (which we will touch upon soon). Notice that $R_n(x)$ is simply the difference between $f(x)$ and $T_n(x)$, and so $R_n(x)$ is essentially "what is left over", the "remainder/error". From this we get the following theorem:

Theorem 2: If for some function $f$, $f(x) = T_n(x) + R_n(x)$ and $\lim_{n \to \infty} R_n(x) = 0$ for $|x - c| < R$, then $f$ is analytic over the interval $(c - R, c + R)$ and $f(x) = \sum_{n=0}^{\infty} f^{(n)}(c) \frac{(x - c)^n}{n!}$ for $|x - c| < R$.

Theorem 2 above should make some sense. If $\lim_{n \to \infty} R_n(x) = 0$, then since $R_n(x) = f(x) - T_n(x)$ we have $\lim_{n \to \infty} [f(x) - T_n(x)] = 0$, and so the difference between $f(x)$ and the $n^{\mathrm{th}}$ degree Taylor polynomial $T_n(x)$ approaches $0$; i.e., $T_n(x)$ approximates $f(x)$ better and better for $|x - c| < R$.
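The shrinking remainder can be watched directly. The sketch below (an illustration under the assumption that $e^x$ is analytic, which the series list above takes for granted) computes $R_n(x) = f(x) - T_n(x)$ for $f(x) = e^x$ about $c = 0$ at the arbitrary sample point $x = 2$, for increasing degrees $n$:

```python
import math

def taylor_poly_exp(x: float, n: int) -> float:
    """T_n(x) = sum_{k=0}^{n} x^k / k! for f(x) = e^x about c = 0."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 2.0
degrees = [0, 5, 10, 15, 20]
# R_n(x) = f(x) - T_n(x); it should shrink toward 0 as n grows.
remainders = [math.exp(x) - taylor_poly_exp(x, n) for n in degrees]
for n, r in zip(degrees, remainders):
    print(n, r)
```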