The Method of Successive Approximations
Consider the initial value problem of solving the first order differential equation $\frac{dy}{dt} = f(t, y)$ with the initial condition $y(t_0) = y_0$. Recall that if $f$ and $\frac{\partial f}{\partial y}$ are both continuous on an open rectangle $I = (\alpha, \beta) \times (\gamma, \delta)$ and if $(t_0, y_0) \in I$, then there exists an interval $(t_0 - h, t_0 + h) \subseteq (\alpha, \beta)$ on which the initial value problem described above has a unique solution $y = \phi(t)$.
We are now going to look at a new technique for finding such unique solutions to first order differential equations known as the Method of Successive Approximations (or Picard's Iterative Method). We assume that we want to solve the differential equation $\frac{dy}{dt} = f(t, y)$ with the initial condition $y(0) = 0$. This assumption will make the calculations that follow much simpler, and furthermore, we can always transform the differential equation $\frac{dy}{dt} = f(t, y)$ with the initial condition $y(t_0) = y_0$ into a different differential equation with the initial condition $y(0) = 0$ using substitutions (by letting $s = t - t_0$ and $v = y - y_0$, as you should verify).
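To sketch that verification: let $s = t - t_0$ and $v(s) = y(s + t_0) - y_0$, and define $g(s, v) = f(s + t_0, v + y_0)$ (the name $g$ is just notation introduced here). Then:
$$\frac{dv}{ds} = \frac{dy}{dt} = f(t, y) = f(s + t_0, v + y_0) = g(s, v), \quad v(0) = y(t_0) - y_0 = 0,$$
which is an initial value problem of exactly the assumed form.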
The theorem for the existence of a unique solution $y = \phi (t)$ to this differential equation can be rephrased as follows: if $f$ and $\frac{\partial f}{\partial y}$ are continuous on a rectangle $R$ such that $-a \leq t \leq a$ and $-b \leq y \leq b$, then there exists an interval $(-h, h) \subseteq (-a, a)$ on which a unique solution $y = \phi (t)$ exists. Suppose that these conditions hold, and let $y = \phi (t)$ be this unique solution. Then:
$$\frac{d \phi}{dt}(t) = f(t, \phi(t)) \quad (1)$$
The right-hand side is a function of the variable $t$ only. Suppose that we integrate both sides starting at $0$ to an arbitrary value of $t$. We then get that:
$$\phi(t) = \int_0^t f(s, \phi(s)) \: ds \quad (2)$$
We should note that if $y = \phi (t)$ satisfies the above integral equation, then it also satisfies the initial value problem from earlier, since by The Fundamental Theorem of Calculus we have that $\frac{d \phi}{dt} = f(t, \phi(t)) = f(t, y)$, so $y = \phi(t)$ is a solution to the differential equation, and $\phi (0) = \int_0^0 f(s, \phi(s)) \: ds = 0$ shows that $y = \phi(t)$ satisfies the initial condition.
Now consider the simplest function:
$$\phi_0(t) = 0 \quad (3)$$
Clearly this function satisfies the initial condition (since $\phi_0(0) = 0$), though it need not be a solution to our differential equation. So $\phi_0(t) = 0$ approximates the unique solution to our differential equation (though likely not that well). For a closer approximation to $y = \phi (t)$, consider the following functions obtained recursively:
$$\phi_{n+1}(t) = \int_0^t f(s, \phi_n(s)) \: ds, \quad n = 0, 1, 2, \ldots \quad (4)$$
Notice that if for some $k = 0, 1, 2, \ldots$ we have that $\phi_k(t) = \phi_{k+1}(t)$, then $y = \phi_k(t)$ is equal to the solution of our differential equation. This can easily be seen with some algebraic manipulation:
$$\phi_k(t) = \phi_{k+1}(t) = \int_0^t f(s, \phi_k(s)) \: ds \quad (5)$$
That is, $y = \phi_k(t)$ satisfies the integral equation above and hence the initial value problem. Unfortunately, most of the time $\phi_k(t) \neq \phi_{k+1}(t)$ for every $k$. In such cases, we will want to consider the infinite sequence of functions $\phi_n$ that approximate the unique solution $y = \phi(t)$ to our initial value problem:
$$\phi_0, \phi_1, \phi_2, \ldots, \phi_n, \ldots \quad (6)$$
The following theorem says that the limit of this sequence of approximations is equal to our unique solution.
Theorem 1: If $\frac{dy}{dt} = f(t, y)$ is a first order differential equation with the initial condition $y(0) = 0$, and if $f$ and $\frac{\partial f}{\partial y}$ are both continuous on some rectangle $R$ for which $-a \leq t \leq a$ and $-b \leq y \leq b$, then $\lim_{n \to \infty} \phi_n(t) = \lim_{n \to \infty} \int_0^t f(s, \phi_{n-1}(s)) \: ds = \phi(t)$, where $y = \phi(t)$ is the guaranteed unique solution on the interval $(-h, h) \subseteq (-a, a)$.
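To make the recursion $(4)$ concrete, here is a minimal sketch of the successive approximations computed symbolically. It assumes the sympy library, and the example equation $\frac{dy}{dt} = t + y$ with $y(0) = 0$ (whose exact solution is $y = e^t - t - 1$) is our own choice of illustration, not one from the text:

```python
import sympy as sp

t, s = sp.symbols('t s')

def picard_iterates(f, n):
    """Return [phi_0, phi_1, ..., phi_n] for dy/dt = f(t, y), y(0) = 0,
    using the recursion phi_{k+1}(t) = integral from 0 to t of f(s, phi_k(s)) ds."""
    phi = sp.Integer(0)  # phi_0(t) = 0, the initial approximation
    iterates = [phi]
    for _ in range(n):
        # Substitute the current iterate into f and integrate from 0 to t.
        phi = sp.integrate(f(s, phi.subs(t, s)), (s, 0, t))
        iterates.append(sp.expand(phi))
    return iterates

# Example (our own choice): f(t, y) = t + y, whose unique solution
# through (0, 0) is y = e^t - t - 1.
for k, phi_k in enumerate(picard_iterates(lambda u, y: u + y, 4)):
    print(f"phi_{k}(t) =", phi_k)
```

The iterates come out as $t^2/2$, then $t^2/2 + t^3/6$, and so on: the partial sums of the Taylor series of $e^t - t - 1$, so the sequence visibly converges to the unique solution, as Theorem 1 guarantees.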
We will not prove Theorem 1, as the proof is somewhat beyond our scope.