Summary of Techniques: Solving First Order Differential Equations
We will now summarize the techniques we have discussed for solving first order differential equations.
- The Method of Direct Integration: If we have a differential equation in the form $\frac{dy}{dt} = f(t)$, then we can directly integrate both sides of the equation in order to find the solution. More precisely, the antiderivatives of $f$ are the solutions to this differential equation.
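As a minimal sketch of direct integration using SymPy, with a hypothetical right-hand side $f(t) = 3t^2$ (so the solutions are $y = t^3 + C$):

```python
import sympy as sp

# Hypothetical example: dy/dt = 3t^2, so the solutions are y = t^3 + C.
t, C = sp.symbols('t C')
f = 3*t**2
y = sp.integrate(f, t) + C   # antiderivative of f plus an arbitrary constant
```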
- The Method of Integrating Factors: If we have a linear differential equation in the form $\frac{dy}{dt} + p(t) y = g(t)$, or a differential equation that can easily be put into this form, then we can let $\mu (t) = e^{\int p(t) \: dt}$ be what is known as an integrating factor for our differential equation. Recall that $\mu(t)$ has the property that $\mu'(t) = \mu (t) p(t)$. We then multiply both sides of our differential equation by $\mu (t)$ and solve for $y$ as follows (provided that integrating $p(t)$ and $\mu (t) g(t)$ is not too cumbersome):
\begin{align} \quad \mu (t) \frac{dy}{dt} + \underbrace{\mu(t)p(t)}_{= \mu'(t)} y = \mu (t) g(t) \\ \quad \frac{d}{dt} \left ( \mu (t) y \right ) = \mu (t) g(t) \\ \quad \int \frac{d}{dt} \left ( \mu (t) y \right ) \: dt = \int \mu(t) g(t) \: dt \\ \quad \mu (t) y = \int \mu (t) g(t) \: dt \\ \quad y = \frac{1}{\mu (t)} \int \mu (t) g(t) \: dt \end{align}
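The steps above can be sketched in SymPy for a hypothetical equation $\frac{dy}{dt} + 2y = e^t$, that is, $p(t) = 2$ and $g(t) = e^t$:

```python
import sympy as sp

# Hypothetical example: dy/dt + 2y = e^t, so p(t) = 2 and g(t) = e^t.
t, C = sp.symbols('t C')
p = sp.Integer(2)
g = sp.exp(t)

mu = sp.exp(sp.integrate(p, t))          # integrating factor mu(t) = e^{int p dt}
y = (sp.integrate(mu * g, t) + C) / mu   # y = (1/mu) int mu g dt, plus a constant
```

Substituting `y` back into the left-hand side $y' + p(t) y$ recovers $g(t)$, which confirms the derivation for this example.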
- Solving Separable Differential Equations: One method for solving potentially nonlinear differential equations is separation of variables. Recall that a separable first order differential equation is in the form $M(x) + N(y) \frac{dy}{dx} = 0$. If our differential equation is in this form, then provided that integrating $M$ with respect to $x$ and $N$ with respect to $y$ is not too difficult, we can solve for $y$ by isolating each variable on its own side of the equation and then integrating. Note that the method of separable equations often results in implicit solutions.
\begin{align} \quad N(y) \frac{dy}{dx} = -M(x) \\ \quad N(y) \: dy = -M(x) \: dx \\ \quad \int N(y) \: dy = \int -M(x) \: dx \end{align}
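A minimal sketch in SymPy, for the hypothetical separable equation $-x + y \frac{dy}{dx} = 0$ (so $M(x) = -x$ and $N(y) = y$, giving the implicit solution $\frac{y^2}{2} = \frac{x^2}{2} + C$):

```python
import sympy as sp

# Hypothetical example: -x + y dy/dx = 0, i.e. M(x) = -x and N(y) = y.
x, y, C = sp.symbols('x y C')
M = -x
N = y

# Integrate each side of N dy = -M dx to get an implicit solution.
implicit = sp.Eq(sp.integrate(N, y), sp.integrate(-M, x) + C)
```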
- Solving Differential Equations with Substitutions: Sometimes a rather difficult looking first order differential equation can be drastically simplified by making the substitution $v = \frac{y}{x}$. If such a substitution looks applicable, then in doing so, we may end up with a first order differential equation that can be solved using some of the other techniques mentioned on this page.
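As a hypothetical example, consider the equation $\frac{dy}{dx} = 1 + \frac{y}{x}$. Substituting $v = \frac{y}{x}$, so that $y = vx$ and $\frac{dy}{dx} = v + x \frac{dv}{dx}$, reduces it to a directly integrable equation:

\begin{align} \quad v + x \frac{dv}{dx} = 1 + v \\ \quad x \frac{dv}{dx} = 1 \\ \quad v = \ln \mid x \mid + C \\ \quad y = x \ln \mid x \mid + Cx \end{align}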
- Exact Differential Equations: If we have a differential equation in the form of $M(x, y) + N(x, y) \frac{dy}{dx} = 0$, then this differential equation is said to be exact if there exists a function $\psi(x, y)$ such that $\psi_x(x, y) = M(x, y)$ and $\psi_y(x, y) = N(x, y)$. Suppose further that the partial derivatives of $M$ and $N$ are continuous on a rectangle $R$. By Clairaut's Theorem on the equality of mixed second partial derivatives, if our differential equation is exact then $M_y = \psi_{xy} (x, y) = \psi_{yx} (x,y) = N_x$. We can use this fact to find $\psi$, rewrite our differential equation as $\psi_x + \psi_y \frac{dy}{dx} = 0$, that is, as $\frac{d}{dx} \left ( \psi (x, y) \right ) = 0$, and so an (oftentimes) implicit solution to our differential equation can be obtained in the form:
\begin{align} \quad \psi (x, y) = C \end{align}
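A minimal sketch of the exactness test and the recovery of $\psi$ in SymPy, for the hypothetical equation $2xy + x^2 \frac{dy}{dx} = 0$ (where $M = 2xy$, $N = x^2$, and $\psi = x^2 y$):

```python
import sympy as sp

# Hypothetical example: 2xy + x^2 dy/dx = 0, with M = 2xy and N = x^2.
x, y = sp.symbols('x y')
M = 2*x*y
N = x**2

# Exactness test from Clairaut's Theorem: M_y = N_x.
is_exact = sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Recover psi: integrate M in x, then add the y-only part so that psi_y = N.
psi = sp.integrate(M, x)
psi += sp.integrate(sp.expand(N - sp.diff(psi, y)), y)
# The implicit solution is psi(x, y) = C, here x^2 y = C.
```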
- Bernoulli Differential Equations: A nonlinear first order differential equation is a Bernoulli differential equation if it is in the form $y' + p(x)y = g(x) y^n$. We can solve this class of differential equations by using the substitution $v = y^{1-n}$. If we differentiate $v$, we get that $v' = (1-n)y^{-n}y'$. We then divide our original differential equation by $y^n$ and apply these substitutions to get a linear first order differential equation that can be solved using some of the other techniques mentioned on this page.
\begin{align} \quad y' + p(x) y = g(x)y^n \\ \quad y^{-n}y' + p(x) y^{1-n} = g(x) \\ \quad \frac{1}{1 - n} v' + p(x) v = g(x) \end{align}
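A sketch of this substitution in SymPy, for the hypothetical Bernoulli equation $y' + y = y^2$ (so $n = 2$, $p(x) = 1$, $g(x) = 1$), solving the resulting linear equation in $v$ and undoing the substitution:

```python
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')

# Hypothetical example: y' + y = y^2, so n = 2, p(x) = 1, g(x) = 1.
n = 2
p = sp.Integer(1)
g = sp.Integer(1)

# The substitution v = y^{1-n} yields the linear equation v'/(1-n) + p v = g.
lin = sp.Eq(v(x).diff(x) / (1 - n) + p * v(x), g)
vsol = sp.dsolve(lin, v(x)).rhs          # solve the linear first order equation
ysol = vsol ** sp.Rational(1, 1 - n)     # undo the substitution: y = v^{1/(1-n)}
```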
- Euler's Method: If we have a first order differential equation $\frac{dy}{dt} = f(t, y)$ that is difficult to solve, and we want to solve an initial value problem with the initial condition $y(t_0) = y_0$, then we can sometimes use Euler's Method to create a piecewise linear function that approximates the solution.
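Euler's Method can be sketched in a few lines of Python; the example below uses the hypothetical initial value problem $y' = y$, $y(0) = 1$, whose exact solution is $e^t$:

```python
def euler(f, t0, y0, h, steps):
    """Approximate the IVP y' = f(t, y), y(t0) = y0 with step size h."""
    t, y = t0, y0
    points = [(t, y)]
    for _ in range(steps):
        y += h * f(t, y)     # follow the tangent line over one step
        t += h
        points.append((t, y))
    return points

# Hypothetical example: y' = y, y(0) = 1, whose exact solution is e^t.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

Joining the points in `approx` with line segments gives the piecewise linear approximation; for $y' = y$ the method produces $y_n = (1 + h)^n$, which undershoots $e^{t_n}$ slightly.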
- The Method of Successive Approximations: If we have a first order differential equation $\frac{dy}{dt} = f(t, y)$ and we want to solve the initial value problem with the initial condition $y(t_0) = y_0$, then if both $f$ and $\frac{\partial f}{\partial y}$ are continuous on an interval $I$ such that $t_0 \in I$, then we can use successive approximation functions $\phi_0, \phi_1, ..., \phi_n, ...$ to approximate the unique solution $y = \phi(t)$ of this problem. Assuming (after translating variables if necessary) that $t_0 = 0$ and $y_0 = 0$, we set $\phi_0(t) = 0$, and we take $\phi_n(t) = \int_0^t f(s, \phi_{n-1}(s)) \: ds$. If at some $k$ we have that $\phi_k (t) = \phi_{k+1}(t)$, then $y = \phi_k(t)$ is the solution to our differential equation. Otherwise the solution to our differential equation can be obtained as the limit as $n \to \infty$ of the sequence of approximation functions $\{ \phi_n \}$. Oftentimes our solutions will be infinite series unless we can express the infinite series more compactly in terms of elementary functions.
\begin{align} \quad \phi(t) = \lim_{n \to \infty} \phi_n = \lim_{n \to \infty} \int_0^t f(s, \phi_{n-1}(s)) \: ds \end{align}
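The iteration is easy to carry out symbolically. A sketch in SymPy for the hypothetical initial value problem $y' = 2t(1 + y)$, $y(0) = 0$, whose exact solution is $e^{t^2} - 1$:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Hypothetical IVP (already translated so t0 = 0, y0 = 0):
# y' = 2t(1 + y), y(0) = 0, whose exact solution is e^{t^2} - 1.
f = lambda s_, y_: 2*s_*(1 + y_)

phi = sp.Integer(0)                            # phi_0(t) = 0
for _ in range(4):                             # compute phi_1 through phi_4
    phi = sp.integrate(f(s, phi.subs(t, s)), (s, 0, t))
# phi_4 matches the Taylor expansion of e^{t^2} - 1 through the t^8 term.
```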
We will also comment on the existence of solutions for linear first order differential equations and general first order differential equations. Be sure to remember the following two theorems:
Theorem (Existence/Uniqueness of Linear First Order Differential Equations): Let $p$ and $g$ be continuous functions on the open interval $I = ( \alpha, \beta)$, and let $t_0 \in (\alpha, \beta)$. Then there exists a unique solution $y = \phi (t)$, defined for all $t \in I$, to the differential equation $\frac{dy}{dt} + p(t) y = g(t)$ that also satisfies the initial condition $y(t_0) = y_0$.
Theorem (Existence/Uniqueness of General First Order Differential Equations): Let $f = f(t, y)$ and $\frac{\partial}{\partial y} f(t, y)$ be continuous functions on an open rectangle $I = (\alpha, \beta) \times (\gamma, \delta) = \{ (t, y) : \alpha < t < \beta , \gamma < y < \delta \}$ and let $(t_0, y_0) \in I$. Then on some interval $(t_0 - h, t_0 + h) \subseteq (\alpha, \beta)$ there exists a unique solution $y = \phi (t)$ to the differential equation $\frac{dy}{dt} = f(t, y)$ that also satisfies the initial condition $y(t_0) = y_0$.
The continuity of $f$ alone guarantees us a solution to the initial value problem to the differential equation $\frac{dy}{dt} = f(t, y)$ with the initial condition $y(t_0) = y_0$, and the continuity of $f$ paired with the continuity of $\frac{\partial f}{\partial y}$ guarantees us a unique solution.
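A standard example of why the continuity of $\frac{\partial f}{\partial y}$ matters: for the initial value problem $\frac{dy}{dt} = y^{1/3}$ with $y(0) = 0$, the function $f(t, y) = y^{1/3}$ is continuous, but $\frac{\partial f}{\partial y} = \frac{1}{3} y^{-2/3}$ is not continuous at $y = 0$, and indeed the problem has more than one solution:

\begin{align} \quad y = 0 \quad \mathrm{and} \quad y = \left ( \frac{2t}{3} \right )^{3/2} \quad (t \geq 0) \end{align}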