# Propagation of Error

Suppose that we use a value that contains error within a calculation. The result of that calculation will then also contain error, with the amount depending on the original error, the type of operations performed, and the number of operations performed. For example, suppose that we estimate the number of fish in a secluded pond to be $x_A = 512302$ while the true population of fish is $x_T = 514029$ (realistically, $x_T$ would be unknown). Clearly $x_A$ has some error associated with it.

Suppose that ten years later, we approximate the new population of fish to be $y_A = 640331$ while the true population of fish is $y_T = 650084$ (once again, realistically $y_T$ would be unknown). Once again $y_A$ has some error associated with it.

The rate of population change over these ten years, computed from our approximations $x_A$ and $y_A$, is given as:

(1)
\begin{align} \quad \frac{y_A - x_A}{10} = \frac{640331 - 512302}{10} = 12802.9 \mathrm{\: fish \: per \: year} \end{align}

Note that both $x_A$ and $y_A$ have associated errors, and as a result, when we subtract the values in finding the rate of population change over a ten-year period, the error will be carried through.

 Definition: Let $\omega$ be the operation of addition $+$, subtraction $-$, multiplication $\cdot$, or division $\div$. The Propagated Error $E$ of the approximated values $x_A, y_A \in \mathbb{R}$ to the true values $x_T, y_T \in \mathbb{R}$ is $E = (x_T \omega y_T) - (x_A \omega y_A)$.
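To make the definition concrete, the propagated error of the difference in the fish-population example above can be computed directly (a minimal Python sketch; the variable names are illustrative):

```python
# Propagated error E = (x_T w y_T) - (x_A w y_A), where w is subtraction,
# using the fish-population values from the example above.
x_T, x_A = 514029, 512302   # true and approximate populations, initial count
y_T, y_A = 650084, 640331   # true and approximate populations, ten years later

E = (y_T - x_T) - (y_A - x_A)   # error propagated through the subtraction
print(E)  # 8026
```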

Oftentimes we wish to bound the propagated error $E$. For example, let $x_A = 1.123$ and let $y_A = 1.42$ be approximations of the true values $x_T, y_T \in \mathbb{R}$, and suppose that $x_A$ and $y_A$ have been rounded correctly to $4$ and $3$ significant digits respectively.

Our approximation $x_A + y_A$ of $x_T + y_T$ is hence given as:

(2)
\begin{align} \quad x_A + y_A = 1.123 + 1.42 = 2.543 \end{align}

Since $x_A$ is correct to $4$ significant digits of accuracy, we have that $\mid x_T - x_A \mid ≤ 0.0005$ and so $-0.0005 ≤ x_T - x_A ≤ 0.0005$. Thus:

(3)
\begin{align} \quad x_A - 0.0005 ≤ x_T ≤ x_A + 0.0005 \\ \quad 1.123 - 0.0005 ≤ x_T ≤ 1.123 + 0.0005 \\ \quad 1.1225 ≤ x_T ≤ 1.1235 \end{align}

Similarly, since $y_A$ is correct to $3$ significant digits of accuracy, we have that $\mid y_T - y_A \mid ≤ 0.005$ and so $-0.005 ≤ y_T - y_A ≤ 0.005$. Thus:

(4)
\begin{align} \quad y_A - 0.005 ≤ y_T ≤ y_A + 0.005 \\ \quad 1.42 - 0.005 ≤ y_T ≤ 1.42 + 0.005 \\ \quad 1.415 ≤ y_T ≤ 1.425 \end{align}

Combining these two inequalities, we get that the sum $x_T + y_T$ lies within:

(5)
\begin{align} \quad 1.1225 + 1.415 ≤ x_T + y_T ≤ 1.1235 + 1.425 \\ \quad 2.5375 ≤ x_T + y_T ≤ 2.5485 \end{align}

We can bound the propagated error $E = (x_T + y_T) - (x_A + y_A)$ by subtracting $x_A + y_A = 2.543$ from each part of the last inequality, and thus:

(6)
\begin{align} \quad 2.5375 - 2.543 ≤ (x_T + y_T) - (x_A + y_A) ≤ 2.5485 - 2.543 \\ \quad -0.0055 ≤ (x_T + y_T) - (x_A + y_A) ≤ 0.0055 \\ \quad \mid (x_T + y_T) - (x_A + y_A) \mid = \mid E \mid ≤ 0.0055 \end{align}

Notice that $0.0055$ is the sum of the maximum errors of $x_A$ and $y_A$ from $x_T$ and $y_T$ respectively. In fact, for sums and differences this will always be the case.
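The interval computation above is easy to verify numerically. The following sketch recomputes the bounds on $x_T + y_T$ and the resulting bound on $\mid E \mid$ (values as in the example; `round` is used only to suppress floating-point noise):

```python
x_A, y_A = 1.123, 1.42
e_x, e_y = 0.0005, 0.005           # maximum rounding errors of x_A and y_A

low  = (x_A - e_x) + (y_A - e_y)   # smallest possible value of x_T + y_T
high = (x_A + e_x) + (y_A + e_y)   # largest possible value of x_T + y_T
s = x_A + y_A                      # the approximate sum, 2.543

print(round(low, 4), round(high, 4))               # 2.5375 2.5485
print(round(max(abs(low - s), abs(high - s)), 4))  # 0.0055
```

Note that the bound on $\mid E \mid$ equals $e_x + e_y$, the sum of the two maximum errors.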

We will now look at some formulas for calculating error propagation (for addition and subtraction) and relative error propagation (for multiplication and division).

## Error Propagation with Addition and Subtraction

 Proposition 1: Let $x_A, y_A \in \mathbb{R}$ be approximations of the true values $x_T, y_T \in \mathbb{R}$. Then the error of the sum/difference $x_A \pm y_A$ is given by the formula $\mathrm{Error} (x_A \pm y_A) = \mathrm{Error} (x_A) \pm \mathrm{Error} (y_A)$.
• Proof: Let $x_T = x_A + \epsilon$ and $y_T = y_A + \eta$ where $\epsilon$ is the error of $x_A$ to $x_T$ and $\eta$ is the error of $y_A$ to $y_T$. Then $x_A = x_T - \epsilon$ and $y_A = y_T - \eta$. The error $\mathrm{Error} (x_A \pm y_A)$ is given by the following formulas:
(7)
\begin{align} \quad \mathrm{Error} (x_A + y_A) = (x_T + y_T) - (x_A + y_A) = (x_T + y_T) - \left [ (x_T - \epsilon) + (y_T - \eta) \right ] = \epsilon + \eta = \mathrm{Error} (x_A) + \mathrm{Error} (y_A) \end{align}
(8)
\begin{align} \quad \mathrm{Error} (x_A - y_A) = (x_T - y_T) - (x_A - y_A) = (x_T - y_T) - \left [ (x_T - \epsilon) - (y_T - \eta) \right ] = \epsilon - \eta = \mathrm{Error} (x_A) - \mathrm{Error} (y_A) \end{align}
• Thus we have obtained the desired result. $\blacksquare$
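Proposition 1 can be sanity-checked numerically; below is a sketch with integer values (so the arithmetic is exact), reusing the fish-population figures from earlier:

```python
x_T, x_A = 514029, 512302
y_T, y_A = 650084, 640331

def error(true, approx):
    """Error of an approximation: true value minus approximate value."""
    return true - approx

# Error of the sum is the sum of the errors ...
assert error(x_T + y_T, x_A + y_A) == error(x_T, x_A) + error(y_T, y_A)
# ... and the error of the difference is the difference of the errors.
assert error(x_T - y_T, x_A - y_A) == error(x_T, x_A) - error(y_T, y_A)
```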

## Relative Error Propagation with Multiplication

 Proposition 2: Let $x_A, y_A \in \mathbb{R}$ be approximations of the true values $x_T, y_T \in \mathbb{R}$. Then the relative error of the product $x_A \cdot y_A$ is given by the formula $\mathrm{Rel} (x_A \cdot y_A) = \mathrm{Rel} (x_A) + \mathrm{Rel} (y_A) - \mathrm{Rel} (x_A) \mathrm{Rel} (y_A)$.
• Proof: Let $x_T = x_A + \epsilon$ and $y_T = y_A + \eta$ where $\epsilon$ is the error of $x_A$ to $x_T$ and $\eta$ is the error of $y_A$ to $y_T$. Then $x_A = x_T - \epsilon$ and $y_A = y_T - \eta$. The relative error $\mathrm{Rel} (x_A \cdot y_A)$ is given by the following formula:
(9)
\begin{align} \quad \mathrm{Rel} (x_A \cdot y_A) = \frac{x_T \cdot y_T - x_A \cdot y_A}{x_T \cdot y_T} = \frac{x_T \cdot y_T - (x_T - \epsilon)(y_T - \eta)}{x_T \cdot y_T} = \frac{x_T \cdot y_T - x_T \cdot y_T + \eta x_T + \epsilon y_T - \epsilon \eta}{x_T \cdot y_T} = \frac{\eta x_T + \epsilon y_T - \epsilon \eta}{x_T \cdot y_T} \\ = \frac{\eta}{y_T} + \frac{\epsilon}{x_T} - \left ( \frac{\epsilon}{x_T} \right ) \left ( \frac{\eta}{y_T} \right ) = \frac{y_T - y_A}{y_T} + \frac{x_T - x_A}{x_T} - \left ( \frac{x_T - x_A}{x_T} \right ) \left ( \frac{y_T - y_A}{y_T} \right ) = \mathrm{Rel} (x_A) + \mathrm{Rel} (y_A) - \mathrm{Rel} (x_A) \mathrm{Rel} (y_A) \end{align}
• Thus we have obtained the desired result. $\blacksquare$
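The identity in Proposition 2 holds exactly, which can be checked with exact rational arithmetic (a sketch using Python's `fractions` module and the fish-population figures from earlier):

```python
from fractions import Fraction  # exact rational arithmetic avoids rounding noise

x_T, x_A = Fraction(514029), Fraction(512302)
y_T, y_A = Fraction(650084), Fraction(640331)

def rel(true, approx):
    """Relative error of an approximation to a nonzero true value."""
    return (true - approx) / true

lhs = rel(x_T * y_T, x_A * y_A)
rhs = rel(x_T, x_A) + rel(y_T, y_A) - rel(x_T, x_A) * rel(y_T, y_A)
assert lhs == rhs  # the identity holds exactly
```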

## Relative Error Propagation with Division

 Proposition 3: Let $x_A, y_A \in \mathbb{R}$ be approximations of the true values $x_T, y_T \in \mathbb{R}$. Then the relative error of the quotient $\frac{x_A}{y_A}$ is given by the formula $\mathrm{Rel} \left ( \frac{x_A}{y_A} \right ) = \frac{\mathrm{Rel}(x_A) - \mathrm{Rel}(y_A)}{1 - \mathrm{Rel}(y_A)}$.
• Proof: Let $x_T = x_A + \epsilon$ and $y_T = y_A + \eta$ where $\epsilon$ is the error of $x_A$ to $x_T$ and $\eta$ is the error of $y_A$ to $y_T$. Then $x_A = x_T - \epsilon$ and $y_A = y_T - \eta$. The relative error $\mathrm{Rel} \left ( \frac{x_A}{y_A} \right )$ is given by the following formula:
(10)
\begin{align} \quad \mathrm{Rel} \left ( \frac{x_A}{y_A} \right ) = \frac{\frac{x_T}{y_T} - \frac{x_A}{y_A}}{\frac{x_T}{y_T}} = \frac{x_Ty_A - y_Tx_A}{x_Ty_A} = \frac{x_T(y_T - \eta) - y_T(x_T - \epsilon)}{x_T(y_T - \eta)} = \frac{x_Ty_T - \eta x_T - x_Ty_T + \epsilon y_T}{x_Ty_T - \eta x_T} \\ = \frac{\epsilon y_T - \eta x_T}{x_Ty_T - \eta x_T} = \frac{\frac{\epsilon}{x_T} - \frac{\eta}{y_T}}{1 - \frac{\eta}{y_T}} = \frac{\frac{x_T - x_A}{x_T} - \frac{y_T - y_A}{y_T}}{1 - \frac{y_T - y_A}{y_T}}= \frac{\mathrm{Rel} (x_A) - \mathrm{Rel}(y_A)}{1 - \mathrm{Rel} (y_A)} \end{align}
• Thus we have obtained the desired result. $\blacksquare$
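Likewise, the quotient identity in Proposition 3 can be verified exactly (a sketch in the same style, assuming $y_T \neq 0$ and $\mathrm{Rel}(y_A) \neq 1$ so the denominators are nonzero):

```python
from fractions import Fraction  # exact rational arithmetic avoids rounding noise

x_T, x_A = Fraction(514029), Fraction(512302)
y_T, y_A = Fraction(650084), Fraction(640331)

def rel(true, approx):
    """Relative error of an approximation to a nonzero true value."""
    return (true - approx) / true

lhs = rel(x_T / y_T, x_A / y_A)
rhs = (rel(x_T, x_A) - rel(y_T, y_A)) / (1 - rel(y_T, y_A))
assert lhs == rhs  # the identity holds exactly
```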