The Residual Correction Method
Recall that if we use a program to compute the solution $x$ to a system of linear equations $Ax = b$, where $A$ is an $n \times n$ matrix, then our computed solution $\hat{x}$ is only an approximation to the actual solution $x$. We defined the residual to be $r = b - A \hat{x}$ and the error $e$ of $\hat{x}$ from $x$ to be $e = x - \hat{x}$.
We also saw that the error $e$ is the solution to the system $Ae = r$. Unfortunately, we also noted that computing the actual error in our approximation $\hat{x}$ has some difficulties: solving $Ae = r$ is subject to the same kinds of errors as solving $Ax = b$ in the first place.
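To make the relation $Ae = r$ concrete, here is a minimal NumPy sketch; the matrix, right-hand side, and perturbation are made up purely for illustration. It computes the residual of a perturbed solution and checks that the error satisfies $Ae = r$:

```python
import numpy as np

# Illustrative (made-up) 3x3 system Ax = b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, b)                        # stand-in for the exact solution x
x_hat = x + 1e-6 * np.array([1.0, -2.0, 0.5])    # a perturbed "computed" solution x-hat

r = b - A @ x_hat                                # residual r = b - A x-hat
e = x - x_hat                                    # error e = x - x-hat

# Since A e = A x - A x-hat = b - A x-hat = r, the two sides should agree.
print(np.allclose(A @ e, r))                     # True
```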
We will now look at an iterative method, known as The Residual Correction Method, that uses these error estimates to improve our computed solution. With each iteration, the residual is used to reduce the error between $\hat{x}$ and $x$.
Let $x^{(0)} = \hat{x}$, that is, let $x^{(0)}$ be the initial computed solution of the system $Ax = b$. We can then define the residual of $x^{(0)}$ as:
(1) $r^{(0)} = b - A x^{(0)}$
We then solve the system $A e^{(0)} = r^{(0)}$ for $e^{(0)}$ (note that $e^{(0)}$ is the actual error between $\hat{x}$ and $x$). In solving for $e^{(0)}$, we obtain only an approximation of it, which we denote $\hat{e}^{(0)} \approx e^{(0)}$. We then use this value $\hat{e}^{(0)}$ to improve our approximation. Let:
(2) $x^{(1)} = x^{(0)} + \hat{e}^{(0)}$
We can then define another residual as:
(3) $r^{(1)} = b - A x^{(1)}$
We then solve the system $A e^{(1)} = r^{(1)}$ for $e^{(1)}$ (the error of the improved approximation $x^{(1)}$), obtaining the approximation $\hat{e}^{(1)}$. We can then use this value $\hat{e}^{(1)}$ to improve our approximation further by letting:
(4) $x^{(2)} = x^{(1)} + \hat{e}^{(1)}$
We can of course continue this process until a desired level of accuracy is reached.
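As a rough sketch of how these steps translate into code, the following assumes NumPy and SciPy are available; the function name residual_correction, the tolerance, and the iteration cap are illustrative choices, not part of the method itself:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def residual_correction(A, b, tol=1e-12, max_iter=10):
    """Sketch of the residual correction loop for Ax = b."""
    lu, piv = lu_factor(A)               # factor A once and reuse it for every solve
    x = lu_solve((lu, piv), b)           # initial computed solution x^(0)
    for _ in range(max_iter):
        r = b - A @ x                    # residual r^(k) = b - A x^(k)
        e_hat = lu_solve((lu, piv), r)   # approximate error from A e^(k) = r^(k)
        x = x + e_hat                    # corrected approximation x^(k+1) = x^(k) + e-hat^(k)
        if np.linalg.norm(e_hat) <= tol * np.linalg.norm(x):
            break                        # stop once the correction is negligible
    return x
```

Factoring $A$ once and reusing the factors keeps each correction step cheap relative to the initial solve, since every iteration only requires computing a residual and performing back-substitution.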