# Error Analysis in Solutions to Systems of Linear Equations

Recall that if we are trying to solve the system $Ax = b$ where $A$ is an $n \times n$ matrix, then using a program to solve this system will yield an approximation $\hat{x}$ of the actual solution $x$. We defined the residual to be $r = b - A\hat{x}$ and the error of $\hat{x}$ from $x$ to be $e = x - \hat{x}$. We will now look further into analyzing the errors of our approximate solutions using the vector and matrix norm definitions that we recently saw.

First we will let $\hat{b} = b - r$ be an approximation to $b$. We will want to consider the solution $\hat{x}$ of the following system:

(1)
\begin{align} A\hat{x} = \hat{b} = (b - r) \end{align}

In doing so, we will be able to examine the sensitivity of the system. If we make minor changes to $b$ by using $\hat{b} = b - r$, then we will want to examine what sort of changes we obtain in our approximation $\hat{x}$ of $x$. If small changes in $b$ result in large changes in our approximation $\hat{x}$ of $x$, then the system $Ax = b$ is said to be sensitive. If small changes in $b$ result in only small changes in $\hat{x}$, then the system is said to be resilient.

Suppose that we want to assign a single numerical value to the error between $\hat{x}$ and $x$. Using the infinity norm, let this error be:

(2)
\begin{align} \quad \| x - \hat{x} \|_{\infty} = \max_{1≤i≤n} \mid x_i - \hat{x_i} \mid \end{align}

Note that this norm simply represents the largest error between the corresponding entries of the true solution $x_i$ and the computed approximations $\hat{x_i}$.
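As a small illustration, the residual, the error, and its infinity norm can be computed with NumPy. The $2 \times 2$ system and the perturbed approximation $\hat{x}$ below are hypothetical values chosen just for demonstration:

```python
import numpy as np

# A hypothetical 2x2 system Ax = b with known exact solution x = (1, 1).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 1.0])
b = A @ x

# Pretend a solver returned a slightly perturbed approximation x_hat.
x_hat = np.array([1.001, 0.998])

# Residual r = b - A x_hat and error e = x - x_hat.
r = b - A @ x_hat
e = x - x_hat

# The infinity norm of the error is the largest entrywise discrepancy,
# here max(|1 - 1.001|, |1 - 0.998|), approximately 0.002.
err_inf = np.linalg.norm(e, np.inf)
print(err_inf)
```

Note that `np.linalg.norm(v, np.inf)` computes $\max_i |v_i|$ for a vector, matching equation (2).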

Now the following Theorem gives us an inequality that relates the relative error of $\hat{x}$ to the relative error in $\hat{b}$.

 Theorem 1: If $A$ is an $n \times n$ invertible matrix, then the solutions of both $Ax = b$ and $A \hat{x} = \hat{b}$ satisfy $\frac{\| x - \hat{x} \|_{\infty}}{\| x \|_{\infty}} ≤ \| A \|_{\infty} \| A^{-1} \|_{\infty} \frac{\| b - \hat{b} \|_{\infty}}{\| b \|_{\infty}}$.
• Proof: Let $Ax = b$ and $A \hat{x} = \hat{b}$. First subtract both of the equations to get that:
(3)
\begin{align} \quad Ax - A\hat{x} = b - \hat{b} \\ \quad A(x - \hat{x}) = b - \hat{b} \end{align}
• Now since $A$ is invertible, then we can multiply both sides of the equation above by $A^{-1}$:
(4)
\begin{align} \quad A^{-1}A(x - \hat{x}) = A^{-1}(b - \hat{b}) \\ \quad x - \hat{x} = A^{-1}(b - \hat{b}) \end{align}
• We now take the infinity vector norm of both sides of the equation above:
(5)
\begin{align} \quad \| x - \hat{x} \|_{\infty} = \| A^{-1}(b - \hat{b}) \|_{\infty} \\ \quad \| x - \hat{x} \|_{\infty} ≤ \| A^{-1} \|_{\infty} \| b - \hat{b} \|_{\infty} \end{align}
• We will now multiply the right-hand side of the inequality above by $\frac{\| A \|_{\infty}}{\| A \|_{\infty}}$ and divide both sides by $\| x \|_{\infty}$:
(6)
\begin{align} \quad \frac{\| x - \hat{x} \|_{\infty}}{\| x \|_{\infty}} ≤ \| A \|_{\infty} \| A^{-1} \|_{\infty} \frac{\| b - \hat{b} \|_{\infty}}{\| A \|_{\infty} \| x \|_{\infty}} \end{align}
• Lastly note that $\| A \|_{\infty} \| x \|_{\infty} ≥ \| Ax \|_{\infty} = \| b \|_{\infty}$, and so $\frac{1}{\| A \|_{\infty} \| x \|_{\infty}} ≤ \frac{1}{\| b \|_{\infty}}$. Plugging this into the inequality above, we have that:
(7)
\begin{align} \quad \frac{\| x - \hat{x} \|_{\infty}}{\| x \|_{\infty}} ≤ \| A \|_{\infty} \| A^{-1} \|_{\infty} \frac{\| b - \hat{b} \|_{\infty}}{\| b \|_{\infty}} \quad \blacksquare \end{align}

Theorem 1 also holds if we use a different vector norm (together with its induced matrix norm) instead, since the proof of Theorem 1 relies solely on the general properties of vector and matrix norms.

Another way that we can reformulate the inequality in Theorem 1 is as follows:

(8)
\begin{align} \quad \mathrm{Rel} (\hat{x}) ≤ \| A \|_{\infty} \| A^{-1} \|_{\infty} \mathrm{Rel} (\hat{b}) \end{align}
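As a sanity check, the bound in $(8)$ can be verified numerically. The matrix, right-hand side, and perturbation below are hypothetical values chosen for illustration; the perturbed system $A\hat{x} = \hat{b}$ is solved directly:

```python
import numpy as np

# Hypothetical well-behaved system Ax = b.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# Perturb b slightly and solve the perturbed system A x_hat = b_hat.
b_hat = b + np.array([1e-3, -1e-3])
x_hat = np.linalg.solve(A, b_hat)

# Relative errors Rel(x_hat) and Rel(b_hat) in the infinity norm.
rel_x = np.linalg.norm(x - x_hat, np.inf) / np.linalg.norm(x, np.inf)
rel_b = np.linalg.norm(b - b_hat, np.inf) / np.linalg.norm(b, np.inf)

# The bound from Theorem 1: ||A|| * ||A^{-1}|| * Rel(b_hat).
bound = (np.linalg.norm(A, np.inf)
         * np.linalg.norm(np.linalg.inv(A), np.inf)
         * rel_b)

print(rel_x, bound)  # rel_x should not exceed bound
```

Here `np.linalg.norm(A, np.inf)` returns the induced matrix infinity norm (the maximum absolute row sum), which is the norm used throughout this section.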

Theorem 1 offers insight into the relationship between the relative error of $\hat{x}$ from $x$ and the relative error of $\hat{b}$ from $b$. Note that if $\| A \|_{\infty} \| A^{-1} \|_{\infty}$ is very large, then the relative error of $\hat{x}$ from $x$ may be much larger than the relative error of $\hat{b}$ from $b$. The quantity $\| A \|_{\infty} \| A^{-1} \|_{\infty}$ is very important, and we define it below.

 Definition: If $A$ is an $n \times n$ invertible matrix, then the Condition Number of $A$ is $\mathrm{cond} (A) = \| A \|_{\infty} \| A^{-1} \|_{\infty}$. If $\mathrm{cond} (A)$ is very large, then the system $Ax = b$ is said to be Ill-Conditioned, while if $\mathrm{cond}(A)$ is small (note that $\mathrm{cond}(A) ≥ 1$ always, since $1 = \| I \|_{\infty} = \| AA^{-1} \|_{\infty} ≤ \| A \|_{\infty} \| A^{-1} \|_{\infty}$), then the system $Ax = b$ is said to be Well-Conditioned.

The condition number is sometimes denoted $\kappa (A) = \| A \|_{\infty} \| A^{-1} \|_{\infty}$.
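The condition number is straightforward to compute from the definition. The sketch below contrasts the identity matrix (perfectly conditioned, $\mathrm{cond}(I) = 1$) with the Hilbert matrix, a classic ill-conditioned example; the helper name `cond_inf` and the size $n = 6$ are arbitrary choices for illustration:

```python
import numpy as np

def cond_inf(A):
    """Condition number cond(A) = ||A||_inf * ||A^{-1}||_inf."""
    return (np.linalg.norm(A, np.inf)
            * np.linalg.norm(np.linalg.inv(A), np.inf))

# The identity matrix is perfectly conditioned: cond(I) = 1.
I2 = np.eye(2)
print(cond_inf(I2))  # 1.0

# The 6x6 Hilbert matrix H_ij = 1/(i + j + 1) is badly ill-conditioned:
# tiny changes in b can produce enormous changes in the solution x.
n = 6
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
print(cond_inf(H))  # many orders of magnitude larger than 1
```

NumPy also provides `np.linalg.cond(A, np.inf)`, which computes the same quantity directly.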

Now the following Theorem tells us that if small changes are made to $A$ instead of to $b$, then the relative error between $\hat{x}$ and $x$ can be related to the relative error between $\hat{A}$ and $A$.

 Theorem 2: Let $Ax = b$ and $\hat{A} \hat{x} = b$. If $\| A - \hat{A} \|_{\infty} < \frac{1}{\| A^{-1} \|_{\infty}}$ then $\frac{\| x - \hat{x} \|_{\infty}}{\| x \|_{\infty}} ≤ \frac{\mathrm{cond}(A)}{1 - \mathrm{cond}(A) \frac{\| A - \hat{A} \|_{\infty}}{\| A \|_{\infty}}} \frac{\| A - \hat{A}\|_{\infty}}{\| A \|_{\infty}}$. (The strict inequality in the hypothesis guarantees that the denominator $1 - \mathrm{cond}(A) \frac{\| A - \hat{A} \|_{\infty}}{\| A \|_{\infty}}$ is positive.)
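The bound in Theorem 2 can likewise be checked numerically. The matrix and the perturbation $E = A - \hat{A}$ below are hypothetical, chosen small enough that the hypothesis $\| A - \hat{A} \|_{\infty} < 1 / \| A^{-1} \|_{\infty}$ holds:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# Perturb A by a small matrix E, giving A_hat = A + E.
E = np.array([[1e-3, 0.0],
              [0.0, -1e-3]])
A_hat = A + E

# Check the hypothesis of Theorem 2: ||A - A_hat|| < 1 / ||A^{-1}||.
inv_norm = np.linalg.norm(np.linalg.inv(A), np.inf)
hypothesis = np.linalg.norm(E, np.inf) < 1.0 / inv_norm

# Solve the perturbed system A_hat x_hat = b.
x_hat = np.linalg.solve(A_hat, b)

# Evaluate both sides of the inequality in Theorem 2.
cond = np.linalg.norm(A, np.inf) * inv_norm
rel_A = np.linalg.norm(E, np.inf) / np.linalg.norm(A, np.inf)
rel_x = np.linalg.norm(x - x_hat, np.inf) / np.linalg.norm(x, np.inf)
bound = cond / (1.0 - cond * rel_A) * rel_A

print(hypothesis, rel_x, bound)  # rel_x should not exceed bound
```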