# Accuracy of Floating Point Representations of Numbers

We will now look at two ways to measure the accuracy of a floating point representation of a number.

## The Machine Epsilon

Definition: The Machine Epsilon of a floating point format is the difference between the unit $1$ and the next larger number that can be stored in that format.

Recall from the Storage of Numbers in IEEE Single-Precision Floating Point Format page that each floating point binary number in the IEEE Single-Precision Floating Point Format is stored in 32 bits, where the fractional digits of a number $x = 1.a_1a_2...a_{22}a_{23}$ are stored in the bits $b_{10}b_{11}...b_{31}b_{32}$. Therefore, $1$ can be represented as $1.00000000000000000000000$ ($23$ zeroes succeeding the $1$), and the next larger number that can be stored in this format has a $1$ in bit $b_{32}$, that is, $1.00000000000000000000001$. Thus we have that the difference can be calculated as:

$$1.00000000000000000000001_2 - 1.00000000000000000000000_2 = 0.00000000000000000000001_2 = 2^{-23}$$

Therefore we have verified the following proposition.
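This gap can be checked directly on a computer. The following is a minimal sketch using NumPy's `float32` type (which implements the IEEE Single-Precision Floating Point Format); `np.nextafter` returns the next representable number after its first argument in the direction of its second:

```python
import numpy as np

# The smallest float32 strictly greater than 1.0.
one = np.float32(1.0)
next_up = np.nextafter(one, np.float32(2.0))

# The gap between 1 and its successor is exactly 2^(-23).
gap = float(next_up) - 1.0
print(gap == 2.0 ** -23)  # True
```

The conversion to Python's `float` (a double) before subtracting is exact here, since every single-precision value is also representable in double precision.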

Proposition 1: The machine epsilon of the IEEE Single-Precision Floating Point Format is $2^{-23}$, that is, the next larger number that can be stored in this format is $2^{-23}$ larger than $1$.

While computers utilize binary exceptionally well, it is often not practical to report a binary number as the result of a calculation. Thus, we note that $2^{-23} \approx 1.19 \cdot 10^{-7} = 0.000000119$, and so the machine epsilon of the IEEE Single-Precision Floating Point Format is approximately $0.000000119$. Note that if the number $x$ contains $7$ significant digits or fewer, then $x$ is represented with essentially full accuracy. However, for numbers containing more than $7$ digits, we begin to lose accuracy - albeit not a significant amount for larger calculations, but on the scale of small calculations this can be problematic, and we would instead utilize more accurate formats.
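The decimal approximation and the resulting loss of accuracy can be illustrated with NumPy's `float32`. As a sketch: `np.finfo` reports the machine epsilon of a format, and an increment smaller than half the epsilon is rounded away entirely:

```python
import numpy as np

# NumPy reports the machine epsilon of single precision directly.
print(np.finfo(np.float32).eps)  # 1.1920929e-07, i.e. 2^(-23)

# An increment below 2^(-24) (half the epsilon) is lost when added to 1:
x = np.float32(1.0) + np.float32(1e-8)
print(x == np.float32(1.0))  # True: the 8th decimal digit cannot be kept
```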

## Upper Bounds on Storing Floating Point Numbers

Another way to measure the accuracy of storing floating point numbers is to find an upper bound up to which integers can be stored exactly, that is, to find an integer $M$ such that every integer $x$ with $0 \leq x \leq M$ can be stored precisely in the format.

Note that for the IEEE Single-Precision Floating Point Format, the significand $\bar{x}$ of any integer $x$ contains $24$ binary digits, since the significand is represented as $\bar{x} = 1.a_1a_2...a_{22}a_{23}$. The largest such integer therefore consists of $24$ $1$'s in base $2$, that is:

$$\underbrace{11 \ldots 1}_{24 \textrm{ ones}} {}_2 = 2^{24} - 1 = 16777215$$

Note that once again, all integers with $7$ or fewer digits will store precisely, though if $x$ contains more than $7$ digits then we are not guaranteed full accuracy.
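As a quick sanity check of this bound, the sketch below (again using NumPy's `float32`) shows that integers up to $2^{24}$ round-trip exactly through single precision, while $2^{24} + 1$ does not:

```python
import numpy as np

M = 2 ** 24  # 16777216

# Integers up to 2^24 fit in the 24-bit significand and store exactly.
print(int(np.float32(M - 1)) == M - 1)  # True
print(int(np.float32(M)) == M)          # True (2^24 needs only one significant bit)

# 2^24 + 1 requires 25 significant bits, so it rounds to the nearest float32.
print(int(np.float32(M + 1)) == M + 1)  # False: it is stored as 16777216.0
```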