Range and Accuracy
Range is defined as the difference between the largest and smallest numbers that can be represented in a given format.
- The largest (positive) number has $0$ as its sign bit, while both the exponent and the significand are populated with just $1$s.
- The smallest positive number has $0$ as its sign bit, the exponent is populated with all $0$s, and the significand is all $0$s except for its leftmost bit, which must be $1$ (since the number is in normalized form).
- The smallest negative number has $1$ as its sign bit, while both the exponent and the significand are populated with just $1$s. (It's literally the negative of the largest positive number; see the sketch after this list.)
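To make these three bit patterns concrete, here's a minimal sketch in Python. It assumes a hypothetical toy format, not IEEE 754: $E$ exponent bits stored in excess-$2^{E-1}$ notation, an $F$-bit significand read as the fraction $0.b_1b_2\dots b_F$, and no bit patterns reserved for special values. The widths `E` and `F` and the `decode` helper are made up purely for illustration.

```python
# A minimal sketch of a toy floating-point format (NOT IEEE 754):
# E exponent bits in excess-2^(E-1) notation, an F-bit significand
# read as the fraction 0.b1b2...bF, and no reserved bit patterns.

def decode(sign: int, exponent_bits: str, significand_bits: str) -> float:
    """Interpret a (sign, exponent, significand) triple of bit strings as a value."""
    bias = 2 ** (len(exponent_bits) - 1)                # assumed excess/bias notation
    exponent = int(exponent_bits, 2) - bias
    significand = int(significand_bits, 2) / 2 ** len(significand_bits)  # 0.b1b2...bF
    return (-1) ** sign * significand * 2 ** exponent

E, F = 8, 23  # hypothetical field widths, chosen only for this example

print("largest positive  :", decode(0, "1" * E, "1" * F))              # all 1s
print("smallest positive :", decode(0, "0" * E, "1" + "0" * (F - 1)))  # normalized 0.100...0
print("smallest negative :", decode(1, "1" * E, "1" * F))              # negative of the largest
```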
Accuracy tells how close the number a computer actually stores is to the true number it's meant to represent.
Unfortunately, because computers can store only a limited amount of data (our disks don't have infinite capacity), we can't represent each and every number out there in the universe. For example, you can't store the number $\pi \approx 3.14159265\dots$ exactly, since it has an infinite number of digits after the decimal point.
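As a quick illustration of that last point (assuming Python, whose built-in `float` is an IEEE 754 double): converting `math.pi` to a `Decimal` exposes the exact, finite value the machine actually stores, which agrees with the true $\pi$ only for roughly the first 15–16 significant digits.

```python
import math
from decimal import Decimal

# math.pi is not pi: it's the closest value the double-precision format can hold.
stored = Decimal(math.pi)  # the exact value sitting in memory, written out in decimal
print("stored :", stored)
print("true   : 3.14159265358979323846...  (and the digits never end)")
```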