How do I calculate the min/max decimal numbers that can be represented in IEEE 754 binary16, binary32, and binary64 floating point?
The NORMAL ranges are:

binary16: ±2^-14 (≈ 6.10 × 10^-5) up to ±(2 − 2^-10) × 2^15 = ±65504
binary32: ±2^-126 (≈ 1.18 × 10^-38) up to ±(2 − 2^-23) × 2^127 ≈ ±3.4028235 × 10^38
binary64: ±2^-1022 (≈ 2.23 × 10^-308) up to ±(2 − 2^-52) × 2^1023 ≈ ±1.7976931 × 10^308

In general, for a binary format with p significand bits (counting the implicit leading bit) and exponent bounds emin/emax, the largest finite value is (2 − 2^(1−p)) × 2^emax and the smallest normal value is 2^emin.
If you allow for DENORMALS (subnormals) as well, then the minimum values drop to 2^(emin − p + 1):

binary16: ±2^-24 ≈ ±5.96 × 10^-8
binary32: ±2^-149 ≈ ±1.40 × 10^-45
binary64: ±2^-1074 ≈ ±4.94 × 10^-324
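If you'd rather compute these than memorize them, here is a minimal Python sketch that derives all three limits from the format parameters; the dictionary just spells out the standard IEEE 754 values of p, emin, and emax:

```python
# Derive the normal and subnormal limits of each IEEE 754 binary format
# from its parameters: p = significand precision in bits (including the
# implicit leading bit), emin/emax = exponent bounds.
formats = {
    "binary16": (11, -14, 15),
    "binary32": (24, -126, 127),
    "binary64": (53, -1022, 1023),
}

for name, (p, emin, emax) in formats.items():
    max_normal = (2 - 2 ** (1 - p)) * 2.0 ** emax  # largest finite value
    min_normal = 2.0 ** emin                       # smallest normal value
    min_denormal = 2.0 ** (emin - (p - 1))         # smallest subnormal value
    print(f"{name}: max = {max_normal:.8g}, min normal = {min_normal:.8g}, "
          f"min denormal = {min_denormal:.8g}")
```

For binary64 you can cross-check the results against Python's sys.float_info (its max and min fields, where min is the smallest *normal* value), and numpy.finfo gives the same information for float16 and float32.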
Always keep in mind that just because a number falls within these ranges doesn't mean it can be represented exactly. In any range, floating-point numbers necessarily skip values for cardinality reasons: there are only finitely many bit patterns, but uncountably many reals. The classic example is 1/3, which has no exact representation at any finite precision, in either binary or decimal formats. In general, a binary format can exactly represent only the "dyadic" rationals, i.e., numbers of the form A/2^B for integers A and B, provided the value falls within the dynamic range and A's significant bits fit within the format's precision p.
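As a sketch of that condition (the helper name is_dyadic is my own, and it checks only the dyadic/precision condition, ignoring the exponent range of any particular format):

```python
from fractions import Fraction

def is_dyadic(f: Fraction, p: int = 53) -> bool:
    """Is f exactly representable in a binary format with p significand bits?

    Hypothetical helper: checks the dyadic condition and the precision
    only; it does not check the exponent range (emin/emax).
    """
    # In lowest terms, the denominator must be a power of two.
    if f.denominator & (f.denominator - 1):
        return False
    # The numerator's significant bits (trailing zeros stripped) must fit in p bits.
    n = abs(f.numerator)
    if n == 0:
        return True
    n >>= (n & -n).bit_length() - 1  # remove factors of two
    return n.bit_length() <= p

print(is_dyadic(Fraction(1, 3)))   # False: 3 is not a power of two
print(is_dyadic(Fraction(1, 10)))  # False: which is why 0.1 is inexact in binary
print(is_dyadic(Fraction(3, 8)))   # True: 3/2^3, and 3 fits easily in 53 bits
```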