I'm having trouble understanding C's rules for what precision to assume when printing doubles, or when converting strings to doubles. The following program should illustrate my point:
#include <errno.h>
#include <float.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv) {
    double x, y;
    const char *s = "1e-310";

    /* Should print zero */
    x = DBL_MIN / 100.;
    printf("DBL_MIN = %e, x = %e\n", DBL_MIN, x);

    /* Trying to read in a floating point number smaller than DBL_MIN gives an error */
    errno = 0;   /* strtod reports range errors via errno but never clears it */
    y = strtod(s, NULL);
    if (errno != 0)
        printf(" Error converting '%s': %s\n", s, strerror(errno));
    printf("y = %e\n", y);

    return 0;
}
The output I get when I compile and run this program (on a Core 2 Duo with gcc 4.5.2) is:
DBL_MIN = 2.225074e-308, x = 2.225074e-310
Error converting '1e-310': Numerical result out of range
y = 1.000000e-310
My questions are:

1. Why is x, which is smaller than DBL_MIN, printed with a sensible value instead of zero?
2. Why does strtod set errno to ERANGE for "1e-310" even though the value it returns in y looks correct?
3. Is this a bug, or is it allowed behaviour?
Thanks for any help you can give. I will try to clarify the issue as I get feedback.
Because of the existence of denormal (subnormal) numbers in the IEEE-754 standard. DBL_MIN
is only the smallest normalised positive value; values below it can still be represented
as denormals, at the cost of precision.
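As a quick illustration (a minimal sketch, assuming the usual IEEE-754 double format), fpclassify from <math.h> reports such a value as FP_SUBNORMAL:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = DBL_MIN / 100.;   /* below the smallest normalised positive double */

    printf("x = %e\n", x);
    printf("subnormal? %s\n", fpclassify(x) == FP_SUBNORMAL ? "yes" : "no");

    /* Assuming IEEE-754 doubles, the smallest positive subnormal is
       DBL_MIN * DBL_EPSILON (about 4.94e-324). */
    printf("smallest positive subnormal: %e\n", DBL_MIN * DBL_EPSILON);
    return 0;
}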
Because the standard says so (C99 7.20.1.3):
If the result underflows (7.12.1), the functions return a value whose magnitude is no greater than the smallest normalized positive number in the return type; whether errno acquires the value ERANGE is implementation-defined.
Returning the "correct" value (i.e. 1e-310) obeys the above constraint.
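In practice, the portable idiom is to clear errno immediately before calling strtod and to inspect both errno and the end pointer afterwards. Here is a minimal sketch built around the same "1e-310" string from the question:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const char *s = "1e-310";
    char *end;

    errno = 0;                      /* strtod sets errno but never clears it */
    double y = strtod(s, &end);

    if (end == s) {
        printf("'%s' is not a number\n", s);
    } else if (errno == ERANGE) {
        /* Underflow (or overflow): the returned value is still usable */
        printf("'%s' is out of range: %s, strtod returned %e\n",
               s, strerror(errno), y);
    } else {
        printf("converted '%s' to %e\n", s, y);
    }
    return 0;
}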
So not a bug. This is technically platform-dependent, because the C standard itself places no requirements on the existence or behaviour of denormal numbers; only implementations that define __STDC_IEC_559__ (Annex F) promise full IEEE-754 behaviour, subnormals included.
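If you want to know what a particular platform promises, one approach (just a sketch) is to check whether the implementation defines __STDC_IEC_559__ and then probe at run time whether a value below DBL_MIN survives or is flushed to zero:

#include <float.h>
#include <stdio.h>

int main(void) {
#ifdef __STDC_IEC_559__
    puts("Implementation claims IEC 60559 (IEEE-754) conformance");
#else
    puts("__STDC_IEC_559__ not defined: denormal support is not guaranteed");
#endif

    /* Run-time probe: with subnormals, DBL_MIN / 2 stays nonzero;
       under flush-to-zero it collapses to 0. 'volatile' discourages
       the compiler from folding the division away at compile time. */
    volatile double half_min = DBL_MIN / 2.0;
    printf("DBL_MIN / 2 = %e (%s)\n", half_min,
           half_min > 0.0 ? "subnormals supported" : "flushed to zero");
    return 0;
}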