Can someone please explain why:
double d = 1.0e+300;
printf("%d\n", d == 1.0e+300);
Prints "1" as expected on a 64-bit machine, but "0" on a 32-bit machine? (I got this using GCC 6.3 on Fedora 25)
To the best of my knowledge, floating-point literals are of type double,
so no type conversion should be happening.
Update: This only occurs when using the -std=c99
flag.
The C standard allows a floating-point constant to be silently evaluated at long double
precision in some expressions (note: precision, not type). The corresponding macro is FLT_EVAL_METHOD
, defined in <float.h>
since C99.
Per C11 (N1570), §5.2.4.2.2, the semantics of value 2
are:
evaluate all operations and constants to the range and precision of the
long double
type.
From the technical viewpoint, on the x86 architecture (32-bit) GCC compiles the given code into x87 FPU instructions using 80-bit stack registers, while for the x86-64 architecture (64-bit) it prefers the SSE unit (scalars within XMM registers).
The current implementation was introduced in GCC 4.5 along with -fexcess-precision=standard
option. From the GCC 4.5 release notes:
GCC now supports handling floating-point excess precision arising from use of the x87 floating-point unit in a way that conforms to ISO C99. This is enabled with
-fexcess-precision=standard
and with standards conformance options such as -std=c99
, and may be disabled using -fexcess-precision=fast
.