For specifics, I am talking about the x87 PC architecture and the C compiler. I am writing my own interpreter, and the reasoning behind the double datatype confuses me, especially where efficiency is concerned. Could someone explain WHY C settled on a 64-bit double and not the hardware-native 80-bit double? And why did the hardware settle on an 80-bit double, since that is not aligned? What are the performance implications of each? I would like to use an 80-bit double as my default numeric type, but the choices of the compiler developers make me concerned that this is not the best choice.
An 8-byte double on x86 is only 2 bytes shorter than the 10-byte long double, so why doesn't the compiler use the 10-byte long double by default? Put another way: how do long double and double compare, and what does long double actually look like on typical x86/x64 PC hardware?
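To see what a given compiler and target actually provide, here is a minimal sketch assuming a hosted C99 compiler and the standard <float.h> macros; the output differs between, for example, MSVC (where long double has the same 64-bit representation as double) and GCC on 32-bit x86 (where long double is the 80-bit extended type).

    /* Minimal sketch: query what this compiler/target provides for double
       and long double. Assumes a hosted C99 implementation; all macros
       come from the standard <float.h>. */
    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        printf("sizeof(double)      = %zu\n", sizeof(double));
        printf("sizeof(long double) = %zu\n", sizeof(long double));
        printf("DBL_MANT_DIG  = %d (53 for IEEE 754 binary64)\n", DBL_MANT_DIG);
        printf("LDBL_MANT_DIG = %d (64 for the x87 80-bit format)\n", LDBL_MANT_DIG);
        /* FLT_EVAL_METHOD == 2 means all arithmetic is evaluated in
           long double precision, the classic x87 behaviour. */
        printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
        return 0;
    }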
The answer, according to Mysticial, is that Microsoft uses SSE2 for its double data type. The x87 floating-point unit (FPU) is seen as outdated and slow compared to modern CPU extensions, and SSE2 does not support an 80-bit format, hence the compiler's choice of 64-bit precision.
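The precision difference is observable. Below is a small illustration (not tied to any particular compiler; the result depends on compiler, target, and optimization flags): 1e16 + 1 cannot be represented in a 53-bit double mantissa, but it fits in the x87's 64-bit mantissa, so the intermediate rounding differs.

    /* Sketch of the observable difference between 80-bit x87 intermediates
       and strict 64-bit SSE2 arithmetic. Treat the output as illustrative;
       it depends on compiler, target and optimization flags. */
    #include <stdio.h>

    int main(void)
    {
        double a = 1e16;   /* exactly representable as a 64-bit double */
        double b = 1.0;
        /* (a + b) is rounded back to 1e16 in 64-bit arithmetic, but
           1e16 + 1 is exact in the x87's 80-bit registers. */
        double r = a + b - a;
        printf("(1e16 + 1) - 1e16 = %g\n", r);  /* typically 0 with SSE2, may be 1 via x87 */
        return 0;
    }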
On the 32-bit x86 architecture, since not all CPUs support SSE2, Microsoft still targets the x87 floating-point unit (FPU) unless the compiler switch /arch:SSE2 is given, which in turn makes the code incompatible with those older CPUs.
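Whether a particular build actually routes double math through SSE2 can be checked with compiler-specific predefined macros; the sketch below assumes MSVC's _M_IX86_FP (documented as 2 under /arch:SSE2 on 32-bit x86) and GCC/Clang's __SSE2_MATH__ (defined when double math goes through SSE2), so verify the macros against your compiler's documentation.

    /* Sketch: report which floating-point code generation appears to be
       in effect. All macros used here are compiler-specific predefined
       macros, not part of standard C. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(_M_IX86_FP) && _M_IX86_FP >= 2
        puts("MSVC 32-bit: built with /arch:SSE2, doubles use SSE2");
    #elif defined(__SSE2_MATH__)
        puts("GCC/Clang: double math routed through SSE2");
    #elif defined(_M_X64) || defined(__x86_64__)
        puts("64-bit x86: SSE2 is part of the baseline, doubles use SSE2");
    #else
        puts("Probably x87 FPU code generation for double");
    #endif
        return 0;
    }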