With Google's newfound inability to do math correctly (check it! according to Google, 500,000,000,000,002 - 500,000,000,000,001 = 0), I figured I'd try the following in C to test a little theory.
#include <stdio.h>
#include <stdlib.h>

int main()
{
    /* atof() returns a double; storing it in a float throws away precision */
    char* a = "399999999999999";
    char* b = "399999999999998";
    float da = atof(a);
    float db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);

    a = "500000000000002";
    b = "500000000000001";
    da = atof(a);
    db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);

    return 0;
}
When you run this program, you get the following output:
399999999999999 - 399999999999998 = 0.000000
500000000000002 - 500000000000001 = 0.000000
It would seem Google is using simple 32-bit floating-point precision (hence the error here); if you swap float for double in the code above, the problem goes away! Could this be it?
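For reference, here's a minimal sketch of the double version (same idea as above, just keeping the values in doubles). On a typical IEEE 754 system it should print 1.000000, since these 15-digit integers fit comfortably within a double's 53-bit mantissa.

#include <stdio.h>
#include <stdlib.h>

int main()
{
    /* same inputs as before, but kept in doubles this time */
    char* a = "500000000000002";
    char* b = "500000000000001";
    double da = atof(a);
    double db = atof(b);

    /* should print: 500000000000002 - 500000000000001 = 1.000000 */
    printf("%s - %s = %f\n", a, b, da - db);
    return 0;
}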
In C#, try double.MaxValue == (double.MaxValue - 100); you'll get true, but that's what it's supposed to be.
Thinking about it, you have 64 bits representing a number far greater than 2^64 (double.MaxValue is roughly 1.8 x 10^308), and only 53 of those bits form the mantissa, so inaccuracy is expected: near double.MaxValue the gap between adjacent representable values is astronomically larger than 100.
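You can see the same effect in C (a rough equivalent of the C# one-liner, using DBL_MAX from <float.h>): at that magnitude the spacing between adjacent doubles is about 2^971, so subtracting 100 rounds right back to DBL_MAX.

#include <stdio.h>
#include <float.h>
#include <math.h>

int main()
{
    /* 100 is far below the rounding granularity at this magnitude, so this prints 1 (true) */
    printf("DBL_MAX == DBL_MAX - 100  ->  %d\n", DBL_MAX == DBL_MAX - 100.0);

    /* gap between DBL_MAX and the next smaller representable double (~2e292) */
    printf("gap at DBL_MAX: %g\n", DBL_MAX - nextafter(DBL_MAX, 0.0));
    return 0;
}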