I am trying to determine the double machine epsilon in Java, using the definition of it being the smallest representable double value x such that 1.0 + x != 1.0, just as in C/C++. According to Wikipedia, this machine epsilon is equal to 2^-52 (with 52 being the number of stored double mantissa bits, i.e. the 53-bit significand minus the implicit leading bit).
My implementation uses the Math.ulp() function:
double eps = Math.ulp(1.0);
System.out.println("eps = " + eps);
System.out.println("eps == 2^-52? " + (eps == Math.pow(2, -52)));
and the results are what I expected:
eps = 2.220446049250313E-16
eps == 2^-52? true
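As a side check, the bit patterns show why this gap is exactly 2^-52: the successor of 1.0 differs from 1.0 only in the least significant of the 52 stored fraction bits. A quick sketch using Double.doubleToLongBits (not part of the measurement above):

// Bit layout of a double: sign (1 bit), exponent (11 bits), fraction (52 bits).
// 1.0 and its successor differ only in the lowest fraction bit, so the gap is 2^-52.
long one  = Double.doubleToLongBits(1.0);
long next = Double.doubleToLongBits(1.0 + Math.ulp(1.0));
System.out.println(Long.toBinaryString(one));            // exponent bits 0x3FF, fraction all zeros
System.out.println(Long.toBinaryString(next));           // same exponent, fraction = 1
System.out.println("adjacent? " + (next - one == 1L));   // true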
So far, so good. However, if I check whether the given eps is indeed the smallest x such that 1.0 + x != 1.0, there seems to be a smaller one, namely the previous double value according to Math.nextAfter():
double epsPred = Math.nextAfter(eps, Double.NEGATIVE_INFINITY);
System.out.println("epsPred = " + epsPred);
System.out.println("epsPred < eps? " + (epsPred < eps));
System.out.println("1.0 + epsPred == 1.0? " + (1.0 + epsPred == 1.0));
Which yields:
epsPred = 2.2204460492503128E-16
epsPred < eps? true
1.0 + epsPred == 1.0? false
As we can see, there is a value smaller than the machine epsilon which, when added to 1.0, does not yield 1.0, in contradiction to the definition.
So what is wrong with the commonly accepted value for machine epsilon according to this definition? Or did I miss something? I suspect another esoteric aspect of floating-point maths, but I can't see where I went wrong...
EDIT: Thanks to the commenters, I finally got it. I actually used the wrong definition! eps = Math.ulp(1.0) computes the distance from 1.0 to the smallest representable double > 1.0, but -- and that's the point -- that eps is not the smallest x with 1.0 + x != 1.0, but rather about twice that value: the sum 1.0 + Math.nextUp(eps / 2) is already rounded up to 1.0 + eps.
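A minimal sketch of that rounding behaviour (my own check, assuming the round-to-nearest-even mode that Java always uses): anything up to and including eps/2 rounds back to 1.0, while the very next double above eps/2 already rounds up to the successor of 1.0.

double eps  = Math.ulp(1.0);   // gap between 1.0 and its successor, 2^-52
double half = eps / 2;         // exactly representable: 2^-53

// 1.0 + eps/2 is exactly the midpoint between 1.0 and 1.0 + eps;
// round-to-nearest-even resolves the tie towards 1.0 (its last bit is even).
System.out.println("1.0 + eps/2 == 1.0? " + (1.0 + half == 1.0));   // true

// The next double above eps/2 lies past the midpoint, so the sum rounds up
// to the successor of 1.0, which is exactly 1.0 + eps.
double justAbove = Math.nextUp(half);
System.out.println("1.0 + nextUp(eps/2) == 1.0 + eps? " + (1.0 + justAbove == 1.0 + eps));   // true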
"using the definition of it being the smallest representable double value x such that 1.0 + x != 1.0, just as in C/C++"
This has never been the definition, not in Java and not in C and not in C++.
The definition is that the machine epsilon is the distance between one and the smallest float/double larger than one.
Your “definition” is wrong by a factor of nearly 2.
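A quick check of that distinction (my own sketch; Math.nextDown requires Java 8 or later): the machine epsilon is the gap between 1.0 and its successor, while the smallest x with 1.0 + x != 1.0 is only just above half of it.

double eps = Math.ulp(1.0);                        // distance from 1.0 to the next larger double
System.out.println(eps == Math.nextUp(1.0) - 1.0); // true: the same quantity

// The smallest x with 1.0 + x != 1.0 is the first double strictly above eps/2,
// i.e. about half of the machine epsilon.
double smallest = Math.nextUp(eps / 2);
System.out.println(1.0 + smallest != 1.0);                 // true
System.out.println(1.0 + Math.nextDown(smallest) != 1.0);  // false: one step lower no longer works
System.out.println(smallest / eps);                        // 0.5000000000000001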
Also, the absence of strictfp only allows a larger exponent range and should not have any impact on the empirical measurement of epsilon, since that is computed from 1.0 and its successor, both of which, as well as their difference, can be represented within the standard exponent range.
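A minimal sketch of such an empirical measurement (the halving loop below is my own illustration, not code from the question): it probes only powers of two, lands exactly on Math.ulp(1.0), and gives the same result with or without strictfp, since every intermediate value is a normal double far from the exponent limits.

// Classic halving loop: find the smallest power of two x with 1.0 + x != 1.0.
double x = 1.0;
while (1.0 + x / 2 != 1.0) {
    x /= 2;
}
System.out.println("x = " + x);                                       // 2.220446049250313E-16
System.out.println("x == Math.ulp(1.0)? " + (x == Math.ulp(1.0)));    // true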