In Java, I need to compute (int)Math.exp(x) in a platform-independent way. To achieve platform independence, I have to use StrictMath instead: (int)StrictMath.exp(x). Unfortunately, my measurements have shown that StrictMath.exp is significantly slower than Math.exp.
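A crude timing loop along the following lines (not a rigorous benchmark; a harness like JMH would be more reliable, and the class name, loop count, and inputs here are arbitrary) is the kind of measurement meant:

```java
public class ExpTiming {
    // Sum the results so the JIT cannot eliminate the calls as dead code.
    static double timeMath(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += Math.exp(x);
        return sum;
    }

    static double timeStrict(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += StrictMath.exp(x);
        return sum;
    }

    public static void main(String[] args) {
        double[] xs = new double[1_000_000];
        for (int i = 0; i < xs.length; i++) xs[i] = (i % 20) * 0.5; // inputs in [0, 9.5]

        long t0 = System.nanoTime();
        double s1 = timeMath(xs);
        long t1 = System.nanoTime();
        double s2 = timeStrict(xs);
        long t2 = System.nanoTime();

        System.out.printf("Math.exp:       %d ms%n", (t1 - t0) / 1_000_000);
        System.out.printf("StrictMath.exp: %d ms%n", (t2 - t1) / 1_000_000);
        System.out.println(s1 + " " + s2); // keep results live
    }
}
```

Such a loop only gives a rough indication (no warmup, single run), but it is enough to see a consistent gap on platforms where Math.exp is intrinsified.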
Therefore, I had the idea to compute y = Math.exp(x) first. The documentation of Math.exp states that

The computed result must be within 1 ulp of the exact result. Results must be semi-monotonic.

So, if (int)(Math.nextDown(y)) == (int)(Math.nextUp(y)), I can use (int)y as the result. Only in the very rare case that (int)(Math.nextDown(y)) != (int)(Math.nextUp(y)) (the result is expected to be in the int range) do I need to additionally evaluate (int)StrictMath.exp(x).
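A sketch of this strategy (the class and method names are hypothetical; the fallback treats StrictMath.exp as the authoritative answer) could look like:

```java
public class PortableExp {
    /**
     * Computes (int) StrictMath.exp(x), taking the faster Math.exp
     * on the common path. This is only a sketch of the strategy
     * described above; it relies on StrictMath.exp also being within
     * 1 ulp of the exact result, which is exactly the guarantee in
     * question.
     */
    static int expToInt(double x) {
        double y = Math.exp(x);
        // If one ulp down and one ulp up truncate to the same int,
        // then every value within 1 ulp of y truncates to that int.
        if ((int) Math.nextDown(y) == (int) Math.nextUp(y)) {
            return (int) y;             // fast path
        }
        return (int) StrictMath.exp(x); // rare slow path near an integer boundary
    }
}
```

Note that the fast path triggers for the vast majority of inputs, because the window in which y sits within 1 ulp of an integer boundary is tiny.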
This strategy would be correct if StrictMath.exp had the same error guarantee as Math.exp.
Unfortunately, a corresponding statement is missing from the documentation of StrictMath.exp. The javadoc of StrictMath just says
To help ensure portability of Java programs, the definitions of some of the numeric functions in this package require that they produce the same results as certain published algorithms. These algorithms are available from the well-known network library netlib as the package "Freely Distributable Math Library," fdlibm. These algorithms, which are written in the C programming language, are then to be understood as executed with all floating-point operations following the rules of Java floating-point arithmetic.
The Java math library is defined with respect to fdlibm version 5.3. Where fdlibm provides more than one definition for a function (such as acos), use the "IEEE 754 core function" version (residing in a file whose name begins with the letter e). The methods which require fdlibm semantics are sin, cos, tan, asin, acos, atan, exp, log, log10, cbrt, atan2, pow, sinh, cosh, tanh, hypot, expm1, and log1p.
Furthermore, the documentation of the exp function in the fdlibm library says that
Accuracy: according to an error analysis, the error is always less than 1 ulp (unit in the last place).
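For a concrete sense of what "within 1 ulp" means: Math.ulp gives the spacing between adjacent doubles at a given magnitude, and Math.nextUp steps exactly one representable value upward. A small illustration (values chosen arbitrarily):

```java
public class UlpDemo {
    public static void main(String[] args) {
        double y = 2.0;
        // For a positive finite double, nextUp(y) is exactly y + ulp(y).
        System.out.println(Math.ulp(y));                       // spacing of doubles near 2.0
        System.out.println(Math.nextUp(y) == y + Math.ulp(y)); // true
        // "Within 1 ulp of the exact result" means the returned double
        // lies within one such spacing of the mathematically exact value.
    }
}
```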
Does the combination of all this information really imply that StrictMath.exp has the same error guarantee as Math.exp? Only a 100% guarantee would allow me to do the optimization described above.
Does the combination of all this information really imply that StrictMath.exp has the same error guarantee as Math.exp?
On the face of it1, the error guarantee is the same; i.e. it is less than 1 ulp. But that isn't the same as saying that the error is the same!
The point of using StrictMath was not to guarantee that the error is the absolute minimum possible. Rather, the point was to guarantee reproducible results, independent of programming language, hardware implementation, and so on.
And the flip side is that the Math methods are (still2) not specified to produce bit-for-bit identical results to their StrictMath equivalents. As the javadoc in Java 17 states:
"Unlike some of the numeric methods of class StrictMath, all implementations of the equivalent functions of class Math are not defined to return the bit-for-bit same results. This relaxation permits better-performing implementations where strict reproducibility is not required."
However, I have not yet found anything that states the equivalence of Math and StrictMath.
I don't think they are guaranteed to be equivalent: not in any version of Java. Indeed, that would contradict the above javadoc quote.
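You can at least observe whether the two agree bit-for-bit on a given JVM. (They often do in practice: in OpenJDK the fallback Java implementation of Math.exp simply delegates to StrictMath.exp, but HotSpot usually substitutes a faster intrinsic, and nothing in the specification requires the intrinsic to match.) A hedged check, with arbitrary sample inputs:

```java
public class BitCompare {
    public static void main(String[] args) {
        for (double x : new double[] {0.5, 1.0, 2.0, 10.0, -3.25}) {
            double m = Math.exp(x);
            double s = StrictMath.exp(x);
            // Compare bit patterns rather than using ==, which would
            // also gloss over -0.0 vs 0.0 (not an issue for exp, but
            // the honest way to test bit-for-bit identity).
            boolean same = Double.doubleToLongBits(m) == Double.doubleToLongBits(s);
            System.out.printf("exp(%s): identical bits = %b%n", x, same);
        }
        // Even when the bits differ, each result is within 1 ulp of the
        // exact value, so the two can differ from each other by at most
        // a couple of ulps.
    }
}
```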
1 - This is based on the text that you quoted in your question.
2 - While all floating-point arithmetic operations in Java 17 and later have strictfp semantics (see JEP 306), this doesn't automatically extend to the Math methods.