I know that for unsigned integers, I can replace the modulo operation with a bitmask if the divisor is a power of two. Do any numbers have a similar property for floats? That is, are there any numbers n for which f mod n can be calculated more efficiently than in the general case, not necessarily using a bitmask? Other than, of course, one. Brain failure
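For reference, the integer trick I mean looks like this (8 is just an example divisor):

// For unsigned x and a power-of-two divisor d, x % d == x & (d - 1),
// so the division becomes a single AND.
unsigned mod8(unsigned x) {
    return x & (8u - 1u);
}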
Edit: to clarify, f is any floating-point number (determined at runtime), n is any compile-time constant number in any format, and I expect the result to be a float.
If n == 1.0 or n == -1.0, then you can do:

r = f - trunc(f);

On x86_64, trunc will typically use the ROUNDSD instruction, so this will be pretty fast.
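As a minimal self-contained sketch (fmod1 is just an illustrative name; one function covers both n == 1.0 and n == -1.0, since fmod takes its sign from the dividend):

#include <math.h>

// Remainder of f modulo 1.0 (equivalently, -1.0).
// Matches fmod(f, 1.0): the result keeps the sign of f, e.g. -2.5 -> -0.5.
double fmod1(double f) {
    return f - trunc(f);
}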
If n is a power of 2 with magnitude greater than or equal to 1, and your platform has a native fma function (for Intel, this means Haswell or newer), then you could do:

r = fma(-trunc(f / n), n, f);

Any reasonable compiler should turn the division into a multiplication and fold the negation into the FMA (or into the constant), so this comes out to a multiplication, a truncation, and an FMA.
This can also work for powers of 2 smaller in magnitude than 1, as long as f / n doesn't overflow (which is also why the compiler wouldn't be free to make this substitution on its own).
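Putting that together, a sketch under those assumptions (fmod_pow2 and N are illustrative names of my choosing, with N a compile-time power-of-2 constant of magnitude at least 1):

#include <math.h>

#define N 8.0  // assumed: compile-time constant, power of 2, magnitude >= 1

// Remainder of f modulo N. Build with FMA enabled (e.g. -march=haswell
// on GCC/Clang) so fma maps to a single hardware instruction; the
// compiler should also turn f / N into f * (1.0 / N).
double fmod_pow2(double f) {
    return fma(-trunc(f / N), N, f);
}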
Whether any compilers will actually do this is another matter. Floating-point remainder functions aren't used much and don't get much attention from compiler writers; see e.g. https://bugs.llvm.org/show_bug.cgi?id=3359