Since C++11, we have been able to do floating point math at compile time. C++23 and C++26 added `constexpr` to some functions, but not to all.
`constexpr` floating point math is weird in general, because the results aren't perfectly accurate. However, `constexpr` code is supposed to always deliver consistent results. How does C++ approach this issue? How does `constexpr` floating point math work? And why are some functions `constexpr`, but others not (like `std::nearbyint`)?

C++ imposes very few restrictions on the behavior of `float` and other floating-point types. This can lead to inconsistencies in the results, both between compilers and between runtime and compile-time evaluation by the same compiler. Here is the tl;dr on it:
| | At runtime | In constant expressions |
|---|---|---|
| Floating-point errors, like division by zero | UB, but compilers may support silent errors through NaN as an extension | UB in a constant expression results in a compiler error |
| Rounded operations, like `10.0 / 3.0` | Rounding mode controlled through the floating-point environment; results may vary | Rounding is implementation-defined; results can differ from runtime |
| Semantics changes through `-ffast-math` and other compiler optimizations | Results can become less precise or more precise; IEEE-754 conformance is broken | No effect in practice; at most an implementation-defined effect |
| Calls to math functions | Same handling of errors and rounding as arithmetic with `+` and `*` | Some `constexpr` since C++23, some `constexpr` since C++26, with some errors disallowed at compile time |
Some operations can fail, such as division by zero. The C++ standard says:

> If the second operand of / or % is zero the behavior is undefined.

In constant expressions, this is respected, and so it's not possible to produce NaN through such operations or to raise FE_DIVBYZERO at compile time.
No exception is made for floating point numbers. However, when `std::numeric_limits<float>::is_iec559` is `true`, most compilers provide IEEE-754 semantics as an extension. For example, division by zero is then allowed at runtime and produces infinity or NaN, depending on the operands.
C++ has always allowed differences between compile-time results and runtime results. For example, you can evaluate:

```cpp
#include <cassert>

int main() {
    double x = 10.0 / 3.0;           // may be evaluated at runtime
    constexpr double y = 10.0 / 3.0; // evaluated at compile time
    assert(x == y);                  // might fail
}
```

The result might not always be the same, because the floating-point environment can only be changed at runtime, and thus the rounding mode used for `x` can differ from the one used for `y`.
C++'s approach is to make the effect of the floating-point environment implementation-defined. It gives you no portable way to control it (and thus rounding) in constant expressions:

> If the [FENV_ACCESS] pragma is used to enable control over the floating-point environment, this document does not specify the effect on floating-point evaluation in constant expressions.
Firstly, compilers can be eager to optimize your code, even if that changes its meaning. For example, GCC will optimize away this call:

```cpp
#include <cmath>

// No call to sqrt thanks to constant folding.
// This ignores the fact that this is a runtime evaluation, which would normally
// be affected by the floating-point environment.
const float x = std::sqrt(2);
```
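If you want to observe the runtime behavior instead, you have to hide the operand from the optimizer; a `volatile` read is one blunt but portable way to sketch that:

```cpp
#include <cmath>

float folded() {
    return std::sqrt(2.0f);    // typically folded to a constant at compile time
}

float not_folded() {
    volatile float two = 2.0f; // the volatile read cannot be folded away
    return std::sqrt(two);     // an actual runtime square root
}
```

Under the default floating-point environment both functions return the same value; they can only diverge if the environment is changed at runtime.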
The semantics change even more with flags like `-ffast-math`, which allows the compiler to reorder and optimize operations in ways that are not IEEE-754 compliant. For example:

```cpp
#include <iostream>

float big() { return 1e20f; }

int main() {
    std::cout << big() + 3.14f - big();
}
```
For IEEE-754 floating point numbers, addition and subtraction are not associative, so we cannot reorder this into `(big() - big()) + 3.14f`. The result will be `0`, because `3.14f` is too small to change `big()` when added to it, due to the limited precision. However, with `-ffast-math` enabled, the compiler may reorder anyway, and the result can be `3.14f`.
There can be differences between runtime and constant-expression results for all operations, and that includes calls to mathematical functions: `std::sqrt(2)` at compile time might not be the same as `std::sqrt(2)` at runtime. However, this issue is not unique to math functions. You can put these functions into the following categories:
## Functions independent of the floating-point environment (constexpr since C++23) [P0533r9]

Some functions are completely independent of the floating-point environment, or they simply cannot fail, such as:

- `std::ceil` (rounds up to the nearest integer)
- `std::fmax` (maximum of two numbers)
- `std::signbit` (obtains the sign bit of a floating-point number)

Furthermore, there are functions like `std::fma` which just combine two floating point operations. These are no more problematic than `+` and `*` at compile time. The behavior is the same as calling these math functions in C (see C23 standard, Annex F.8.4); however, a call is not a constant expression in C++ if floating-point exceptions other than FE_INEXACT are raised, errno is set, etc. (see [library.c]/3).
## Functions weakly dependent on the floating-point environment (constexpr since C++26) [P1383r0]

Other functions depend on the floating-point environment, such as `std::sqrt` or `std::sin`. However, this dependence is called weak, because it's not explicitly stated; it only exists because floating-point math is inherently imprecise.
It would be arbitrary to allow `+` and `*` at compile time, but not math functions which have the exact same issues.
## Mathematical special functions (not constexpr yet, possibly in the future)

[P1383r0] deemed it too ambitious to add `constexpr` to the mathematical special functions, such as:

- `std::beta`
- `std::riemann_zeta`
## Functions dependent on the rounding mode (not constexpr yet, possibly never)

Some functions, like `std::nearbyint`, are explicitly stated by the standard to use the current rounding mode. This is problematic, because you cannot control the floating-point environment at compile time using standard means. Functions like `std::nearbyint` therefore aren't `constexpr`, and possibly never will be.
In summary, the standard committee and compiler developers face many challenges when dealing with `constexpr` math. It has taken decades of discussion to lift some of the restrictions on `constexpr` math functions, but we are finally here. The restrictions have ranged from arbitrary, in the case of `std::fabs`, to necessary, in the case of `std::nearbyint`.
We are likely to see further restrictions lifted in the future, at least for mathematical special functions.