I have been playing with the following code snippet to understand constexpr.
#include <stdlib.h>

///////////////////
bool runtimeIsPalindrome(const char* s, int len)
{
    if(len < 2)
        return true;
    else
        return s[0] == s[len-1] && runtimeIsPalindrome(&s[1], len-2);
}

///////////////////
constexpr bool compileTimeIsPalindrome(const char* s, int len)
{
    return len < 2 ? true : s[0] == s[len-1] && compileTimeIsPalindrome(&s[1], len-2);
}

///////////////////
int main()
{
    constexpr char c[] = "helloworlddlrowolleh";
    for(size_t nn = 0; nn < 1e8; ++nn) {
        // static_assert(compileTimeIsPalindrome(c, sizeof(c)-1), "Blah");
        // compileTimeIsPalindrome(c, sizeof(c)-1);
        // runtimeIsPalindrome(c, sizeof(c)-1);
    }
}
With the runtimeIsPalindrome version ...
clear; g++ -std=c++11 palindrome.cpp; time ./a.out
real 0m8.333s
user 0m8.322s
sys 0m0.005s
With the compileTimeIsPalindrome version ...
clear; g++ -std=c++11 palindrome.cpp; time ./a.out
real 0m8.257s
user 0m8.247s
sys 0m0.004s
... but with the static_assert(compileTimeIsPalindrome(...)) version I actually appear to observe some compile-time magic ...
clear; g++ -std=c++11 palindrome.cpp; time ./a.out
real 0m0.265s
user 0m0.263s
sys 0m0.001s
Why does compile-time evaluation only kick in when I use the static_assert in this example?
Note: Profiling with any optimisation enabled seems pointless for this example, as the compiler appears to spot that the result is constant irrespective of which function is called in the loop, giving timings similar to the fastest profile above.
constexpr doesn't guarantee compile-time evaluation, unless it is used in a static_assert, a template argument, or any other place where the value has to be known at compile time by the language rules.
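For instance (a sketch reusing compileTimeIsPalindrome and the array c from your question), initialising a constexpr variable is another context that forces compile-time evaluation, whereas a bare call like the one in your loop is not:

constexpr bool compileTimeIsPalindrome(const char* s, int len)
{
    return len < 2 ? true : s[0] == s[len-1] && compileTimeIsPalindrome(&s[1], len-2);
}

int main()
{
    constexpr char c[] = "helloworlddlrowolleh";

    constexpr bool ok = compileTimeIsPalindrome(c, sizeof(c)-1);                 // must be a constant expression: evaluated at compile time
    static_assert(compileTimeIsPalindrome(c, sizeof(c)-1), "not a palindrome");  // likewise forced to compile time
    bool maybe = compileTimeIsPalindrome(c, sizeof(c)-1);                        // plain call: the compiler may evaluate this at run time

    return ok && maybe ? 0 : 1;
}

That is why your loop with the bare call behaves like the runtime version: nothing in that context requires a constant expression, so the compiler is free to emit an ordinary call and execute it a hundred million times.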
The Fibonacci series f(n) = f(n - 1) + f(n - 2), f(0) = f(1) = 1 is a great example of that. On my machine with gcc, for n <= 10 this gets evaluated at compile time. For larger arguments, the compiler is allowed to decide (and indeed does decide) that the call is too computationally intensive, and it falls back to runtime evaluation.
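A minimal sketch of that behaviour (my own illustration; the exact cut-off is a gcc implementation detail, not something the standard promises):

constexpr unsigned long long fib(unsigned n)
{
    return n < 2 ? 1 : fib(n - 1) + fib(n - 2);
}

int main()
{
    constexpr auto a = fib(10);                    // constant expression: must be evaluated at compile time
    static_assert(fib(10) == 89, "fib is broken"); // likewise compile time
    auto b = fib(40);                              // ordinary initialisation: may be evaluated at run time
    return a == b ? 1 : 0;                         // use both values
}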