AFAIK C++ guarantees that in an expression a-expr && b-expr, b-expr is not evaluated if a-expr is false. (The same holds for or-expressions.)
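That guarantee is what makes the usual null-check idiom safe in the first place; a minimal illustration (starts_with_a is just a made-up example):

bool starts_with_a(const char* s)
{
    // Made-up example: s[0] is only read when the left-hand operand
    // is true, so a null pointer is never dereferenced.
    return s != nullptr && s[0] == 'a';
}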
Does that mean there will be a jump in the generated code that will flush the CPU pipeline? How can this be prevented, or is it prevented automatically?
What if the expressions are so simple that naively calculating all the results would be faster than short-circuiting? Can this be detected and exploited by compilers?
Or does the programmer have to separate all the expressions into single variable assignments, and maybe use bitwise instead of logical operators, to prevent such jumps?
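I.e., would a manual rewrite along these lines help (in_range is a made-up example)?

bool in_range(int x)
{
    // Made-up example. Instead of: return x >= 0 && x < 100;
    // evaluate both comparisons unconditionally and combine them
    // with bitwise &, so no branch is needed to skip the second one.
    bool lo = x >= 0;
    bool hi = x < 100;
    return lo & hi;
}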
Compilers can convert && to & and || to | on bool operands when the cost of the branch and the possible branch misprediction is greater than the cost of doing the "wasted" computation on any modern hardware. This is only done when the compiler can tell that evaluating the "wasted" operand has no side effects (following the as-if rule).
Take the following two functions:
int foo(int a, int b)
{
    // Bitwise &: both comparisons are always evaluated.
    if (a == 1 & b == 2)
    {
        return 1;
    }
    return 0;
}

int bar(int a, int b)
{
    // Logical &&: b == 2 is only evaluated when a == 1 is true.
    if (a == 1 && b == 2)
    {
        return 1;
    }
    return 0;
}
Both gcc and clang produce the same code for both functions at -O1, and that code doesn't include a branch, while MSVC seems to still produce a branch even at /O2:
godbolt demo
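Conceptually, the branch-free version that gcc and clang generate behaves as if both functions had been written like this (a sketch of the transformation, not the literal emitted code):

int foo_or_bar(int a, int b)
{
    // Both comparisons are evaluated unconditionally; the results
    // are combined with bitwise & and returned without branching.
    return (a == 1) & (b == 2);
}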
This optimization is only possible if the compiler can verify that the operation has no side effects. If you are calling a library function that isn't inlined, the optimization might not happen; in that case it can make sense to either inline small functions in the header or manually use bitwise operations instead of logical operations.
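For example, with a call into a non-inlined library function the compiler has to assume side effects and keep the short-circuit branch (external_check is a hypothetical function here):

bool external_check(int x); // hypothetical, defined in another translation unit

int baz(int a, int b)
{
    // The compiler cannot prove external_check is side-effect free,
    // so && must short-circuit: the call is skipped when a != 1,
    // which generally requires a branch.
    if (a == 1 && external_check(b))
    {
        return 1;
    }
    return 0;
}

If you know the call is cheap and side-effect free, you can evaluate both operands yourself and combine them with &:

int baz_manual(int a, int b)
{
    bool c1 = a == 1;
    bool c2 = external_check(b); // now always called
    if (c1 & c2)
    {
        return 1;
    }
    return 0;
}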