Signed int arithmetic operations can overflow and underflow, and when that happens it is undefined behavior per the C++ standard (and the C standard), at which point the program may do anything at all.
I've noticed many questions on SO where undefined behavior occurs and the program behaves in an unexpected but deterministic manner, e.g.:
C output changed by adding a printf
In the context of signed int overflow, how does the compiler generate code such that, when an overflow occurs, the program intentionally behaves weirdly? Is there a cmp injected to check whether some flag is set after an iadd, or something like that?
Surely the compiler isn't deliberately generating code that checks for overflow and then executes a different code path?
I'm probably missing something here, so any explanation would be great.
In the question you link, the compiler is optimizing on the assumption that undefined behavior can't happen. There, overflow is reported only if adding two negative numbers produced a positive result, or adding two positives produced a negative. Since neither result can occur without undefined behavior (signed integer overflow) having already happened, the compiler assumes the if test can never pass and eliminates that code path entirely from the compiled output, leaving only the code for the else path.
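To make that concrete, here is a sketch of the kind of check involved. The exact code in the linked question may differ; the function name, variable names, and messages below are made up for illustration, and the transformation described is typical of gcc and clang with optimizations enabled rather than something the standard mandates.

#include <stdio.h>

/* A post-hoc overflow check: it inspects the result of an addition that
   has already overflowed.  Because signed overflow is undefined behavior,
   an optimizing compiler is allowed to assume `sum` can never have the
   "wrong" sign, treat the whole condition as always false, and delete the
   branch. */
int add_and_report(int a, int b)
{
    int sum = a + b;                    /* undefined behavior if it overflows */
    if ((a > 0 && b > 0 && sum < 0) ||  /* two positives gave a negative? */
        (a < 0 && b < 0 && sum > 0)) {  /* two negatives gave a positive? */
        puts("overflow");               /* this path may be removed entirely */
        return 0;
    }
    puts("no overflow");
    return sum;
}

int main(void)
{
    /* Assuming 32-bit int, 2147483647 is INT_MAX, so this addition
       overflows; with optimizations on, many compilers still print
       "no overflow". */
    return add_and_report(2147483647, 1) != 0;
}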
So it's not doing any additional work at all at runtime to achieve this result. It's identifying a way to improve performance, the same as it would if it saw:
if (0) {
    x();
} else {
    y();
}
and eliminated the if (0) path, compiling code that unconditionally called y(). It's not required to produce code that responds correctly in the presence of undefined behavior, so it just pretends the undefined behavior can't happen and optimizes accordingly. No code checks for the undefined behavior and no extra work is done; the compiler simply produces straightforward code that behaves incorrectly because the undefined behavior it assumed couldn't happen actually did.
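After the optimizer applies that assumption, a function like the hypothetical add_and_report above behaves roughly as if it had been written like this (an approximation of what gcc or clang typically emit at -O2, not something any standard guarantees):

#include <stdio.h>

/* Roughly what survives of add_and_report once the "unreachable" branch is
   removed: only the else path remains.  The addition itself still happens
   and in practice wraps on two's-complement hardware, but nothing checks
   for it. */
int add_and_report_optimized(int a, int b)
{
    puts("no overflow");
    return a + b;
}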