I am trying to understand the following code:
#include <stdio.h>
int main()
{
    int a = 2147483647, b = 1;
    printf("%d\n", a + b);
    printf("%d\n", a + b > a);
    return 0;
}
Since a is the maximum int value, a+b should overflow and wrap to a negative number, so I expected a+b>a to be false. But the last line prints 1 (true).
If I change the second printf to printf("%d\n", 2147483647 + 1 > 2147483647); there is a warning and it prints 0 (false), which is what I expected. Is there any compiler setting that leads to this result?
Integer overflow on signed types is undefined behaviour (UB), and the compiler is free to do whatever it wants with it. That is exactly what you observe: the compiler is not even comparing the numbers at run time. Because it may assume that signed overflow never happens in a valid program, it folds (a+b) > a to the constant 1 at compile time, as the generated assembly shows:
.LC0:
.string "%d\n"
main:
push rax
mov esi, -2147483648
mov edi, OFFSET FLAT:.LC0
xor eax, eax
call printf
mov esi, 1
mov edi, OFFSET FLAT:.LC0
xor eax, eax
call printf
xor eax, eax
pop rdx
ret
Note the mov esi, 1 before the second call to printf: the result of the comparison was decided at compile time. UB means that the program behaviour is undefined, i.e. it cannot be predicted from the C point of view.
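If you need behaviour that the language actually guarantees, test for the overflow before performing the addition. A minimal sketch, using the same variables and assuming b is positive:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int a = INT_MAX, b = 1;

    /* For b > 0, a + b overflows exactly when a > INT_MAX - b,
       and INT_MAX - b itself cannot overflow. */
    if (a > INT_MAX - b)
        printf("a + b would overflow\n");
    else
        printf("%d\n", a + b > a);

    return 0;
}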
Alternatively, using GCC you can force the arithmetic to actually happen at run time by making the variables volatile, and see what your hardware does with it (but it is still undefined behaviour in the C language):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* volatile forces real loads and stores, so GCC cannot fold
       the comparison at compile time */
    volatile int a = INT_MAX, b = 1;

    printf("%d\n", a + b);
    printf("%d\n", (a + b) > a);
    return 0;
}
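On typical two's-complement hardware this will most likely print -2147483648 followed by 0, but the C language still makes no promises about it. If you are using GCC (or Clang) anyway, a cleaner option is the __builtin_add_overflow builtin, which performs the addition without invoking UB and reports whether it overflowed; a short sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int a = INT_MAX, b = 1, sum;

    /* Computes a + b as if with infinite precision, stores the
       wrapped result in sum, and returns nonzero on overflow. */
    if (__builtin_add_overflow(a, b, &sum))
        printf("a + b overflows\n");
    else
        printf("%d\n", sum > a);

    return 0;
}

Recent GCC and Clang can also catch the original overflow at run time if you compile with -fsanitize=undefined.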