I'm trying to subtract two unsigned ints and compare the result to a signed int (or a literal). When using unsigned int types the behavior is as expected. When using uint16_t (from stdint.h) types the behavior is not what I would expect. The code was compiled with gcc 4.5.
Given the following code:
unsigned int a;
unsigned int b;
a = 5;
b = 20;
printf("%u\n", (a-b) < 10);
The output is 0, which is what I expected. Both a and b are unsigned, and b is larger than a, so the result is a large unsigned number which is greater than 10. Now if I change a and b to type uint16_t:
uint16_t a;
uint16_t b;
a = 5;
b = 20;
printf("%u\n", (a-b) < 10);
The output is 1. Why is this? Is the result of subtraction between two uint16_t types stored in an int in gcc? If I change the 10 to 10U the output is again 0, which seems to support this (if the subtraction result is stored as an int and the comparison is made against an unsigned int, then the subtraction result is converted to an unsigned int).
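For reference, here is a complete program that reproduces both results side by side (the names ua/ub and sa/sb are just mine so both cases fit in one file; I'm assuming a typical platform where int is 32 bits):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned int ua = 5, ub = 20;
    uint16_t     sa = 5, sb = 20;

    /* unsigned int operands: the subtraction wraps to a huge unsigned value */
    printf("%d\n", (ua - ub) < 10);   /* prints 0 here */

    /* uint16_t operands: behaves differently */
    printf("%d\n", (sa - sb) < 10);   /* prints 1 here */

    return 0;
}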
Calculations are never done in types narrower than int / unsigned int (char, short, unsigned short, and so on; long and unsigned long are not affected): such operands are first promoted to int or unsigned int. On your implementation uint16_t is presumably unsigned short, and because int can represent every value of unsigned short there, it is promoted to int. The subtraction is therefore done in int, its result is -15, and -15 is smaller than 10.
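You can check this directly by printing the intermediate result (a minimal sketch, assuming a platform where int is wider than 16 bits, so the promotion is to int):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t a = 5, b = 20;

    /* a and b are promoted to int before the subtraction, so a - b is the
       int value -15, not a wrapped-around unsigned value */
    printf("%d\n", a - b);            /* prints -15 */
    printf("%zu\n", sizeof(a - b));   /* same as sizeof(int) */

    return 0;
}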
On older implementations where int is 16 bits, int cannot represent all values of unsigned short because both have the same width. Such implementations must promote unsigned short to unsigned int instead, and on them your comparison yields 0.
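If you want the unsigned wraparound behavior regardless of how the promotion goes, one option (a sketch, not the only way) is to convert the difference back to an unsigned type before comparing:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t a = 5, b = 20;

    /* Converting the promoted int result back to uint16_t reduces it
       modulo 65536, giving 65521, which is not less than 10 */
    printf("%d\n", (uint16_t)(a - b) < 10);   /* prints 0 */

    return 0;
}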