I was wondering what happens when an 8-bit value is compared against a 16-bit value.
I'll try to explain the problem with a code example:
bool result;
unsigned char a_8bit = 0xcd;
unsigned short b_16bit = 0xabcd;
result = a_8bit < b_16bit;
Possible results I can imagine: either a_8bit is widened to 16 bits before the comparison (so the result is true, since 0xcd < 0xabcd), or b_16bit is truncated to its lower 8 bits (so the result is false, since 0xcd < 0xcd does not hold).
Does anybody have a clue what the compiler will do with this piece of code? Sure, I could try it out, but do different compilers interpret this code differently?
A prvalue of an integer type other than bool, char16_t, char32_t, or wchar_t whose integer conversion rank (4.13) is less than the rank of int can be converted to a prvalue of type int if int can represent all the values of the source type; otherwise, the source prvalue can be converted to a prvalue of type unsigned int. [§ 4.5/1]
So, per the quoted rule, on any implementation where int can represent all the values of unsigned char and unsigned short (the common case), the compiler promotes both operands to int, not unsigned int, and then does the comparison. Only if int cannot represent all the values of unsigned short (e.g. when short and int have the same width) does unsigned short promote to unsigned int. Either way there is no truncation: the result is true, because 0xcd (205) is less than 0xabcd (43981).
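If it helps, here is a minimal sketch (assuming a typical platform where int is wider than short, so both operands promote to signed int) that checks the promoted type and prints the comparison result:

#include <iostream>
#include <type_traits>

int main() {
    unsigned char a_8bit = 0xcd;      // 205
    unsigned short b_16bit = 0xabcd;  // 43981

    // The usual arithmetic conversions promote both operands before the
    // comparison; with a 32-bit int the common type is (signed) int.
    static_assert(std::is_same<decltype(a_8bit + b_16bit), int>::value,
                  "both operands promote to int on this platform");

    bool result = a_8bit < b_16bit;   // compares 205 < 43981, no truncation
    std::cout << std::boolalpha << result << '\n';  // prints "true"
}

On an unusual platform where short and int have the same width, the static_assert would fire because b_16bit would promote to unsigned int instead, but the comparison result would still be true.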