Code
char a;
a = 0xf1;
printf("%x\n", a);
Output
fffffff1
printf() shows 4 bytes, even though a occupies exactly one byte.
What is the reason for this misbehavior?
How can I correct it?
What is the reason for this misbehavior?
This question looks strangely similar to another I have answered; it even contains a similar value (0xfffffff1). In that answer, I provide some information required to understand what conversion happens when you pass a small value (such as a char) to a variadic function such as printf. There's no point repeating that information here.
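To see that promotion at work, here is a small sketch of my own (not taken from the linked answer; unary + is used only because it applies the same integer promotions that a variadic call performs on a char argument):
#include <stdio.h>

int main(void) {
    char a = 0xf1;              // implementation-defined when char is signed; typically stores -15
    printf("%zu\n", sizeof a);  // 1: the char object itself occupies one byte
    printf("%zu\n", sizeof +a); // typically 4: unary + applies the integer promotions,
                                // the same promotions performed on a char passed to printf
}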
If you inspect CHAR_MIN and CHAR_MAX from <limits.h>, you're likely to find that your char type is signed, and so 0xf1 does not fit as an integer value inside of a char.
Instead, it ends up being converted in an implementation-defined manner, which for the majority of us means it's likely to end up with one of the high-order bits becoming the sign bit. When these values are promoted to int (in order to pass to printf), sign extension occurs to preserve the value: just as a char with a value of -1 must be converted to an int with a value of -1, so too is the underlying representation for your example likely to be transformed from 0xf1 to 0xfffffff1.
printf("CHAR_MIN .. CHAR_MAX: %d .. %d\n", CHAR_MIN, CHAR_MAX);
printf("Does %d fit? %s\n", '\xFF', '\xFF' >= CHAR_MIN && '\xFF' <= CHAR_MAX ? "Yes!"
: "No!");
printf("%d %X\n", (char) -1, (char) -1); // Both of these get converted to int
printf("%d %X\n", -1, -1); // ... and so are equivalent to these
How can I correct it?
Declare a with a type that can fit the value 0xf1, for example int or unsigned char.
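For example, a minimal sketch of the unsigned char option (the printed value assumes the usual 8-bit unsigned char):
#include <stdio.h>

int main(void) {
    unsigned char a = 0xf1; // 0xf1 fits: unsigned char can hold at least 0..255
    printf("%x\n", a);      // promoted to int with its value preserved; prints f1
}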