I have a question about the number of bytes a computer normally uses to do calculations. First of all, I want you to see the source code below.
Source code:
printf("%d\n", sizeof(444444444));
printf("%d\n", 444444444);
printf("%d\n", sizeof(4444444444));
printf("%llu\n", 4444444444);
Output:
4
444444444
8
4444444444
As you can see, the computer never loses the value. If a value is too big to fit in an int, the computer seems to extend its type automatically. I think the reason the value is never lost is that the computer already operates internally on a big type, at least bigger than an 8-bit container.
Would you guys let me know the overall mechanism? Thank you in advance for your help.
This has nothing to do with the "calculation ability of [the] computer".
Your example is all about the size of the integer literal you're dealing with, at the compilation stage. An int on most platforms is four bytes (32 bits). This has a maximum value of 0x7FFF_FFFF, or 2147483647. An unsigned int has a maximum of 0xFFFF_FFFF, or 4294967295.

The compiler will typically default to int for most integer literals (as with the 4-byte example). Your next value is 0x1_08E8_D71C, which is too big for an int, so the literal is given an 8-byte type instead: long long (or long, on platforms where long is already 8 bytes).
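As a minimal sketch of that rule (C11, using _Generic; the TYPE_NAME macro is my own illustration, not part of your code), you can ask the compiler which type it actually assigns each constant:

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),            \
    int: "int",                               \
    long: "long",                             \
    long long: "long long",                   \
    unsigned long long: "unsigned long long", \
    default: "other")

int main(void)
{
    printf("%s\n", TYPE_NAME(444444444));    /* fits in int, so it stays int            */
    printf("%s\n", TYPE_NAME(4444444444));   /* too big for int: long or long long      */
    printf("%s\n", TYPE_NAME(4444444444LL)); /* LL suffix forces long long everywhere   */
    return 0;
}

On an LP64 platform (e.g. 64-bit Linux) this prints int, long, long long; on a 32-bit or LLP64 (Windows) build the middle line is long long instead.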
Such an over-sized constant is probably a warning on most compilers. GCC, in 32-bit mode (-m32), gives the following warning, because long is only 4 bytes there:

warning: integer constant is too large for ‘long’ type
Output:
sizeof(int)=4, sizeof(long)=4, sizeof(long long)=8
In 64-bit mode (-m64), however, GCC is cool with it, because long is 8 bytes:
sizeof(int)=4, sizeof(long)=8, sizeof(long long)=8
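For reference, here is a hedged sketch of the kind of program behind those two output lines (the file name and the exact invocation, e.g. gcc -m32 sizes.c, are assumptions):

#include <stdio.h>

int main(void)
{
    /* %zu is the portable format for size_t, which is what sizeof yields */
    printf("sizeof(int)=%zu, sizeof(long)=%zu, sizeof(long long)=%zu\n",
           sizeof(int), sizeof(long), sizeof(long long));
    return 0;
}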
To remedy this, you should use the LL suffix:
long long val = 4444444444LL;
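As a usage sketch (my own example, building on the line above): with the LL suffix the constant has type long long in both 32- and 64-bit builds, so there is no warning and no truncation:

#include <stdio.h>

int main(void)
{
    long long val = 4444444444LL;             /* LL: long long on every platform       */
    printf("val = %lld, sizeof(val) = %zu\n", /* %lld for long long, %zu for size_t    */
           val, sizeof(val));
    return 0;
}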