There are often times when you know for a fact that your loop will never run more than x times, where x can be represented by a byte or a short — basically a data type smaller than int.
Why do we use int, which takes up 32 bits (in most languages), when something like a byte, which is only 8 bits, would suffice?
I know we have 32-bit and 64-bit processors, so the value can be fetched in a single trip either way, but it still consumes more memory. Or what am I missing here?
UPDATE: Just to clarify. I am aware that speed-wise there is no difference. I am asking about the impact on memory consumption.
In C, an "int" is defined as the most efficient integer type for the current machine.
It usually matches the register size of the CPU, which is what makes it the most efficient.
Using a smaller integer type may require extra bit-shifting or bit-masking at the CPU level, so you would get no gain...