After spending quite a long time on multiple programs I have found that, depending on the platform, I sometimes need to reduce RAM usage drastically, because resources are highly limited on some platforms. I normally store large maps and matrices using these types, so switching from int32 to int16, or from double to float (where they actually differ in size), easily cuts my usage almost in half. So I have simply added type aliases like these:
typedef double Float;
typedef int32_t Int;
typedef uint32_t UInt;
This allows me to quickly adjust all important primitive types in my program. Note that none of the integers in my program actually exceed the range of a 2-byte integer, so using anything from int16 to int64 is safe.
Additionally, it seems a bit more readable to have a plain "Int" there instead of "uint32_t". And in some cases I have observed performance changes from both shrinking and enlarging the primitive types.
My question is: are there any disadvantages that I am simply missing? I couldn't really find anything about this topic on SO yet, so please point me there if I have missed it. The code is primarily for me; others might see it, but in every case it would either be handed over by me personally or come with proper documentation.
EDIT: Sorry for the past mistake, I indeed use typedefs.
typedef int32_t Int;
is NOT BAD, but
typedef double Float;
is NOT GOOD, because it's confusing: a Float is, in fact, a double!?
Why not use the preprocessor to define two sets of types, one with large types and one with small types?
#include <cstdint>
#include <iostream>

#ifdef LARGE
typedef int32_t Int;
typedef double Real;
#else
typedef int16_t Int;
typedef float Real;
#endif

void f() {
    std::cout << sizeof(Int) << std::endl;
    std::cout << sizeof(Real) << std::endl;
}
To use large types: g++ -o test test.cpp -DLARGE
To use small types: g++ -o test test.cpp