cstdint

How are standard integers from <stdint.h> translated during compilation?


In C, it is common (or at least possible) to target different processor architectures with the same source code. It is also common for those architectures to define integer sizes differently. To increase code portability and avoid integer size limitations, it is recommended to use the C standard integer header <stdint.h>. However, I'm confused about how this is actually implemented.

If I were to write a little C program for x86 and then decide to port it over to an 8-bit microcontroller, how does the microcontroller's compiler know how to convert 'uint32_t' to its native integer type?

Is there some mapping requirement when writing C compilers? As in, if your compiler is to be C99-compatible, does it need a mapping feature that replaces every uint32_t with the native type?

Thanks!


Solution

  • Typically <stdint.h> contains the equivalent of

    typedef int int32_t;
    typedef unsigned uint32_t;
    

    with actual type choices appropriate for the current machine.

    In actuality it's often much more complicated than that, with a multiplicity of extra, subsidiary header files and auxiliary preprocessor macros, but the effect is the same: names like uint32_t end up being true type names, as if defined by typedef.
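
    As an illustration of that selection machinery (a sketch, not the contents of any real compiler's header), a stripped-down stdint.h could pick its 32-bit types with preprocessor checks against limits.h; real headers more often rely on compiler-internal macros, but the effect is the same:

    #include <limits.h>

    /* Hypothetical sketch of how a <stdint.h> might choose its 32-bit types. */
    #if UINT_MAX == 0xFFFFFFFFu
    typedef int           int32_t;    /* int is 32 bits on this target */
    typedef unsigned int  uint32_t;
    #elif ULONG_MAX == 0xFFFFFFFFu
    typedef long          int32_t;    /* e.g. an 8-bit MCU where int is 16 bits */
    typedef unsigned long uint32_t;   /* but long is 32 bits */
    #endif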

    You asked "if your compiler is to be C99 compatible, you need to have a mapping feature?", and the answer is basically "yes", but the "mapping feature" can just be the particular types the compiler writer chooses in its distributed copy of stdint.h. (To answer your other question, yes, there are at least as many copies of <stdint.h> out there as there are compilers; there's not one master copy or anything.)

    One side comment. You said, "To increase code portability and avoid integer size limitations, it is recommended to use the C standard integer header". The real recommendation is that you use that header when you have special requirements, such as needing a type of an exact size. If for some reason you need a signed type of, say, exactly 32 bits, then by all means, use int32_t from <stdint.h>. But most of the time, you will find that the "plain" types like int and long are perfectly fine. Please don't let anyone tell you that you must pick an exact size for every variable you declare, and use a type name from stdint.h to declare it with.
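
    For example (illustrative code, with hypothetical names), a plain int is a perfectly good loop counter, while a structure whose field widths are fixed by a file format or wire protocol is a natural place for the exact-width types:

    #include <stdint.h>

    /* Plain int is fine for an ordinary count and loop index. */
    int sum_bytes(const unsigned char *buf, int len)
    {
        int total = 0;
        for (int i = 0; i < len; i++)
            total += buf[i];
        return total;
    }

    /* A hypothetical on-the-wire header whose layout demands exact sizes. */
    struct packet_header {
        uint32_t length;    /* must be exactly 32 bits */
        uint16_t checksum;  /* must be exactly 16 bits */
    };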