This arose from a question earlier today on the subject of bignum libraries and GCC-specific hacks to the C language. Specifically, these two declarations were used:
typedef unsigned int dword_t __attribute__((mode(DI)));
On 32-bit systems and
typedef unsigned int dword_t __attribute__((mode(TI)));
On 64-bit systems.
I assume that, since this is an extension to the C language, there is no way to achieve the same thing in the current (C99) standard.
So my questions are simple: is that assumption correct? And what do these statements do to the underlying memory? I think the result is that I have 2*sizeof(uint32_t) for a dword on 32-bit systems and 2*sizeof(uint64_t) on 64-bit systems; am I correct?
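As a minimal, illustrative sketch (not part of the original question) of how that assumption can be checked at compile time, assuming C11 for _Static_assert and using UINTPTR_MAX only as a rough way of telling 32-bit and 64-bit targets apart:

#include <stdint.h>

#if UINTPTR_MAX == 0xFFFFFFFFu
/* 32-bit target: DI mode is expected to give a 64-bit unsigned integer. */
typedef unsigned int dword_t __attribute__((mode(DI)));
_Static_assert(sizeof(dword_t) == 2 * sizeof(uint32_t), "dword_t should be 64 bits wide");
#else
/* 64-bit target: TI mode is expected to give a 128-bit unsigned integer. */
typedef unsigned int dword_t __attribute__((mode(TI)));
_Static_assert(sizeof(dword_t) == 2 * sizeof(uint64_t), "dword_t should be 128 bits wide");
#endif

int main(void) { return 0; }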
These let you specify a type's size explicitly, without depending on compiler- or machine-specific semantics such as the width of 'long' or 'int'.
They are described fairly well on this page.
I quote from that page:
QI: An integer that is as wide as the smallest addressable unit, usually 8 bits.
HI: An integer, twice as wide as a QI mode integer, usually 16 bits.
SI: An integer, four times as wide as a QI mode integer, usually 32 bits.
DI: An integer, eight times as wide as a QI mode integer, usually 64 bits.
SF: A floating point value, as wide as a SI mode integer, usually 32 bits.
DF: A floating point value, as wide as a DI mode integer, usually 64 bits.
So DI is essentially sizeof(char) * 8, i.e. 8 bytes wide (usually 64 bits).
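For example, on a typical target where a char is 8 bits, a DI-mode typedef comes out at 8 bytes; this is only a sketch, and the name u64_di is made up for illustration:

#include <stdio.h>

/* DI mode: 8 QI (char-sized) units wide. */
typedef unsigned int u64_di __attribute__((mode(DI)));

int main(void) {
    printf("sizeof(u64_di) = %zu bytes\n", sizeof(u64_di)); /* prints 8 on typical targets */
    return 0;
}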
Further explanation, including TI mode, can be found here (possibly better than the first link, but both provided for reference).
So TI is essentially sizeof(char) * 16, i.e. 16 bytes wide (128 bits).
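A similar sketch for TI mode, assuming a 64-bit GCC target that supports 128-bit integers (on such targets the result has the same width as GCC's unsigned __int128); the name u128_ti is made up for illustration:

#include <stdio.h>

/* TI mode: 16 QI (char-sized) units wide. */
typedef unsigned int u128_ti __attribute__((mode(TI)));

int main(void) {
    printf("sizeof(u128_ti) = %zu bytes\n", sizeof(u128_ti));                     /* prints 16 */
    printf("sizeof(unsigned __int128) = %zu bytes\n", sizeof(unsigned __int128)); /* also 16 */
    return 0;
}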
The old links are now dead, so here's the official GCC documentation.