I have a situation where I need to pack 16 bits into a 64-bit number and later read them back as a signed integer in the range [-32768, 32768). The method I have chosen is to compute the value as a signed 16-bit int, immediately cast it to an unsigned 16-bit int, and then widen it to a 64-bit unsigned int before shifting the critical 16 bits into place.
Here is the pseudo-code to create the bit-packed arrangement:
Given int x, y such that x - y >= -32768 and x - y < 32768;
const int MASK_POS = 45;
const unsigned short int u_s = x - y;
unsigned long long int ull_s = u_s;
ull_s <<= MASK_POS;
Here is the pseudo-code to extract the difference in the original numbers:
Given unsigned long long int ull_s with 16 bits encoding a signed integer in the 46th through 61st bits (i.e. bit positions 45 through 60);
const unsigned short int u_s = ((ull_s >> MASK_POS) & 0xffff);
const short int s_s = u_s;
const int difference_x_and_y = s_s;
This seems to me like a reasonable way to package a signed integer and extract it. I'm wary of platform-specific behavior when bit-shifting negative integers, but I think that converting to the unsigned type of the same width before widening, and in reverse extracting the unsigned integer of the desired width before converting to a signed integer of equal size, will be safe.
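Here is a minimal compilable sketch of the round trip I have in mind (variable names and values are just for illustration):
#include <cassert>
#include <cstdint>

int main() {
    const int MASK_POS = 45;
    int x = 100, y = 400;  // difference is -300, within [-32768, 32768)

    // Pack: signed 16-bit -> unsigned 16-bit -> unsigned 64-bit -> shift
    const std::uint16_t packed16 = static_cast<std::uint16_t>(static_cast<std::int16_t>(x - y));
    const std::uint64_t ull_s = static_cast<std::uint64_t>(packed16) << MASK_POS;

    // Extract: shift down, mask off 16 bits, convert back to signed
    const std::uint16_t u_out = static_cast<std::uint16_t>((ull_s >> MASK_POS) & 0xffff);
    const std::int16_t s_out = static_cast<std::int16_t>(u_out);
    assert(static_cast<int>(s_out) == x - y);  // recovers -300
}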
(In case anyone is curious, there will be a LOT going on in the other 48 bits of this 64-bit unsigned integer that stuff ends up in--from the high three bits to the low 31 and the middle 14, everything has been parsed out. I can certainly write some unit tests to ensure that this behavior holds on whatever architecture, but if anyone can see a flaw now that's better to know in advance.)
What you're doing is perfectly fine. Since C++20, signed integers are required to use two's complement representation, and all signed/unsigned conversions are well-defined and equivalent to std::bit_cast. Even before that, any implementation you care about would behave this way.
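For example, both of these compile-time checks pass under C++20:
#include <bit>
#include <cstdint>

// the plain conversion and std::bit_cast agree for same-width types
static_assert(std::uint16_t(std::int16_t{-1}) == 0xffff);
static_assert(std::bit_cast<std::int16_t>(std::uint16_t{0xffff}) == -1);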
However, it would probably be better if you used fixed-width types like std::uint16_t, since your code heavily relies on a specific width.
#include <bit>
#include <cstdint>

struct quad {
    std::int16_t x, y, z, w;
};

inline std::uint64_t pack(quad q) {
    // Two-step conversion std::int16_t -> std::uint16_t -> std::uint64_t
    // to avoid a sign extension when going directly to std::uint64_t.
    // Alternatively, mask each operand with 0xffff.
    return std::uint64_t{std::uint16_t(q.x)} << 0
         | std::uint64_t{std::uint16_t(q.y)} << 16
         | std::uint64_t{std::uint16_t(q.z)} << 32
         | std::uint64_t{std::uint16_t(q.w)} << 48;

    // alternatively, if you don't mind relying on
    // platform endianness ...
    return std::bit_cast<std::uint64_t>(q); // note: only works if quad is unpadded
}

inline quad unpack(std::uint64_t x) {
    // explicit casts back to std::int16_t; braced initialization would
    // otherwise reject the narrowing conversions from std::uint64_t
    return { std::int16_t(x >> 0),  std::int16_t(x >> 16),
             std::int16_t(x >> 32), std::int16_t(x >> 48) };

    // once again, alternatively ...
    return std::bit_cast<quad>(x);
}
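A quick round-trip check (values are arbitrary):
#include <cassert>

int main() {
    quad q{-5, 32767, -32768, 100};
    quad back = unpack(pack(q));
    assert(back.x == q.x && back.y == q.y && back.z == q.z && back.w == q.w);
}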
You can pack integers like this, but it raises the question of why you couldn't just use a struct like quad directly.
No sane compiler is going to add padding to quad, and you could make sure of it with
static_assert(sizeof(quad) == sizeof(std::uint64_t));
The compiler also isn't allowed to reorder the members of quad, so for all intents and purposes, you could just bundle integers up in quad instead of packing them into an integer.
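Roughly, reusing the quad from above (a sketch, not a drop-in API):
// "unpacking" is then just member access, no shifting or masking:
quad q{-300, 7, 0, 1};
int difference = q.x;

// and if a raw 64-bit value is ever needed (hashing, serialization),
// std::bit_cast<std::uint64_t>(q) produces one, endianness caveats aside.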