I want to pack three signed 4-bit integers (4 data bits plus 1 sign bit each) into one 16-bit integer, but I have no idea how to do it or where to start :(
I need this to represent a position in a 3D grid in as little data as possible (since with higher grid sizes, it REALLY adds up). If it helps, I'm using GLM (the OpenGL Mathematics library), so I have access to functions such as glm::sign().
If possible, please give me the code to pack and unpack it.
Thank you!
@JDługosz's answer is great if you're on a platform that supports that struct syntax. However, you mention OpenGL.
Here's a version that will work in a shader.
Basically, test the sign in a platform-agnostic way and set a bit for it (1 for negative, 0 for positive), add in the lower four bits of the value's magnitude, then shift the result over by five bits to make room for the next value.
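For illustration, here's a minimal sketch of that explicit sign-bit scheme for a single value. It isn't part of the original answer, the helper names packSignMag()/unpackSignMag() are made up for this example, and it assumes inputs in the -15 to +15 range.

int packSignMag (int v)
{
    int signBit   = (v < 0) ? 1 : 0;         // 1 for negative, 0 for positive
    int magnitude = (v < 0 ? -v : v) & 0xF;  // lower four bits of |v|
    return (signBit << 4) | magnitude;       // 5-bit field: sign + magnitude
}

int unpackSignMag (int field)
{
    int magnitude = field & 0xF;
    return ((field >> 4) & 1) != 0 ? -magnitude : magnitude;
}

Three such 5-bit fields can then be shifted into bit positions 10, 5, and 0, just as pack3() does below for the biased encoding.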
Since you're dealing with values from -15 to +15, you can simplify things a bit. Rather than checking the sign, just add a constant to the value to force it to be positive, and subtract that constant again when unpacking. (That said, I'd recommend adding an assert on the packing side to make sure the input values actually fall in the -15 to +15 range; a sketch of that check follows the code below.)
TL;DR: Convert your input into a positive integer, grab the lower 5 bits, and mask/shift/add.
int pack3 (int a, int b, int c)
{
    // Bias each value by 16 so it lands in 0..31, keep the low 5 bits,
    // then pack the three 5-bit fields into bits 10-14, 5-9, and 0-4.
    a = (a + 16) & 0x1F;
    b = (b + 16) & 0x1F;
    c = (c + 16) & 0x1F;
    return (a << 10) | (b << 5) | c;
}
void unpack3 (int p, int &a, int &b, int &c)
{
    // The three mask & subtraction ops could be done in one step on p, but
    // they're left separate here for something resembling clarity.
    c = (p & 0x1F) - 16;
    b = ((p >> 5) & 0x1F) - 16;
    a = ((p >> 10) & 0x1F) - 16;
}
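As a usage sketch (again, not part of the original answer; pack3Checked() is a hypothetical wrapper around pack3() above), the recommended assert could look like this on the packing side:

#include <cassert>

int pack3Checked (int a, int b, int c)
{
    // Out-of-range values would be silently wrapped by the 0x1F mask in
    // pack3(), so catch them here while debugging.
    assert(a >= -15 && a <= 15);
    assert(b >= -15 && b <= 15);
    assert(c >= -15 && c <= 15);
    return pack3(a, b, c);
}

A quick round trip such as unpack3(pack3Checked(-7, 0, 12), x, y, z) should then give back x == -7, y == 0, and z == 12.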
For a shader implementation, unpack3() will need to replace the & references with inout (or the equivalent for your shader model and language).