I have this code sample from a three.js example, where a float value is converted to a vec4. I have seen this logic on a few other forums, but with no explanation. I have also seen this link: Packing float into vec4 - how does this code work? It says the vec4 will ultimately be stored in a 32-bit RGBA8 buffer.

Since we are writing the depth value into a color buffer, how will OpenGL know what to do with it?

Also, a vec4 has 4 components of 4 bytes each, making it 16 bytes, i.e. 16 * 8 bits. How does that fit into a 32-bit RGBA8 buffer?
vec4 pack_depth( const in float depth ) {
    // Successive powers of 256 shift later bytes of the depth value
    // into the fractional part of each component.
    const vec4 bit_shift = vec4( 256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0 );
    const vec4 bit_mask = vec4( 0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0 );
    vec4 res = fract( depth * bit_shift );
    // Remove from each component the part already carried by the next finer one.
    res -= res.xxyz * bit_mask;
    return res;
}

void main() {
    vec4 pixel = texture2D( texture, vUV );
    if ( pixel.a < 0.5 ) discard;
    // The packed depth is written out as if it were a colour.
    gl_FragData[0] = pack_depth( gl_FragCoord.z );
}
256 is the number of different values an 8-bit channel can represent. Let us stop working in binary for a second and work in the familiar decimal. If we have 2 channels that can each store only 1 digit (0-9), how do we pack something like 45? Obviously, we pack 5 into one digit and 40/10, i.e. 4, into the other. Then in our unpack function we just do the reverse: 4 * 10 + 5 = 45. 256 plays the same role here that 10 plays in the decimal example.
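If it helps, the same two-digit decimal analogy can be written in the shape of pack_depth. The snippet below is purely illustrative (a hypothetical pack_decimal, not part of three.js): base 10 takes the place of 256 and two channels take the place of four.

// Hypothetical base-10 analogue of pack_depth, for illustration only.
// A value in [0, 1) such as 0.45 is split so each channel holds one decimal digit.
vec2 pack_decimal( const in float value ) {
    vec2 res = fract( value * vec2( 10.0, 1.0 ) ); // 0.45 -> (0.5, 0.45)
    res.y -= res.x * 0.1;                          // remove the 0.05 already carried by x -> (0.5, 0.4)
    return res;                                    // the channels now encode the digits 5 and 4
}

// Unpacking reverses the shift: 0.5 * 0.1 + 0.4 = 0.45.
float unpack_decimal( const in vec2 digits ) {
    return dot( digits, vec2( 0.1, 1.0 ) );
}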
OpenGL doesn't know or care; it simply stores whatever the fragment shader writes into the RGBA8 target. It is your own code, which later samples or reads back that texture, that gives those four bytes meaning by unpacking them.
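For example, when you later sample that depth texture (in a shadow-map comparison, say), your shader reverses the packing itself. Three.js ships a companion unpack function that looks roughly like the following; treat it as a sketch, as the exact shader chunk varies between versions.

float unpack_depth( const in vec4 rgba_depth ) {
    // The weights are the inverses of the shifts used in pack_depth.
    const vec4 bit_shift = vec4( 1.0 / ( 256.0 * 256.0 * 256.0 ), 1.0 / ( 256.0 * 256.0 ), 1.0 / 256.0, 1.0 );
    return dot( rgba_depth, bit_shift );
}

// Usage (depthMap and vUV are placeholder names):
// float depth = unpack_depth( texture2D( depthMap, vUV ) );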
I'm not sure if I understand the second question correctly, but the vec4 is 16 bytes only while it lives inside the shader. When it is written to an RGBA8 render target, each component is clamped to [0, 1] and quantized to 8 bits, so what actually gets stored is 4 * 8 = 32 bits: one float's worth of precision spread across the four bytes of an RGBA8 pixel.