I'm trying to read back a 3D texture I rendered using an FBO. The texture is so large that glGetTexImage fails with a GL_OUT_OF_MEMORY error, because the nvidia driver can't allocate memory for intermediate storage* (needed, I suppose, to avoid changing the destination buffer in case of error).
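For reference, the failing read-back is essentially this (a minimal sketch, assuming an RGBA float texture; texture, w, h, depth and the gl wrapper come from my surrounding code):

// subpixels points to a client-side buffer of w*h*depth*4 GLfloats
gl.glBindTexture(GL_TEXTURE_3D, texture);
// A single call has to transfer the whole 3D texture at once;
// this is where the driver reports GL_OUT_OF_MEMORY
gl.glGetTexImage(GL_TEXTURE_3D, 0, GL_RGBA, GL_FLOAT, subpixels);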
So I then thought of getting the texture layer by layer, using glReadPixels after I render each layer. But glReadPixels doesn't take a layer index as a parameter. The only place where a layer index appears as something that directs I/O to a particular layer is the gl_Layer output in the geometry shader, and that is for the writing stage, not reading.
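For context, this is roughly how gl_Layer directs rendering into a particular slice in the first place (a minimal geometry shader sketch, not my actual code; the layer uniform is an assumed name):

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform int layer; // which slice of the layered color attachment to render into

void main()
{
    for(int i = 0; i < 3; ++i)
    {
        gl_Layer = layer; // routes the primitive to the given layer (writing only)
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}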
When I tried simply doing the glReadPixels calls anyway after rendering each layer, I only got the texels of layer 0. So glReadPixels at least doesn't fail to return something.
But the question is: can I get an arbitrary layer of a 3D texture using glReadPixels? And if not, what should I use instead, given the memory constraints described above? Do I have to sample the layer from the 3D texture in a shader, render the result to a 2D texture, and then read back that 2D texture?
*It's not a guess: I've actually tracked it down to a failing malloc call (with the size of the texture as its argument) from within the nvidia driver's shared library.
Yes, glReadPixels can read other slices from the 3D texture. One just has to use glFramebufferTextureLayer to attach the current slice to the FBO, instead of attaching the full 3D texture as the color attachment. Here's the replacement code for glGetTexImage (a special FBO for this, fboForTextureSaving, should be generated beforehand):
GLint origReadFramebuffer=0, origDrawFramebuffer=0;
// Save the current framebuffer bindings so they can be restored afterwards
gl.glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &origReadFramebuffer);
gl.glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &origDrawFramebuffer);
gl.glBindFramebuffer(GL_FRAMEBUFFER, fboForTextureSaving);
for(int layer=0; layer<depth; ++layer)
{
    // Attach only the current slice of the 3D texture as the color attachment
    gl.glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                 texture, 0, layer);
    checkFramebufferStatus("framebuffer for saving textures");
    // Read the slice back as if it were an ordinary 2D color buffer
    gl.glReadPixels(0,0,w,h,GL_RGBA,GL_FLOAT, subpixels+layer*w*h*4);
}
gl.glBindFramebuffer(GL_READ_FRAMEBUFFER, origReadFramebuffer);
gl.glBindFramebuffer(GL_DRAW_FRAMEBUFFER, origDrawFramebuffer);
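For completeness, checkFramebufferStatus is not a GL call but a tiny helper of mine; a minimal sketch of such a helper, assuming it only wraps glCheckFramebufferStatus and complains about incomplete framebuffers, could look like this:

void checkFramebufferStatus(const char* what)
{
    // Assumes the FBO of interest is currently bound to GL_FRAMEBUFFER
    const GLenum status = gl.glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status != GL_FRAMEBUFFER_COMPLETE)
        std::cerr << "Incomplete " << what << ": status 0x" << std::hex << status << "\n";
}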
Anyway, this is not a long-term solution to the problem. The first cause of GL_OUT_OF_MEMORY errors with large textures is actually not lack of RAM or VRAM. It's subtler: each texture allocated on the GPU gets mapped into the process' address space (at least on Linux/nvidia). So even if your process doesn't malloc even half of the RAM available to it, its address space may already be used up by these large mappings; for a 32-bit process that adds up quickly, e.g. a single 512×512×512 RGBA float texture is already 2 GiB. Add a bit of memory fragmentation on top, and you get either GL_OUT_OF_MEMORY, or a malloc failure, or std::bad_alloc somewhere even earlier than expected.
The proper long-term solution is to embrace the 64-bit reality and compile your app as 64-bit code. This is what I ended up doing, ditching all this layer-by-layer kludge and simplifying the code quite a bit.