According to the OpenGL reference pages, glTexImage2D's signature looks like this:
void glTexImage2D( GLenum target,
                   GLint level,
                   GLint internalformat,
                   GLsizei width,
                   GLsizei height,
                   GLint border,
                   GLenum format,
                   GLenum type,
                   const GLvoid * data);
As far as I know, the last three parameters tell the function how to interpret the image data given by const GLvoid * data.
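For example, in a minimal sketch of an actual upload (the 2x2 pixel array and the texture object are just placeholders for illustration), format and type describe the layout of the client-side memory that data points to, while internalformat describes how the GPU stores the texture:

// Hypothetical example: upload a tiny 2x2 RGBA image stored as unsigned bytes.
// GL_RGBA / GL_UNSIGNED_BYTE describe this client memory; GL_RGBA8 is the storage.
GLubyte pixels[2 * 2 * 4] = {
    255, 0,   0,   255,    0,   255, 0,   255,
    0,   0,   255, 255,    255, 255, 255, 255
};
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);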
Meanwhile, I was studying framebuffers. In the 'Floating point framebuffers' section of https://learnopengl.com/Advanced-Lighting/HDR, the writer creates a framebuffer's color attachment like this:
glBindTexture(GL_TEXTURE_2D, colorBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
My question is: why did he pass GL_FLOAT for the GLenum type parameter? If const GLvoid * data is NULL anyway, is there any need to use GL_FLOAT? I first thought it was something related to GL_RGBA16F, but 16 bits is 2 bytes and floats are 4 bytes, so I guess it's not related at all.
Furthermore, before this tutorial, the writer used to make color attachments like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
In this case, why did he use GL_UNSIGNED_BYTE for the GLenum type parameter?
Whether the last parameter is NULL or not (assuming PBOs aren't employed), the format and type parameters must always be legal values relative to the internalformat parameter.
That being said, it is not strictly necessary that the pixel transfer format and type parameters exactly match the internalformat. But they do have to be compatible with it, in accord with the rules on pixel transfer compatibility. In the specific cases you cite, the format and type values are all compatible with either of the internalformats used. So you could in theory do this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
That being said, you shouldn't, for two reasons:
You shouldn't write code that you wouldn't want executed for real. Having an implementation manually convert from unsigned normalized bytes into 16-bit floats kills performance. Even if it's not actually going to do that conversion because the pointer is NULL, you still would never want it to actually happen. So write the code that makes sense.
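For instance, a minimal sketch of what "the code that makes sense" looks like for the two internalformats from the tutorial (SCR_WIDTH and SCR_HEIGHT are assumed to be defined elsewhere, as in the tutorial):

// Half-float color attachment: describe the (absent) client data as
// floating-point RGBA, matching what GL_RGBA16F will actually store.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);

// 8-bit normalized color attachment: describe it as unsigned bytes.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);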
You have to remember all of the rules of pixel transfer compatibility. If you get them wrong and ask for an illegal conversion, you get an OpenGL error and thus broken code. If you're not used to always thinking about what your format and type parameters are, it's really easy to switch to integer textures and get an immediate error because you didn't use one of the _INTEGER formats. Whereas if you're always thinking about the pixel transfer parameters that represent the internalformat you're actually using, you'll never encounter that error.
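As a concrete illustration (a minimal sketch; GL_RGBA8UI and the 512x512 size are just an assumed example of an integer texture allocation), an integer internalformat requires one of the _INTEGER pixel transfer formats even when no data is uploaded:

// Integer texture: the transfer format must be GL_RGBA_INTEGER.
// Passing plain GL_RGBA here raises GL_INVALID_OPERATION, even though
// the data pointer is NULL and nothing is actually converted.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 512, 512, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, NULL);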