The OpenGL functions `glVertexAttribPointer` and `glVertexAttribFormat` allow the user to specify the format of the data that will be bound to a given attribute variable in the shader program when rendering. The vertex attribute format is information like the data type (`int`, `float`, `byte`, etc.), the number of components in the attribute variable (`vec2`, `vec3`, etc.), whether the data ought to be normalized, and the offset into the vertex array at which the attribute's data starts. These functions specify the format while one is building a vertex array object (VAO), and the specified format becomes part of the state of that VAO. So here is my question:
Why is the format of the data to be associated with the attribute a part of the state of the VAO, and not a part of the state of the attribute? In other words, why is the format of the data associated with the VAO rather than the attribute? Under what circumstances will I have VAOs which use different formats for the same attribute?
For more clarity, here is an example which should illustrate why I'm confused. Imagine in my vertex shader I declare the variable:
```glsl
in vec3 position;
```
Now I'm getting the attribute location in my OpenGL application:
```c
GLint positionAttribute = glGetAttribLocation(myProgram, "position");
```
Now, when I create a VAO, I specify the data format like this:
```c
glVertexAttribFormat(positionAttribute, 3, GL_FLOAT, GL_FALSE, 0);
```
Since the format is associated with the VAO, I have to specify the format every time I create a VAO. The `position` attribute is a `vec3`, so I will always pass `3` and `GL_FLOAT` to `glVertexAttribFormat`. So why is OpenGL designed in this way, such that I potentially have to call `glVertexAttribFormat` every time I create a VAO, specifying a format that will remain constant? It seems to me that I should have specified the format when I called `glGetAttribLocation`, so that I only do it once.
Well, design decisions have to be made in every API. There is rarely only one way to accomplish a given task, and to some degree this is simply the way it was defined in this case.
I don't have any direct insight into the decision making behind this, but I can think of a few considerations that provide valid motivation for it.
**Operation without shaders**: OpenGL has evolved over a long time. Shaders were not part of the API initially, and there are still versions of the API that can operate without shaders (the Compatibility Profile). If there are no shaders, your idea of deriving the type of an attribute from its shader declaration would not work at all.
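To illustrate, here is a minimal fixed-function sketch (Compatibility Profile; `vertexData` and `vertexCount` are assumed to exist, and a GL context is required, so this is not runnable on its own). There is no shader, and therefore no `in vec3 position;` declaration anywhere to derive a type from; the application has to state the size and type explicitly:

```c
/* Sketch only: compatibility-profile fixed-function pipeline,
   no shader program bound at all. */
glEnableClientState(GL_VERTEX_ARRAY);

/* The size (3) and type (GL_FLOAT) must be given here, because
   there is no shader declaration they could be derived from. */
glVertexPointer(3, GL_FLOAT, 0, vertexData);

glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```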
**Support for data types that do not correspond to GLSL types**: While using something like `vec4` in the shader and `GL_FLOAT` in the attribute specification is the most common case, it's not the only option. For example, attributes can be specified as `GL_HALF_FLOAT`, or as packed formats like `GL_INT_2_10_10_10_REV`. These formats are useful if you really want to save memory for your vertex data, or if you already have the vertices in such a format, for example because you're porting Direct3D code, which supports them. These formats do not correspond directly to types in GLSL, so they could not be supported without specifying the attribute format explicitly.
**Dependency between vertex setup state and program state**: This is more conceptual. The way it works now, vertex setup state and program state are orthogonal. Avoiding dependencies between concepts that do not have to be dependent is always a good design goal. If vertex attribute formats depended on the currently bound program, that independence would be broken. For example, when a new program is bound, the interpretation of the vertex attribute data could change, which means the vertex setup state would have to be updated. These kinds of state dependencies are very undesirable because of the resulting complexity and inefficiency they introduce.
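The orthogonality is easiest to see with the separated-format API (`glVertexAttribFormat` and friends, core since GL 4.3; the direct-state-access variants below require GL 4.5). This is a hedged sketch, not runnable on its own: it assumes a live context and that `vbo`, `programA`, `programB`, and `vertexCount` already exist. The point is that the VAO's format state is set once and survives any number of program switches:

```c
/* Sketch only: GL 4.5 direct state access. */
GLuint vao;
glCreateVertexArrays(1, &vao);

/* Format state lives in the VAO and is specified exactly once. */
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3 * sizeof(float));

glBindVertexArray(vao);

/* Because program state and vertex setup state are orthogonal,
   neither glUseProgram call touches or invalidates the VAO. */
glUseProgram(programA);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);

glUseProgram(programB);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```

If the format were instead derived from the bound program, each `glUseProgram` call would potentially force the driver to revalidate or rebuild this vertex setup state.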