c++ · opengl · vertex-buffer · vertex-array-object · vertex-attributes

Using multiple VBOs in a VAO


I am trying to use 2 VBOs inside a VAO and I end up with a crash (deep inside the driver, far beyond my app).

The idea is to make a first VBO (and optionally an IBO) to structure the geometry.

This worked well, until I got the idea to add a second VBO holding the model matrix as a vertex attribute instead of a uniform.

So, when I declare my mesh I do as follows (reduced code):

GLuint vao = 0;
glCreateVertexArrays(1, &vao);
glBindVertexArray(vao);

GLuint vbo = 0;
glCreateBuffers(1, &vbo);
glNamedBufferStorage(vbo, ...); // Fill the right data ...

for ( ... my attributes ) // Position, normal, texcoords ...
{
    glVertexArrayAttribFormat(vao, attribIndex, size, GL_FLOAT, GL_FALSE, relativeOffset);
    glVertexArrayAttribBinding(vao, attribIndex, bindingIndex);
    glEnableVertexArrayAttrib(vao, attribIndex);
} // This loop also gives me the "stride" parameter used below.

glVertexArrayVertexBuffer(vao, 0/*bindingIndex*/, vbo, 0, stride/*size of one vertex in the VBO, in bytes*/);

GLuint ibo = 0;
glCreateBuffers(1, &ibo);
glNamedBufferStorage(ibo, ...); // Fill the right data ...
glVertexArrayElementBuffer(vao, ibo);

Up to there, everything is fine: all I have to do is call glBindVertexArray() and a glDrawXXX() command, and I get a perfect result on screen.
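(For reference, the non-instanced draw at this point is nothing more than the following; a minimal sketch, with indexCount standing in for the actual number of indices in the IBO.)

glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);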

So, I decided to remove the modelMatrix uniform from the shader and use a mat4 attribute instead. I could have chosen a UBO, but I want to extend the idea to instanced rendering by providing several matrices.
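For context, on the GLSL side a mat4 attribute occupies four consecutive attribute locations, one per column, which is why I describe it as four vec4 columns below. A rough sketch of the vertex shader inputs, with purely illustrative names and locations:

const char * const vertexInputsSketch = R"GLSL(
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexCoords;
layout(location = 3) in mat4 inModelMatrix; // Occupies locations 3, 4, 5 and 6.
)GLSL";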

So, I tested with one model matrix in a VBO, and just before rendering I do as follows (the VBO is built the same way as before; I just put 16 floats for an identity matrix):

glBindVertexArray(theObjectVAOBuiltBefore);

const auto bindingIndex = static_cast< GLuint >(1); // The next binding point for the new VBO, I guess...
const auto stride = static_cast< GLsizei >(16 * sizeof(GLfloat)); // The stride is the size in bytes of one matrix.

// I attach the new VBO to the current VAO, which already has a VBO (binding index 0) and an IBO.
glVertexArrayVertexBuffer(theObjectVAOBuiltBefore, bindingIndex, m_vertexBufferObject.identifier(), 0, stride);

// Then I describe my new VBO as a matrix of 4 vec4.
const auto size = static_cast< GLint >(4);

for ( auto columnIndex = 0U; columnIndex < 4U; columnIndex++ )
{
    const auto attribIndex = static_cast< unsigned int >(VertexAttributes::Type::ModelMatrix) + columnIndex;

    glVertexArrayAttribFormat(theObjectVAOBuiltBefore, attribIndex, size, GL_FLOAT, GL_FALSE, 0);
    glVertexArrayAttribBinding(theObjectVAOBuiltBefore, attribIndex, bindingIndex);
    glEnableVertexArrayAttrib(theObjectVAOBuiltBefore, attribIndex);

    glVertexAttribDivisor(attribIndex, 1); // Here I want this attribute per instance.
}

glDrawElementsInstanced(GL_TRIANGLES, count, GL_UNSIGNED_INT, nullptr, 1);

And the result is a beautiful crash. I don't have any clue, because the crash occurs within the driver, where I can't get any debug output.

Is my idea complete garbage? Or is there something I missed?


Solution

  • I found the error: glVertexAttribDivisor() belongs to the old way of specifying attributes (like glVertexAttribPointer(), ...). I switched to glVertexBindingDivisor()/glVertexArrayBindingDivisor() and now there is no crash at all. A sketch of the corrected setup follows below.

    The answers were there: https://www.khronos.org/opengl/wiki/Vertex_Specification#Separate_attribute_format
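
    For completeness, here is roughly what the per-instance matrix setup looks like with the separate attribute format API, reusing the names from the question (theObjectVAOBuiltBefore, m_vertexBufferObject, VertexAttributes::Type::ModelMatrix). This is a sketch, not the exact code; it also gives each mat4 column its own relative offset inside the 64-byte stride:

    const auto bindingIndex = static_cast< GLuint >(1);
    const auto stride = static_cast< GLsizei >(16 * sizeof(GLfloat)); // One mat4 per instance.

    glVertexArrayVertexBuffer(theObjectVAOBuiltBefore, bindingIndex, m_vertexBufferObject.identifier(), 0, stride);

    // Advance this binding point once per instance instead of once per vertex.
    glVertexArrayBindingDivisor(theObjectVAOBuiltBefore, bindingIndex, 1);

    for ( auto columnIndex = 0U; columnIndex < 4U; columnIndex++ )
    {
        const auto attribIndex = static_cast< unsigned int >(VertexAttributes::Type::ModelMatrix) + columnIndex;
        const auto relativeOffset = static_cast< GLuint >(columnIndex * 4 * sizeof(GLfloat)); // Column N of the mat4.

        glVertexArrayAttribFormat(theObjectVAOBuiltBefore, attribIndex, 4, GL_FLOAT, GL_FALSE, relativeOffset);
        glVertexArrayAttribBinding(theObjectVAOBuiltBefore, attribIndex, bindingIndex);
        glEnableVertexArrayAttrib(theObjectVAOBuiltBefore, attribIndex);
    }

    glDrawElementsInstanced(GL_TRIANGLES, count, GL_UNSIGNED_INT, nullptr, 1);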