Tags: opengl, shader, opengl-4, compute-shader, shader-storage-buffer

Array of structures into Compute Shader


I'm writing a simple compute shader in OpenGL to understand how it works, but I can't manage to obtain the result I want.

I want to pass an array of structures (colorStruct) to my compute shader to color an output texture.

I would like the image to be red when "wantedColor" = 0 in my compute shader, green when "wantedColor" = 1, and blue when "wantedColor" = 2.

But I actually get red when "wantedColor" = 1, 2, or 3, and black when "wantedColor" > 2...

Does anyone have an idea? Or maybe I have misunderstood how compute shader inputs work.

Thank you for your help. Here is the relevant part of my code.

My compute shader:

#version 430 compatibility

layout(std430, binding=4) buffer Couleureuh
{
  vec3 Coul[3]; // array of structures
};

layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;

void main() {

  // base pixel colour for image
  vec4 pixel = vec4(0.0, 0.0, 0.0, 1.0);

  // get index in the global work group, i.e. the x, y position
  ivec2 pixel_coords = ivec2(gl_GlobalInvocationID.xy);
  ivec2 dims = imageSize (img_output);


  int colorWanted = 0;
  pixel = vec4(Coul[colorWanted], 1.0);

  // output to a specific pixel in the image
  imageStore (img_output, pixel_coords, pixel);

}

Compute shader and SSBO initialization:

    GLuint structBuffer;
    glGenBuffers(1, &structBuffer);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, structBuffer);
    glBufferData(GL_SHADER_STORAGE_BUFFER, 3*sizeof(colorStruct), NULL, GL_STATIC_DRAW);

    GLbitfield bufMask = GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT; // invalidate makes a big difference when re-writing

    colorStruct *coul;
    coul = (colorStruct *) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, 3*sizeof(colorStruct), bufMask);


    coul[0].r = 1.0f;
    coul[0].g = 0.0f;
    coul[0].b = 0.0f;

    coul[1].r = 0.0f;
    coul[1].g = 1.0f;
    coul[1].b = 0.0f;

    coul[2].r = 0.0f;
    coul[2].g = 0.0f;
    coul[2].b = 1.0f;

    glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);

    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 4, structBuffer);

    m_out_texture.bindImage();

    // Launch compute shader
    m_shader.use();

    glDispatchCompute(m_tex_w, m_tex_h, 1);

    // Prevent sampling before all writes to the image are done
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

Solution

  • vec3s are always 16-byte aligned. As such, when they are in an array, each element takes up the space of a vec4, even with the std430 layout.

    Never use vec3 in interface blocks. Use either an array of floats (and access the 3 components individually) or an array of vec4s (with one unused component).