Tags: opengl, mapping, shader, hdr, luminance

Getting maximum/minimum luminance of a texture in OpenGL


I'm getting started with OpenGL, and I want to implement a tone-mapping algorithm.

I know that my first step is to get the max/min luminance values of the HDR image.

I have the image in a texture attached to an FBO, and I'm not sure how to start.

I think the best approach is to pass texture coordinates to a fragment shader, go through all the pixels, and somehow generate smaller textures.

But I don't know how to do the downsampling manually until I reach a 1x1 texture. Should I have a lot of FBOs? Where do I create each new texture?

I've searched for a lot of information, but almost none of it is clear to me yet.

I would appreciate some help getting oriented and getting started.

EDIT 1. Here are my shaders and how I pass texture coordinates to the vertex shader:

To pass texture coordinates and vertex positions, I draw a quad using a VBO:

void drawQuad(Shaders* shad){
  // Interleaved data: vertex position (3 floats) + texture coords (2 floats)
  std::vector<GLfloat> quadVerts = {
    -1,  1, 0, 0, 0,
    -1, -1, 0, 0, 1,
     1,  1, 0, 1, 0,
     1, -1, 0, 1, 1};

  GLuint quadVbo;
  glGenBuffers(1, &quadVbo);
  glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
  glBufferData(GL_ARRAY_BUFFER, 4 * 5 * sizeof(GLfloat), &quadVerts[0], GL_STATIC_DRAW);

  // Generic vertex attributes used by the shader
  // (no fixed-function glVertexPointer state is needed with a #version 420 shader)
  GLuint vVertex = shad->getLocation("vVertex");
  GLuint vUV = shad->getLocation("vUV");

  glEnableVertexAttribArray(vVertex);
  glVertexAttribPointer(vVertex, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), 0);
  glEnableVertexAttribArray(vUV);
  glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));

  glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // Draw the quad

  glDisableVertexAttribArray(vVertex);
  glDisableVertexAttribArray(vUV);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
  glDeleteBuffers(1, &quadVbo);  // avoid leaking a buffer on every call
}

Vertex shader:

#version 420

in vec2 vUV;
in vec4 vVertex;
smooth out vec2 vTexCoord;

uniform mat4 MVP;
void main()
{
  // Scale UVs to destination (half-size) pixel coordinates, 512x256,
  // so the fragment shader can map each output pixel to a 2x2 source block
  vTexCoord = vec2(vUV.x * 512, vUV.y * 256);
  gl_Position = MVP * vVertex;
}

And the fragment shader:

#version 420

smooth in vec2 vTexCoord;
layout(binding=0) uniform sampler2D texHDR; // Texture image unit binding
layout(location=0) out vec4 color; // Fragment data output location

void main(void)
{
    // Truncate the interpolated coordinate to the destination pixel first,
    // then scale by 2: each output pixel averages a 2x2 block of source texels
    ivec2 base = 2 * ivec2(vTexCoord);
    vec4 col[4];
    for(int i = 0; i <= 1; ++i){
        for(int j = 0; j <= 1; ++j){
            col[2*i + j] = texelFetch(texHDR, base + ivec2(i, j), 0);
        }
    }
    color = (col[0] + col[1] + col[2] + col[3]) / 4.0;
}

In this test code, I have a 1024x512 texture. My idea is to render into a texture attached to GL_COLOR_ATTACHMENT0 of an FBO (layout(location=0)) using these shaders, with the source image bound to texture unit 0 (layout(binding=0)). My goal is to get the image from texHDR into my FBO texture at half its size.
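For reference, this is roughly the kind of setup I mean (a simplified sketch with placeholder names, not my exact code):

// Destination texture at half resolution (512x256) for the first pass
GLuint texHalf;
glGenTextures(1, &texHalf);
glBindTexture(GL_TEXTURE_2D, texHalf);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 512, 256, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// FBO with the destination texture on color attachment 0
// (matches layout(location=0) in the fragment shader)
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texHalf, 0);

// Source HDR image on texture unit 0 (matches layout(binding=0))
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texHDR);

glViewport(0, 0, 512, 256); // render at the destination size
drawQuad(shad);             // the quad pass from above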


Solution

  • For downsampling, all you need to do in the fragment shader is perform multiple texture lookups and combine them into the output fragment. For example, you could do 2x2 lookups, so each pass reduces the resolution by a factor of 2 in both x and y.

    Let's say you want to reduce a 1024x1024 image. You would then render a quad into a 512x512 image. Set it up so your vertex shader simply generates values for x and y between 0 and 511. The fragment shader then calls texelFetch(tex, ivec2(2*x+i, 2*y+j), 0), where i and j each loop from 0 to 1. Cache those four values, and write their min and max to your output texture. Repeat with the result of each pass until you are down to 1x1; the sketches below show one possible way to set this up.
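    A sketch of what such a reduction pass could look like in GLSL (the firstPass uniform, the texSrc name, and the Rec. 709 luminance weights are illustrative choices; gl_FragCoord is used instead of an interpolated coordinate, which avoids any scaling in the vertex shader):

    #version 420

    layout(binding = 0) uniform sampler2D texSrc;
    layout(location = 0) out vec4 color;
    uniform bool firstPass; // true only when reducing the original HDR image

    void main(void)
    {
        ivec2 base = 2 * ivec2(gl_FragCoord.xy); // top-left texel of the 2x2 source block
        float lo = 1e30, hi = -1e30;
        for (int i = 0; i <= 1; ++i) {
            for (int j = 0; j <= 1; ++j) {
                vec4 t = texelFetch(texSrc, base + ivec2(i, j), 0);
                // First pass: reduce RGB to luminance. Later passes: t.r and t.g
                // already hold the (min, max) of a previous 2x2 block.
                float mn = firstPass ? dot(t.rgb, vec3(0.2126, 0.7152, 0.0722)) : t.r;
                float mx = firstPass ? mn : t.g;
                lo = min(lo, mn);
                hi = max(hi, mx);
            }
        }
        color = vec4(lo, hi, 0.0, 1.0); // min in red, max in green
    }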
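    On the CPU side you don't need one FBO per level: a single FBO whose color attachment you swap each pass is enough. A rough sketch of the reduction loop (createFloatTexture and setUniform are hypothetical helpers; handling of odd sizes is simplified):

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    GLuint src = texHDR;        // e.g. the 1024x512 HDR source from the question
    int w = 1024, h = 512;
    bool firstPass = true;

    while (w > 1 || h > 1) {
        w = w > 1 ? w / 2 : 1;  // halve, clamping at 1 for non-square sizes
        h = h > 1 ? h / 2 : 1;

        GLuint dst = createFloatTexture(w, h); // e.g. GL_RG32F: two channels suffice
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, dst, 0);
        glViewport(0, 0, w, h);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, src);
        setUniform(shad, "firstPass", firstPass);
        drawQuad(shad);         // full-screen quad pass as in the question

        if (src != texHDR) glDeleteTextures(1, &src);
        src = dst;
        firstPass = false;
    }

    // src is now 1x1 and holds the (min, max) luminance of the whole image
    float minmax[2];
    glBindTexture(GL_TEXTURE_2D, src);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RG, GL_FLOAT, minmax);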