c++ opengl glsl texture-mapping openscenegraph

Why does OpenSceneGraph map all sampler2D uniforms to the first texture?


I am currently writing a program with OpenSceneGraph (3.4.0) and my own GLSL (version 330) shaders. It takes multiple textures as input, renders into multiple render targets with a pre-render camera, and then reads those render-target textures back in with a second camera for deferred shading. Both cameras therefore have their own shaders (called geometry_pass and lighting_pass here). My problem: in both shaders, every sampler2D uniform samples the same texture (the first one) when reading.

//in geometry_pass.frag
uniform sampler2D uAlbedoMap;
uniform sampler2D uHeightMap;
uniform sampler2D uNormalMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uSpecularMap;
[...]
layout (location = 0) out vec4 albedo;
layout (location = 1) out vec4 height;
layout (location = 2) out vec4 normal;
layout (location = 3) out vec4 position;
layout (location = 4) out vec4 roughness;
layout (location = 5) out vec4 specular;
[...]
albedo = vec4(texture(uAlbedoMap, vTexCoords).rgb, 1.0);
height = vec4(texture(uHeightMap, vTexCoords).rgb, 1.0);
normal = vec4(texture(uNormalMap, vTexCoords).rgb, 1.0);
position = vec4(vPosition_WorldSpace, 1.0);
roughness = vec4(texture(uRoughnessMap, vTexCoords).rgb, 1.0);
specular = vec4(texture(uSpecularMap, vTexCoords).rgb, 1.0);    

Here the output is always the color of uAlbedoMap, except for the position, which is exported correctly.

In the lighting pass, when I read the textures written by the geometry pass, all input textures are again the same:

//in lighting_pass.frag
uniform sampler2D uAlbedoMap;
uniform sampler2D uHeightMap;
uniform sampler2D uNormalMap;
uniform sampler2D uPositionMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uSpecularMap;
[...]
vec3 albedo = texture(uAlbedoMap, vTexCoord).rgb;
vec3 height = texture(uHeightMap, vTexCoord).rgb;
vec3 normal_TangentSpace = texture(uNormalMap, vTexCoord).rgb;
vec3 position_WorldSpace = texture(uPositionMap, vTexCoord).rgb;
vec3 roughness = texture(uRoughnessMap, vTexCoord).rgb;
vec3 specular = texture(uSpecularMap, vTexCoord).rgb;

i.e. even the position map, which was exported correctly, shows the albedo color in the lighting pass.

So the texture output seems to work correctly, but the input obviously does not. I have tried to debug this with CodeXL, and there I can see that all the images for the geometry_pass have (at some point at least) been bound correctly; they are all visible. The output textures of the framebuffer object confirm that the position texture of the geometry_pass is correct. As far as I can tell when stepping through this, the textures are bound correctly (i.e. the uniform locations are correct).

Now the obvious question: How can I get those textures to be correctly used in the shaders?


Construction of the program

The viewer is an osgViewer::Viewer, so there is only one view. The scene graph is as follows: the displayCamera is the camera from the viewer. Since I am working with Qt (5.9.1), I reset the graphics context before I do anything else with the scene graph.

osg::ref_ptr<osg::Camera> camera = viewer.getCamera();

osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->windowDecoration = false;
traits->x = 0;
traits->y = 0;
traits->width = 640;
traits->height = 480;
traits->doubleBuffer = true;

camera->setGraphicsContext(new osgQt::GraphicsWindowQt(traits.get()));
camera->getGraphicsContext()->getState()->setUseModelViewAndProjectionUniforms(true);
camera->getGraphicsContext()->getState()->setUseVertexAttributeAliasing(true);
camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
camera->setClearColor(osg::Vec4(0.2f, 0.2f, 0.6f, 1.0f));
camera->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
camera->setViewMatrix(osg::Matrix::identity());

I then set displayCamera to this viewer camera, create a second camera for render-to-texture (thus called rttCamera) and add it as a child of the displayCamera. I add the scene (a group node containing a geode with a hardcoded geometry) to the rttCamera and finally create a screen quad geometry (below a geode, which in turn is a child of a matrix transform; this matrix transform is what is added as a child to the displayCamera).

Thus the displayCamera has two children, the rttCamera and the matrixTransform->screenQuad. The rttCamera has the child scene->geode. Both cameras have their own render mask: the screen quad uses the displayCamera's render mask, the scene the rttCamera's.
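For reference, a minimal sketch of how that graph is assembled (variable names like sceneGroup and screenQuadGeode are placeholders for my actual nodes, and the mask values are arbitrary):

const osg::Node::NodeMask RTT_MASK     = 0x1;
const osg::Node::NodeMask DISPLAY_MASK = 0x2;

//rttCamera renders the actual scene
osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
rttCamera->setCullMask(RTT_MASK);
rttCamera->addChild(sceneGroup);                 //group node -> geode with the hardcoded geometry
sceneGroup->setNodeMask(RTT_MASK);

//the screen quad is only rendered by the displayCamera
osg::ref_ptr<osg::MatrixTransform> matrixTransform = new osg::MatrixTransform;
matrixTransform->addChild(screenQuadGeode);
matrixTransform->setNodeMask(DISPLAY_MASK);

displayCamera->setCullMask(DISPLAY_MASK);
displayCamera->addChild(rttCamera);
displayCamera->addChild(matrixTransform);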

With the scene node I read in five textures from files (all bitmaps) and then render the rttCamera into a framebuffer object with multiple render targets (for deferred shading).

//model is the geode in the scene group node
osg::ref_ptr<osg::StateSet> ss = model->getOrCreateStateSet();
ss->addUniform(new osg::Uniform(name.toStdString().c_str(), counter));
ss->setTextureAttributeAndModes(counter, pairNameTexture.second, osg::StateAttribute::ON | osg::StateAttribute::PROTECTED);
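The textures themselves are read from file roughly like this (a sketch; loadTexture is a simplified placeholder for my actual loading code):

//requires <osgDB/ReadFile>
osg::ref_ptr<osg::Texture2D> loadTexture(const std::string& fileName)
{
    osg::ref_ptr<osg::Image> image = osgDB::readImageFile(fileName);
    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D(image.get());
    texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
    texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
    texture->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::REPEAT);
    texture->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::REPEAT);
    return texture;
}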

The render targets of the rttCamera are attached as follows:

//camera is the rttCamera
//bufferComponent is constructed by osg::Camera::COLOR_BUFFER0+counter
//(where counter is just an integer that gets incremented)
//texture is an osg::Texture2D that is newly created
camera->attach(bufferComponent, texture);
//the textures get stored to assign them later on
gBufferTextures[name] = texture;
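Pieced together, the loop that creates and attaches the render targets looks roughly like this (the list of names follows the outputs of geometry_pass.frag; this is a sketch, not the verbatim code):

int counter = 0;
for (const QString& name : QStringList{ "albedo", "height", "normal", "position", "roughness", "specular" })
{
    osg::Camera::BufferComponent bufferComponent =
        osg::Camera::BufferComponent(osg::Camera::COLOR_BUFFER0 + counter);
    osg::ref_ptr<osg::Texture2D> texture = Utils::createTextureAttachment(512, 512);

    camera->attach(bufferComponent, texture.get());   //camera is the rttCamera
    gBufferTextures[name] = texture;
    ++counter;
}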

These MRT textures are then bound to the screen quad as input textures:

//ssQuad is the stateset of the screen quad geode
QString uniformName = "u" + name + "Map";
uniformName[1] = uniformName[1].toUpper();

ssQuad->addUniform(new osg::Uniform(uniformName.toStdString().c_str(), counter));
osg::ref_ptr<osg::Texture2D> tex = gBufferTextures[name];
ssQuad->setTextureAttributeAndModes(counter, gBufferTextures[name], osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
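The shaders are attached in the usual way; a minimal sketch for the lighting pass (the geometry pass program is added to the scene's stateset analogously, and the file names are placeholders):

osg::ref_ptr<osg::Program> lightingProgram = new osg::Program;
lightingProgram->addShader(osg::Shader::readShaderFile(osg::Shader::VERTEX, "lighting_pass.vert"));
lightingProgram->addShader(osg::Shader::readShaderFile(osg::Shader::FRAGMENT, "lighting_pass.frag"));
ssQuad->setAttributeAndModes(lightingProgram.get(), osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);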

Other setup includes the render target (an FBO for the rttCamera, the frame buffer for the displayCamera) and lighting (off for both cameras). The rttCamera gets the same graphics context that was created for the displayCamera (i.e. the graphics context object is passed to the rttCamera and set as its own graphics context).
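For completeness, that part of the setup corresponds roughly to this (a sketch; the viewport size matches the 512x512 attachments):

//rttCamera renders into the FBO before the main camera draws the screen quad
rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
rttCamera->setRenderOrder(osg::Camera::PRE_RENDER);
rttCamera->setGraphicsContext(displayCamera->getGraphicsContext());
rttCamera->setViewport(0, 0, 512, 512);
rttCamera->getOrCreateStateSet()->setMode(GL_LIGHTING, osg::StateAttribute::OFF);

//displayCamera draws the screen quad into the window frame buffer
displayCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER);
displayCamera->getOrCreateStateSet()->setMode(GL_LIGHTING, osg::StateAttribute::OFF);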

The texture attachments are created as follows (it makes no difference whether I use width and height or the hard-coded power-of-two size):

osg::ref_ptr<osg::Texture2D> Utils::createTextureAttachment(int width, int height)
{
    osg::Texture2D* texture = new osg::Texture2D();
    //texture->setTextureSize(width, height);
    texture->setTextureSize(512, 512);
    texture->setInternalFormat(GL_RGBA);
    texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
    texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);

    return texture;
}

Let me know if there is more crucial-for-solving code or information missing.


Solution

  • So I finally found the error: my counter was an unsigned int, which apparently is not allowed for sampler uniforms (they have to be set as signed ints). Since OSG hides so many of the errors from me, I didn't see that this was the issue...

    After changing it to a plain int, I now get different textures in my shaders.
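    To make the difference explicit (my understanding: the unsigned overload creates an UNSIGNED_INT uniform, which does not match the GLSL sampler2D, so the sampler keeps its default value 0 and every lookup hits texture unit 0):

    unsigned int counterUnsigned = 1;
    //wrong: creates an UNSIGNED_INT uniform that never reaches the sampler2D
    ssQuad->addUniform(new osg::Uniform("uHeightMap", counterUnsigned));

    int counterInt = 1;
    //correct: creates an INT uniform, which is what a sampler2D expects
    ssQuad->addUniform(new osg::Uniform("uHeightMap", counterInt));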