I've read many articles about what a reversed depth buffer is and how to set one up in OpenGL (just search for "opengl reversed depth" on Google; I read about six of the results). None of them mentioned any vertex shader changes.
Just 4 steps:
1. Call glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) so clip-space depth maps to [0, 1].
2. Clear the depth buffer to 0.0 instead of 1.0.
3. Swap near and far in the projection matrix.
4. Set the depth function to GL_GREATER or GL_GEQUAL.
So I sat down and wrote a simple class that renders a single 3D mesh (Assimp as the importer). Nothing fancy: it loads a mesh and draws it with a Draw() function. I configured these 4 steps for the reversed depth buffer:
//step 1: map clip-space depth to [0, 1] instead of the default [-1, +1]
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
//step 2: clear depth to 0.0 instead of 1.0
float color[] = {0.1f, 0.1f, 0.1f, 1.0f};
glClearBufferfv(GL_COLOR, 0, color);
glClearBufferfi(GL_DEPTH_STENCIL, 0, 0.0f, 1);
//step 3: projection with near and far swapped
float fov = glm::radians(60.0f);
float aspect = 800.0f / 600.0f;
float near = 0.1f;
float far = 1000.0f;
auto proj = glm::perspective(fov, aspect, far, near);
//step 4: pass fragments that are closer, i.e. have a greater depth value
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GREATER);
//meanwhile create and bind a uniform buffer for the view, projection, etc. matrices
mesh.Draw();
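The uniform buffer from the "meanwhile" comment is set up roughly like this (just a sketch; the actual buffer handling and camera values in my code differ, and the DSA calls are only for brevity):

// uniform buffer at binding 0, matching the FrameUniforms block in the vertex shader below
GLuint frameUbo = 0;
glCreateBuffers(1, &frameUbo);
glNamedBufferStorage(frameUbo, sizeof(glm::mat4), nullptr, GL_DYNAMIC_STORAGE_BIT);

// example camera; glm::lookAt comes from <glm/ext/matrix_transform.hpp>
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 5.0f),  // camera position
                             glm::vec3(0.0f),              // target
                             glm::vec3(0.0f, 1.0f, 0.0f)); // up
glm::mat4 viewProjection = proj * view;
glNamedBufferSubData(frameUbo, 0, sizeof(glm::mat4), &viewProjection);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, frameUbo);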
The code looked fine to me, so I ran the app, and no mesh was rendered. I ran it through RenderDoc and everything seemed fine.
Vertex shader:
#version 460 core
layout(location = 0) in vec3 a_position;
layout(location = 1) in vec3 a_normal;
layout(location = 2) in vec3 a_tangent;
layout(location = 3) in vec3 a_binormal;
layout(location = 4) in vec2 a_uv;
layout(location = 0) out vec2 v_uv;
layout(std140, binding = 0) uniform FrameUniforms
{
    mat4 viewProjection;
} u_frame;
layout(std140, binding = 1) uniform InstanceData
{
    mat4 model;
} u_instanceData;
void main()
{
    v_uv = vec2(a_uv.x, 1.0f - a_uv.y);
    vec4 position = u_frame.viewProjection * u_instanceData.model * vec4(a_position, 1.0);
    gl_Position = position;
}
I can't spot what's wrong.
I read these articles again, and there was nothing written about the vertex shader. "So why doesn't it work?" I asked myself.
My thought was: "If OpenGL's default depth range is [-1, +1], maybe I have to change the vertex z position? But I've already done a reversed depth buffer in DX11 and it worked there... Also, modifying the vertex z position doesn't make sense, because I had already tested rendering with a depth range of [0, 1], just without reversing the depth values, and that worked fine." Even though I thought it might be stupid, I tried something like this:
position.z = position.z * -0.5 + 0.5;
and BOOM, now the mesh shows up on the screen...
My question is: is it so obvious that the vertex z value has to be modified that none of these articles mentioned it?
What's the difference between OpenGL and DX11 that makes this necessary in OpenGL?
Or is there something I missed?
It turned out to be a problem with the projection function I was using: glm::perspectiveZO() instead of glm::perspective() fixed it. glm::perspective() builds a matrix for OpenGL's default [-1, +1] clip-space depth range, while glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) expects the projection to output depth in [0, 1], which is exactly what the ZO variant does. So no vertex shader changes are needed after all.
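For reference, here is a minimal sketch of the corrected step 3 (the helper function name is mine; the values are the ones from the snippet above). Defining GLM_FORCE_DEPTH_ZERO_TO_ONE before including GLM would be an equivalent alternative, since it makes glm::perspective() itself map depth to [0, 1]:

#include <glm/glm.hpp>
#include <glm/ext/matrix_clip_space.hpp> // glm::perspectiveZO

// perspectiveZO() outputs depth in [0, 1]; passing the far plane as "near" and
// the near plane as "far" reverses the range, so near maps to 1 and far to 0
glm::mat4 MakeReversedZProjection(float fovRadians, float aspect, float nearPlane, float farPlane)
{
    return glm::perspectiveZO(fovRadians, aspect, farPlane, nearPlane);
}

// usage with the same values as in the question:
auto proj = MakeReversedZProjection(glm::radians(60.0f), 800.0f / 600.0f, 0.1f, 1000.0f);

The clear value of 0.0 (step 2) and GL_GREATER (step 4) stay exactly as they are.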