I have a small OpenGL app for scientific visualization with a deferred rendering pipeline. I've got 2 passes: a geometry pass, where I render positions, normals, albedo, segmentation, etc. into textures; and a lighting pass, where I sample some of that data on a fullscreen quad and render it to the screen, or even save images to disk. So, classic deferred rendering.
Now I need to add wireframe rendering to an additional texture.
I thought about doing it in a geometry shader, but that seemed kind of complicated and performance isn't an issue, so I just added a third pass with glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
where I render the wireframe to a texture and then pass it to the lighting pass along with the other stuff.
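The extra pass I mean looks roughly like this (a sketch only — `wireframeFbo`, `wireframeProgram`, and `drawScene()` are placeholder names, and an existing GL context and FBO setup are assumed):

```cpp
// Third pass: render only triangle edges into a separate color texture.
glBindFramebuffer(GL_FRAMEBUFFER, wireframeFbo);   // FBO with a color texture attached
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(wireframeProgram);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);         // rasterize edges instead of filled triangles
drawScene();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);         // restore the default fill mode
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```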
It works okay, but I was wondering: is it possible to use the depth buffer somehow so that wireframe lines behind the model are not rendered? Sure, I can cull back-facing polygons, but there will still be lines from front-facing polygons hidden behind other front-facing polygons. What I want is to cull as if the polygons were filled, but only render the wireframe (i.e. hidden-line removal).
It would also be okay to render the model and then render the wireframe on top of it, but I can't do that, because I render the model to a texture in the geometry pass with glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
and the wireframe in another pass with glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
so the two passes don't share the default depth buffer.
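(For anyone reading later: one common way around this is to attach the geometry pass's depth texture to the wireframe FBO as well, so the line pass is depth-tested against the filled model. A sketch, assuming the depth attachment is a texture named `gBufferDepthTex` — not the poster's actual code:)

```cpp
// Reuse the fill pass's depth buffer in the wireframe pass.
glBindFramebuffer(GL_FRAMEBUFFER, wireframeFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, gBufferDepthTex, 0);

glClear(GL_COLOR_BUFFER_BIT);        // keep the depth values from the geometry pass
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);              // lines land at the same depth as the filled triangles
glDepthMask(GL_FALSE);               // don't overwrite the shared depth buffer
glEnable(GL_POLYGON_OFFSET_LINE);
glPolygonOffset(-1.0f, -1.0f);       // pull lines slightly toward the camera to avoid z-fighting
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
drawScene();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glDepthMask(GL_TRUE);
glDisable(GL_POLYGON_OFFSET_LINE);
```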
So, you guys have any thoughts?
Okay, I solved the problem. Before implementing solid wireframe rendering as @httpdigest
suggested, I tried simply saving the depth buffers from both passes and, in the lighting pass, drawing the model over the wireframe wherever the model's depth is less than the wireframe's.
It turned out to be pretty much what I needed.
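In the lighting-pass fragment shader that comparison looks something like this (a sketch with assumed uniform names, and the actual shading replaced by a plain albedo lookup):

```glsl
#version 330 core
in vec2 uv;
out vec4 fragColor;

uniform sampler2D albedoTex;          // from the geometry pass
uniform sampler2D modelDepthTex;      // depth saved from the fill pass
uniform sampler2D wireDepthTex;       // depth saved from the line pass

void main() {
    vec3 color = texture(albedoTex, uv).rgb;   // real lighting omitted for brevity
    float modelDepth = texture(modelDepthTex, uv).r;
    float wireDepth  = texture(wireDepthTex,  uv).r;
    const float bias = 1e-4;                   // tolerance against z-fighting
    // Only show the line where it is at least as close as the filled surface.
    if (wireDepth <= modelDepth + bias)
        color = vec3(1.0);                     // white wireframe overlay
    fragColor = vec4(color, 1.0);
}
```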
But I'm almost sure the geometry shader approach would be faster and use less memory. Then again, I'm not developing a 60 fps game, so...