When rendering a sky with a fixed texture in 3D games, people often create six textures in a cube map and then render a cube around the camera. In GLSL, you can sample the texture with a direction vector instead of a 2D texture coordinate, and you can easily get this direction by normalizing the fragment position relative to the camera. However, this process works with any shape that surrounds the camera, because normalizing each position always projects it onto a sphere.

Now I'm wondering: why is it always a cube and not a tetrahedron? Rendering a cube takes 12 triangles, a tetrahedron only 4. And as I already said, any shape that surrounds the camera works. So tetrahedrons take less VRAM and are faster to render, without any downsides? Why not use them?
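For reference, a minimal sketch of the cube version I mean (attribute and uniform names like aPos, uView, uProj are placeholders, and the cube is assumed to be centered on the camera):

#version 330 core

layout(location = 0) in vec3 aPos; // vertex of the cube centered on the camera
out vec3 dir;

uniform mat4 uProj;
uniform mat4 uView; // view matrix with its translation removed

void main()
{
    // The object-space position doubles as the cube map lookup direction;
    // this is why any shape enclosing the camera works.
    dir = aPos;
    vec4 clip = uProj * uView * vec4(aPos, 1.0);
    gl_Position = clip.xyww; // depth becomes 1.0, so draw the sky with GL_LEQUAL
}

The fragment shader then just does texture(uTexEnv, dir), where uTexEnv is the samplerCube.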
You don't need any environment geometry at all. All you need to do is draw a full-screen quad and compute the correct texture coordinates for it. With modern GL, we don't even need to supply vertex data for this; we can use attributeless rendering:
Vertex Shader:
#version 330 core

out vec3 dir;

uniform mat4 invPV;

void main()
{
    // Full-screen quad corners from gl_VertexID:
    // IDs 0..3 map to (-1,1), (-1,-1), (1,1), (1,-1) in NDC.
    vec2 pos = vec2((gl_VertexID & 2) >> 1, 1 - (gl_VertexID & 1)) * 2.0 - 1.0;

    // Un-project the corner onto the near and far clip planes; the difference
    // is the world-space view ray through this corner of the screen.
    vec4 front = invPV * vec4(pos, -1.0, 1.0);
    vec4 back  = invPV * vec4(pos,  1.0, 1.0);
    dir = back.xyz / back.w - front.xyz / front.w;

    // Put the quad on the far plane (z = 1) so the sky never occludes geometry.
    gl_Position = vec4(pos, 1.0, 1.0);
}
where invPV is inverse(Projection*View), so it takes your camera orientation as well as the projection into account. This can in principle be simplified even further, depending on how many constraints you can put on the projection matrix.
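For instance, assuming a standard symmetric perspective projection and a rigid view matrix, the two un-projections can be dropped entirely and the ray built directly from the camera's basis vectors. A sketch with made-up uniform names:

#version 330 core

out vec3 dir;

uniform vec3 uRight, uUp, uForward; // world-space camera basis vectors
uniform float uTanHalfFov;          // tan(vertical field of view / 2)
uniform float uAspect;              // viewport width / height

void main()
{
    vec2 pos = vec2((gl_VertexID & 2) >> 1, 1 - (gl_VertexID & 1)) * 2.0 - 1.0;

    // Ray through this corner of the view frustum; no matrix inverse needed.
    dir = uForward
        + uRight * (pos.x * uTanHalfFov * uAspect)
        + uUp    * (pos.y * uTanHalfFov);

    gl_Position = vec4(pos, 1.0, 1.0);
}

This avoids computing the matrix inverse on the CPU altogether.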
Fragment Shader:
#version 330 core

in vec3 dir;
out vec4 color;

uniform samplerCube uTexEnv;

void main()
{
    // Cube map lookup; dir does not need to be normalized for a samplerCube.
    color = texture(uTexEnv, dir);
}
To use this, you simply need to bind an empty VAO and your texture, upload your invPV matrix, and call glDrawArrays(GL_TRIANGLE_STRIP, 0, 4).
This approach could of course be used for spherical texture mapping instead of cube maps as well.
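In that case only the fragment shader's lookup changes. A sketch for an equirectangular (latitude/longitude) texture, with uTexSphere as a placeholder sampler name:

#version 330 core

in vec3 dir;
out vec4 color;

uniform sampler2D uTexSphere; // equirectangular environment texture

const float PI = 3.14159265359;

void main()
{
    vec3 d = normalize(dir);
    // Map the direction to spherical angles, then into [0,1] texture space.
    vec2 uv = vec2(atan(d.z, d.x) / (2.0 * PI) + 0.5,
                   asin(d.y) / PI + 0.5);
    color = texture(uTexSphere, uv);
}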