I am fairly new to OpenGL and trying to reinvent the wheel (for fun) by creating my own simple game engine, and now I'm trying to make a HUD with text. To do so, I am programmatically generating font texture maps for the fragment shader to use to texture quads. As far as I understand, this is the normal approach?
This mostly works: the text appears on the screen with a transparent background, letting you see whatever is behind it, as expected (fig. 1).
If I then add another text mesh in the same area as the first, but with a higher z-index (closer to the camera), it renders over the top of the previous text and blends with it (correct usage?) in the expected manner (fig. 2).
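For reference, I'm enabling blending and depth testing the standard way, something like:

```cpp
// Standard alpha blending: incoming fragment weighted by its alpha,
// existing framebuffer color weighted by (1 - alpha).
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
```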
However, if I then reverse the z-indices, so that the first string (the pink one) is closer to the camera but the render order is unchanged, I see the effect in fig. 3: although the text background is still transparent, it seems to have overwritten the other text rather than blending with it.
Note: the same effect occurs if I instead reverse the order in which the two meshes are rendered, except that then the white text is the offender.
In other words, why does it only blend correctly when the order of the z-indices matches the render order? My hypothesis is that if I want to render something "behind" a quad with a partially transparent background, I have to render it first; otherwise the depth test will clip away the part of it behind the quad. Is this a fundamental rule of graphics rendering that I should get used to? Do I just need to keep track of the stacking order and ensure I render the items in the right order?
Though you may be able to see through the transparent parts of the texture, OpenGL can't. When it blends the color, it still writes the quad's depth to the depth buffer, so when you later render something behind it, that geometry fails the depth test against the invisible parts and is discarded.
To fix this, you can disable depth testing (or at least depth writes, via glDepthMask) while drawing the transparent quads, or you can sort them from back to front. In practice the two are usually combined: keep the depth test on so opaque geometry still occludes the quads, turn depth writes off, and draw the transparent quads back to front.
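A minimal sketch of that combined approach, assuming plain OpenGL state calls; `TextQuad` and `drawTextQuad` are hypothetical stand-ins for your engine's own quad type and draw call:

```cpp
#include <algorithm>
#include <vector>
#include <glad/glad.h>  // or whichever GL loader your engine uses

// Hypothetical per-quad data; only the z-index matters for the sort.
struct TextQuad { float z; /* VAO, texture, transform, ... */ };

void drawHudText(std::vector<TextQuad>& quads)
{
    // Standard alpha blending for the text quads.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Keep the depth *test* on so opaque geometry drawn earlier still
    // hides the HUD, but stop the quads from *writing* depth, so a
    // closer quad's invisible background can't block a later one.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);

    // Sort back to front: with "higher z-index = closer to the camera",
    // the farthest quad (lowest z) must be drawn first.
    std::sort(quads.begin(), quads.end(),
              [](const TextQuad& a, const TextQuad& b) { return a.z < b.z; });

    for (const TextQuad& q : quads)
        drawTextQuad(q);  // hypothetical draw call for one quad

    glDepthMask(GL_TRUE);  // restore depth writes for the next frame
}
```

With depth writes off, the transparent quads can no longer occlude each other, and the back-to-front sort guarantees each quad blends over everything behind it, regardless of which mesh you happened to submit first.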