Tags: android, opengl-es, opengl-es-2.0, shader

Is discard bad for program performance in OpenGL?


I was reading this article, and the author writes:

Here's how to write high-performance applications on every platform in two easy steps:
[...]
Follow best practices. In the case of Android and OpenGL, this includes things like "batch draw calls", "don't use discard in fragment shaders", and so on.

I had never heard before that discard could hurt performance, and I have been using it to avoid blending when detailed alpha hasn't been necessary.

Could someone please explain why and when using discard might be considered bad practice, and how discard + depth test compares with alpha + blend?
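For reference, here is a minimal sketch of the kind of fragment shader the question describes, cutting out transparent texels with discard instead of enabling blending. The uniform name, varying name, and the 0.5 threshold are illustrative, not taken from the original post:

    // OpenGL ES 2.0 (GLSL ES 1.00) fragment shader: alpha test via discard.
    precision mediump float;

    uniform sampler2D u_texture;   // illustrative name
    varying vec2 v_texCoord;       // interpolated from the vertex shader

    void main() {
        vec4 color = texture2D(u_texture, v_texCoord);
        // Kill mostly-transparent fragments instead of blending them.
        if (color.a < 0.5) {
            discard;
        }
        gl_FragColor = color;
    }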

Edit: After receiving an answer to this question, I did some testing by rendering a background gradient with a textured quad on top of it.


Solution

  • It's hardware-dependent. On PowerVR hardware, and on other GPUs that use tile-based rendering (TBR), using discard means the renderer can no longer assume that every fragment drawn will become a pixel. This assumption is important because it allows a TBR to evaluate all the depths first and then run the fragment shader only for the top-most fragments; a sort of deferred rendering approach, except in hardware.

    Note that you would get the same issue from turning on alpha test.
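    For contrast, a sketch of the alpha + blend alternative the question asks about: the shader contains no discard, so whether a fragment survives no longer depends on shader execution, and transparency is instead handled by fixed-function blending enabled from the host code (names are again illustrative):

        // Blend-based alternative: no discard in the shader.
        // The host code must enable blending before drawing, e.g.:
        //   glEnable(GL_BLEND);
        //   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        precision mediump float;

        uniform sampler2D u_texture;   // illustrative name
        varying vec2 v_texCoord;

        void main() {
            // Output the texel unmodified; its alpha channel drives blending.
            gl_FragColor = texture2D(u_texture, v_texCoord);
        }

    The trade-off is that blending generally requires drawing transparent geometry back to front for correct results, whereas discard + depth test does not.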