opengl graphics rendering shadow-mapping

What causes shadow acne?


I have been reading up on shadow mapping, and found the following tutorial:

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/

It makes sense to me up until the point where the author starts discussing the "shadow acne" artifact. They explain the cause with a diagram alone, without any accompanying text.

I am still having a lot of trouble understanding what actually causes shadow acne and why adding a bias fixes it.

It seems that the resolution of the shadow map has no effect on the acne. What causes it, then? Is it floating-point precision, or something else?


Solution

  • Yes, it is a precision issue. Not really a float problem, just finite precision.

    In theory the shadow map stores "the distance to the closest object from the light". But in practice it stores "distance ± eps from the light".

    Then, when testing, you have your fragment's distance to the same light; but again, in practice it is d2 ± eps2. The catch is that the error is not the same on both sides: the depth is interpolated differently when rendering the shadow map than when shading, so eps and eps2 differ. If you compare d ± eps < d2 ± eps2 with d2 == d, you can get the wrong result, because eps != eps2. But if you compare d ± eps < d2 + max(eps) + max(eps2) ± eps2, you will be fine: the added bias absorbs the worst-case error on both sides.

    In this example d2 == d: the surface is being tested against its own depth, which is called self-shadowing. It can easily be fixed with the bias above, or, in ray tracing, by simply not testing a surface against itself.

    It gets much trickier with different objects, and when eps and eps2 are vastly different. One way to deal with this is to control eps (http://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf). Alternatively, one can simply take many more samples.

    To answer the question directly: shadow mapping assumes it is comparing ideal distances, but both distances are quantized. Quantized values are usually fine on their own; the problem is that here we compare values quantized in two different spaces, so their errors do not cancel.
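The comparison above can be sketched numerically. This is a minimal Python sketch, not real renderer code: the 16-bit depth buffer, the depth value, and the interpolation error are hand-picked assumptions purely to show eps != eps2 and the biased test.

```python
def quantize(depth, bits=16):
    """Simulate finite shadow-map precision: snap depth to the nearest
    representable step of a `bits`-bit depth buffer."""
    step = 1.0 / (2 ** bits - 1)
    return round(depth / step) * step

d = 0.31                          # true distance from light to the surface
stored = quantize(d)              # shadow map holds d ± eps (here it rounds up slightly)

# When shading, the same point is reprojected; interpolation follows a
# different path, so its error eps2 differs from eps.
fragment = d + 4e-6               # d ± eps2: same point, different error

acne = fragment > stored          # naive test: lit surface flagged "in shadow"

bias = 2.0 / (2 ** 16 - 1)        # chosen > max(eps) + max(eps2) for this setup
fixed = fragment > stored + bias  # biased test: surface is correctly lit

print(acne, fixed)                # True False
```

The bias trades the acne for a slight "peter-panning" of shadows away from their caster, which is why it is kept as small as the worst-case error allows.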
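One common way to control eps is a slope-scaled depth bias, the idea behind OpenGL's glPolygonOffset: a surface nearly parallel to the light rays is covered by long slivers of shadow-map texels, so its depth error grows with the slope. A hedged sketch of that idea; the constants `base`, `slope_scale` and `max_bias` are illustrative tuning values, not canonical ones:

```python
import math

def shadow_bias(n_dot_l, base=0.0005, slope_scale=0.005, max_bias=0.01):
    """Slope-scaled depth bias.

    n_dot_l is the cosine of the angle between the surface normal and the
    light direction. Small values mean a grazing angle, where the per-texel
    depth error grows with the slope tan(theta), so the bias grows with it.
    """
    n_dot_l = min(max(n_dot_l, 1e-4), 1.0)           # clamp to a safe range
    slope = math.sqrt(1.0 - n_dot_l ** 2) / n_dot_l  # tan(theta)
    return min(base + slope_scale * slope, max_bias)

print(shadow_bias(1.0))  # surface facing the light: minimal bias
print(shadow_bias(0.1))  # grazing surface: larger bias, clamped at max_bias
```

Clamping at `max_bias` matters: without it, near-grazing surfaces would be pushed so far that their shadows visibly detach.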