Tags: opengl, shader, hlsl, fragment-shader, cg

Vertex to Pixel Shader TEXCOORD interpolation precision issues


I think I'm experiencing precision issues in the pixel shader when reading the texcoords that have been interpolated from the vertex shader.

My scene consists of some very large triangles (edges up to 5000 units long, with texcoords ranging from 0 to 5000, so the texture tiles about 5000 times), and I have a camera looking very closely at one of those triangles (the camera may be so close that its viewport covers only a couple of meters of the large triangle). When I pan the camera along the plane of the triangle, the texture lags and jumps. My suspicion is that the interpolated texcoords lack precision.

Is there a way to increase the precision of the texcoords interpolation?

My first thought was to store texcoord u in double precision across the xy components, and texcoord v across the zw components. But I guess that will not work, since the shader interpolation assumes there are 4 separate single-precision components, not 2 double-precision components?

If there is no solution on the shader side, I guess I'll just have to tessellate the triangles into finer pieces? I'd hate to do that just for this issue though. Any ideas?

EDIT: The problem is also visible when printing texcoords as colors on the screen, without any actual texture sampling at all.


Solution

  • You're right, it looks like a precision problem. If your card supports it, you can indeed use double-precision floats for interpolation: just declare the variables as dvec2 and it should work.

    The shader interpolation does not assume there are 4 packed single-precision components. On recent cards, each scalar (i.e. each component of a vec) is interpolated separately, as a float (or a double). Older cards, which could only interpolate vec4s, also worked with full floats (but those probably don't support doubles).
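    A sketch of what those declarations could look like in GLSL, assuming the driver exposes double precision via GLSL 4.00 / GL_ARB_gpu_shader_fp64. Be aware that the GLSL spec restricts interpolation qualifiers on double-precision fragment inputs, so verify that your particular driver actually interpolates them smoothly:

```glsl
// Vertex shader -- requires GLSL 4.00 / GL_ARB_gpu_shader_fp64.
#version 400 core
in vec3  in_position;
in dvec2 in_uv;
out dvec2 uv;            // double-precision varying

uniform mat4 mvp;

void main() {
    uv = in_uv;
    gl_Position = mvp * vec4(in_position, 1.0);
}

// Fragment shader
#version 400 core
in dvec2 uv;
out vec4 color;

uniform sampler2D tex;

void main() {
    // texture() takes a vec2, so convert back after interpolation.
    color = texture(tex, vec2(uv));
}
```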