Tags: floating-point, directx-11, hlsl, pixel-shader

How do pixel values behave in DirectX 11 HLSL shaders?


When sampling texel values in a pixel shader, the sampler always returns a float4. The texture itself, however, may use any of a wide range of formats defined by DXGI_FORMAT. It seems fairly straightforward that any of the _UNORM formats will ensure that every component of that float4 lies between 0 and 1. Back in the DirectX 9 days, it was generally assumed that, regardless of the pixel format, all sampled values would be between 0 and 1.
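For reference, here is a minimal HLSL sketch of that situation; the resource names and register slots are just placeholders:

    // Illustrative declarations; register slots and names are placeholders.
    Texture2D    gSourceTex  : register(t0);
    SamplerState gLinearSamp : register(s0);

    float4 PS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
    {
        // Sample() always hands the shader a float4; for a *_UNORM format,
        // every component of this value lies in [0, 1].
        float4 texel = gSourceTex.Sample(gLinearSamp, uv);
        return texel;
    }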

This does not seem to be the case with DirectX 11. A texture using the DXGI_FORMAT_R32_FLOAT format, for example, appears to be able to store any valid 32-bit float, which makes sense in general because you may not be using that texture (or buffer) for rendering at all.
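On the HLSL side, a sketch like this (names are placeholders) matches that kind of format; as far as I can tell, the sampled value comes back as whatever was stored, with no range normalization applied:

    // Illustrative only: an R32_FLOAT resource bound with a matching template type.
    Texture2D<float> gFloatTex  : register(t0);
    SamplerState     gPointSamp : register(s0);

    float4 PSReadRaw(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
    {
        // *_FLOAT formats are not normalized on read, so this can be any
        // finite 32-bit float, not just something in [0, 1].
        float raw = gFloatTex.Sample(gPointSamp, uv);
        return float4(raw, 0.0f, 0.0f, 1.0f);
    }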

So how does the rendering pipeline decide what pixel value to output when a format like R32_FLOAT has such an arbitrary range, if it is not using the 0 to 1 range? It doesn't seem to be -FLT_MAX to +FLT_MAX, because I can render a texture of this type containing values between 0.0 and 65.0 and I do see red in the final result. But when I debug the pixel shader and look at that source texture, only values very close to 65.0 show as red, whereas the final rendered result on the back buffer has lots of red in it.

Here is a sample source texture, as shown in the VS graphics debugger:

Source R32_Float Texture

If I render it to the screen just using a basic sampler output for the pixel shader, I get this: Resulting Image

The back-buffer format was R10G10B10A2_UNORM.
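For context, this is roughly the kind of "basic sampler output" shader I mean, alongside a variant that divides by an assumed data maximum (65.0 here, purely an illustration) before writing to the render target:

    // Illustrative names only; this mirrors "just using a basic sampler output".
    Texture2D<float> gSourceTex : register(t0);
    SamplerState     gSamp      : register(s0);

    static const float kAssumedMax = 65.0f;  // assumption: known maximum of the source data

    // Pass-through: writes the raw sampled value straight to the render target.
    float4 PSPassThrough(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
    {
        float raw = gSourceTex.Sample(gSamp, uv);
        return float4(raw, 0.0f, 0.0f, 1.0f);
    }

    // Remapped: scales the assumed 0..65 range into [0, 1] before output, so the
    // back buffer shows relative intensity rather than mostly-saturated red.
    float4 PSRemapped(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
    {
        float raw = gSourceTex.Sample(gSamp, uv);
        return float4(saturate(raw / kAssumedMax), 0.0f, 0.0f, 1.0f);
    }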

So how does it decide what "maximum intensity" is for a floating point texture? Similarly, if you used one of the _SINT formats, how does it deal with that?
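For the _SINT case, my understanding is that integer formats cannot go through a filtering sampler at all, so on the HLSL side they have to be declared with an integer template type and read with Load, leaving any mapping to a displayable 0-1 value up to the shader. A rough sketch (the names and the assumed 16-bit range are purely illustrative):

    // Illustrative only: a *_SINT resource bound as Texture2D<int>.
    Texture2D<int> gIntTex : register(t0);

    float4 PSFromSint(float4 pos : SV_POSITION) : SV_TARGET
    {
        // Integer formats can't be filtered, so fetch a single texel directly.
        int raw = gIntTex.Load(int3(int2(pos.xy), 0));

        // Assumed remap for illustration: shift a signed 16-bit range into [0, 1].
        float shade = saturate((raw + 32768.0f) / 65535.0f);
        return float4(shade, shade, shade, 1.0f);
    }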


Solution