I am writing a Phong lighting shader and I have a hard time deciding whether the value I pass to gl_FragColor
should be normalized or not.
If I use normalized values, the lighting looks a bit off. For example, an object far away from a light source (unlit) would have its color determined by the sum of the emissive component, the ambient component, and the global ambient light. Say that sum comes out to (0.3, 0.3, 0.3); normalizing it gives roughly (0.577, 0.577, 0.577), which is considerably brighter than what I'm expecting.
However, if I use non-normalized values, the specular areas on close objects get extremely bright, and I have to make sure I generally use low values for my material constants.
As a note, I am normalizing only the RGB components; the alpha component is always 1.
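Concretely, the choice boils down to something like this (the uniform and varying names are just placeholders for however the Phong terms arrive in the fragment shader):

    uniform vec3 u_emissive;   // material emissive term
    uniform vec3 u_ambient;    // ambient + global ambient, pre-summed
    varying vec3 v_diffuse;    // interpolated diffuse contribution
    varying vec3 v_specular;   // interpolated specular contribution

    void main() {
        vec3 color = u_emissive + u_ambient + v_diffuse + v_specular;

        // Option 1: normalize the RGB before writing it out.
        gl_FragColor = vec4(normalize(color), 1.0);

        // Option 2: write the raw sum and let the framebuffer clamp it.
        // gl_FragColor = vec4(color, 1.0);
    }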
I am a bit stumped, and I could not find anything related to this. Either that or my searches were completely off.
No. Normalizing the color creates an interesting effect, but I think you don't really want it most, if not all, of the time.
Normalization of the color output causes loss of information, even though it may seem to provide greater detail to a scene in some cases. If all your fragments have their color normalized, it means that all RGB vectors have a norm equal to 1. This means that there are colors that simply cannot exist in your output: white (norm = sqrt(3)), bright colors such as yellow (norm = sqrt(2)), dark colors such as dark red (norm(0.5, 0.0, 0.0) = 0.5), and so on. Another issue you will face is normalizing the zero vector (i.e. black), where the division by a zero length is undefined.
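To make the loss concrete, here is a minimal sketch (the colors are hard-coded purely for illustration):

    void main() {
        vec3 white = vec3(1.0, 1.0, 1.0);
        vec3 grey  = vec3(0.3, 0.3, 0.3);

        // Both collapse to roughly (0.577, 0.577, 0.577):
        // the brightness difference is gone after normalization.
        vec3 a = normalize(white);
        vec3 b = normalize(grey);

        // normalize(vec3(0.0)) divides by a zero length; the result
        // is undefined, so pure black cannot even be handled safely.

        gl_FragColor = vec4(a, 1.0);
    }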
Another way to see why color normalization is wrong is to think about the simpler case of rendering a grayscale image. As there is only one color component, normalization makes no sense at all: it would turn every non-black color into 1.0.
The problem with using the values without normalization arises from the fact that your output image has its color values clamped to a fixed interval: [0, 255] or [0.0, 1.0]. As the specular parts of your object reflect more light than the parts that only reflect diffuse light, the computed color value may well exceed (1.0, 1.0, 1.0) and get clamped to white over most of the specular area, so these areas end up looking too bright.
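For example (the numbers here are made up), a fragment near a highlight might behave like this:

    void main() {
        // Suppose the Phong sum overshoots near a specular highlight:
        vec3 color = vec3(1.8, 1.4, 1.2);

        // A fixed-point framebuffer clamps each channel to [0.0, 1.0],
        // so this fragment is stored as (1.0, 1.0, 1.0): flat white.
        gl_FragColor = vec4(color, 1.0);
    }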
A simple solution would be to lower the material constants or the light intensity. You could go one step further and choose the material constants and light intensities such that the computed color value can never exceed (1.0, 1.0, 1.0). The same result could be achieved by dividing the computed color value by a constant, provided consistent values are used for all the materials and all the lights in the scene, but it is a rather blunt fix, as the scene would probably end up too dark.
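A minimal sketch of the division approach, assuming the Phong terms arrive through the names below and that 3.0 is a safe upper bound for your particular scene:

    uniform vec3 u_emissive;
    uniform vec3 u_ambient;
    varying vec3 v_diffuse;
    varying vec3 v_specular;

    // Assumed per-scene upper bound on the summed intensity.
    const float MAX_INTENSITY = 3.0;

    void main() {
        vec3 color = u_emissive + u_ambient + v_diffuse + v_specular;

        // A uniform rescale preserves hue, but everything below the
        // maximum gets proportionally darker.
        gl_FragColor = vec4(color / MAX_INTENSITY, 1.0);
    }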
The more complex, but better looking solution involves HDR rendering and post-processing effects such as bloom to obtain more photo-realistic images. This basically means rendering the scene into a floating-point buffer, which can hold a greater range than the [0, 255] RGB buffer, then simulating the way a camera or the human eye adapts to a certain light intensity (tone mapping), along with the image artefacts caused by this mechanism (i.e. bloom).
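A minimal sketch of the tone-mapping pass, assuming the scene has already been rendered into a float texture; the uniform names are placeholders and the Reinhard operator is just one common choice:

    uniform sampler2D u_hdrBuffer; // float texture holding the HDR scene
    uniform float u_exposure;      // simulated camera/eye adaptation
    varying vec2 v_texCoord;

    void main() {
        vec3 hdr = texture2D(u_hdrBuffer, v_texCoord).rgb * u_exposure;

        // Reinhard operator: maps [0, infinity) smoothly into [0, 1),
        // so bright areas compress instead of clamping to flat white.
        vec3 ldr = hdr / (hdr + vec3(1.0));

        gl_FragColor = vec4(ldr, 1.0);
    }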