Tags: opengl, colors, gamma, srgb

How do I input color values in sRGB space?


I am learning about the sRGB color space in OpenGL.

Colors from textures are one thing, but what about direct color values, e.g. ones picked in a graphics editor?

A color component of 0.5 comes out on screen as the lighter 8-bit value 187 (rather than 128).

What is the best practice here: do I have to decode such colors before use by applying a power function?


Solution

  • When textures are specified with the GL_SRGB8 or GL_SRGB8_ALPHA8 internal format, OpenGL handles the sRGB-to-linear conversion for you as you sample the texture. For all other color values (uniforms, vertex attributes, clear colors, etc.) you'll need to do the conversion manually.

    The conversion from sRGB to linear is simple. You can look up the function on Wikipedia and translate it to whatever language you'd like. For example, in C:

    /* Decode one sRGB component in [0,1] to linear [0,1]. */
    float srgb_to_linear(float x) {
        return x <= 0.04045f ? x / 12.92f : powf((x + 0.055f)/1.055f, 2.4f);
    }
    

    This converts one sRGB component in the [0,1] range to a linear value in the [0,1] range. To convert from 8-bit sRGB, as most graphics editors present colors, divide each component by 255.0 and then apply the function above to each of them:

    /* uint8_t requires <stdint.h>. */
    void srgb8_to_linear(const uint8_t in[3], float out[3]) {
        for (int i = 0; i < 3; ++i)
            out[i] = srgb_to_linear(in[i] / 255.0f);
    }
    

    If performance matters, it can be beneficial to precompute a lookup table of all 256 possible values.

    Note that it is important to convert to linear sooner rather than later. E.g. if you pass those colors as interpolated vertex attributes, store the linear values in the VBO; otherwise you'll get incorrect interpolation, since the hardware interpolates the stored values linearly, which is only correct in linear space.