python, opengl, pyopengl

16-bit texture through PyOpenGL


I am trying to create a 16-bit texture in PyOpenGL with the statements below.

img_data = cv2.imread("Texture.png",cv2.IMREAD_UNCHANGED)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, width, height, 0, GL_RGB, GL_UNSIGNED_SHORT, img_data)

But it throws an invalid operation error with the traceback below:

Traceback (most recent call last):
  File "C:\Users\gurubhat\PycharmProjects\OpenGL\TextureSample.py", line 106, in <module>
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, width, height, 0, GL_RGB, GL_UNSIGNED_SHORT, img_data)
  File "src\latebind.pyx", line 39, in OpenGL_accelerate.latebind.LateBind.__call__
  File "src\wrapper.pyx", line 318, in OpenGL_accelerate.wrapper.Wrapper.__call__
  File "src\wrapper.pyx", line 311, in OpenGL_accelerate.wrapper.Wrapper.__call__
  File "C:\Users\gurubhat\PycharmProjects\VENV\venv\Lib\site-packages\OpenGL\platform\baseplatform.py", line 415, in __call__
    return self( *args, **named )
           ^^^^^^^^^^^^^^^^^^^^^^
  File "src\errorchecker.pyx", line 58, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError
OpenGL.error.GLError: GLError(
    err = 1282,
    description = b'invalid operation',
    baseOperation = glTexImage2D,

Are 16-bit textures not supported on Windows (or maybe by my graphics card)?

Loading 8-bit images as below works perfectly fine (I guess GL_RGB stores textures internally in float32 format):

img_data = cv2.imread("8bitimage.png",cv2.IMREAD_UNCHANGED)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, img_data)
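
For reference, a quick check of what cv2 actually returns for the 16-bit file (just a diagnostic sketch, the file name is my example from above):

img_data = cv2.imread("Texture.png", cv2.IMREAD_UNCHANGED)
print(img_data.dtype, img_data.shape)  # a 16-bit RGB PNG should load as uint16 with shape (height, width, 3)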

Solution

  • You get an invalid operation error because the internal format is integral (GL_RGB16UI), but the specified pixel transfer format is not (GL_RGB). See glTexImage2D and the OpenGL 4.6 API Specification:

    An INVALID_OPERATION error is generated if the internal format is integer and format is not one of the integer formats listed in table 8.8, or if the internal format is not integer and format is an integer format.

    Change the internal format to a normalized format, which is sampled as floating point (GL_RGB16, or unsized GL_RGB):

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16, width, height, 0, GL_RGB, GL_UNSIGNED_SHORT, img_data)
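
    A slightly fuller sketch of this variant (a sketch only, assuming a current OpenGL context and a 3-channel 16-bit PNG; note that OpenCV returns the channels in BGR order, so convert with cv2.cvtColor or pass GL_BGR if the channel order matters):

    import cv2
    import numpy as np
    from OpenGL.GL import *

    img_data = cv2.imread("Texture.png", cv2.IMREAD_UNCHANGED)    # a 16-bit PNG loads as uint16
    img_data = np.ascontiguousarray(img_data)                     # PyOpenGL expects contiguous memory
    height, width = img_data.shape[:2]

    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1)                          # rows are not necessarily 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16, width, height, 0,
                 GL_RGB, GL_UNSIGNED_SHORT, img_data)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)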
    

    or change the format to an integer format (GL_RGB16UI with GL_RGB_INTEGER):

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, width, height, 0, GL_RGB_INTEGER, GL_UNSIGNED_SHORT, img_data)
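
    A corresponding sketch for the integer variant (same assumptions as above); keep in mind that integer textures cannot be filtered linearly, so nearest filtering has to be used:

    glBindTexture(GL_TEXTURE_2D, tex)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, width, height, 0,
                 GL_RGB_INTEGER, GL_UNSIGNED_SHORT, img_data)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)  # linear filtering is not supported for integer formats
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)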
    

    In the shader, a sampler of type sampler2D must be used for the floating point texture and a sampler of type usampler2D must be used for the unsigned integral texture. Also see Sampler (GLSL).
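
    For illustration, the two sampler declarations could look like this in PyOpenGL shader source strings (the uniform and varying names here are made up):

    frag_normalized = """
    #version 330 core
    uniform sampler2D u_texture;      // pair with GL_RGB16 / GL_RGB
    in vec2 v_uv;
    out vec4 frag_color;
    void main() {
        frag_color = texture(u_texture, v_uv);        // normalized values in [0.0, 1.0]
    }
    """

    frag_integer = """
    #version 330 core
    uniform usampler2D u_texture;     // pair with GL_RGB16UI / GL_RGB_INTEGER
    in vec2 v_uv;
    out vec4 frag_color;
    void main() {
        uvec3 raw = texture(u_texture, v_uv).rgb;     // raw integer values, 0..65535
        frag_color = vec4(vec3(raw) / 65535.0, 1.0);
    }
    """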


    I guess GL_RGB stores textures internally in float32 format

    No, it does not. The texture usually has 8 bits per channel internally as long as no sized internal format (e.g. GL_RGBA16) is specified. See glTexImage2D.
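
    If in doubt about what the driver actually allocated, the effective bit depth can be queried after the upload (a sketch; the texture must be bound):

    red_bits = glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE)
    print("red channel bits:", red_bits)   # typically 8 for unsized GL_RGB, 16 for GL_RGB16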