So what I want to do is:
Load a file encrypted with any algorithm (in my case AES-256) into GPU memory (with CUDA).
Decrypt the file with all the GPU parallel awesomeness we have right now and let it stay in GPU memory.
Now tell OpenGL (4.3) that there is a texture in memory that needs to be read and decompressed from DDS DXT5.
Step 3 is where I have my doubts: to load a compressed DDS DXT5 texture in OpenGL, one has to call glCompressedTexImage2D (or one of its 3D/ARB variants) with the compression type (GL_COMPRESSED_RGBA_S3TC_DXT5_EXT) and a pointer to the image data buffer.
So, to make it short: is there a way to hand OpenGL the address of a texture buffer (in DDS format) that is already in GPU memory? Without this option, I would need to transfer the AES-decrypted file back to the CPU and have OpenGL upload it to the GPU all over again.
Many thanks for any help or short examples ;)
You need to do two things.
First, you must ensure synchronization and visibility for the operation that generates this data. If you're using a compute shader to generate the data into an SSBO, buffer texture, or whatever, then you'll need to use glMemoryBarrier, with the GL_PIXEL_BUFFER_BARRIER_BIT set. If you're generating this data via a rendering operation to a buffer texture, then you won't need an explicit barrier. But if the FS is writing to an SSBO or via image load/store, you'll still need the explicit barrier as described above.
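As a minimal sketch of the compute-shader case (the buffer name, binding point, and dispatch sizes here are placeholders, not from the question):

```cpp
// `ssbo` holds the decrypted, still-DXT5-compressed image data
// written by the compute shader.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
glDispatchCompute(numGroupsX, 1, 1);

// Make the shader's writes visible to subsequent pixel transfer
// operations (the upcoming glCompressedTexImage2D call).
glMemoryBarrier(GL_PIXEL_BUFFER_BARRIER_BIT);
```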
If you're using OpenCL, then you'll have to employ OpenCL's OpenGL interop functionality to make the result of the CL operation visible to GL.
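Since the question mentions CUDA specifically, the equivalent route there is the CUDA graphics interop API: register the GL buffer with CUDA, map it, decrypt into it, then unmap it before GL touches it. A hedged sketch, where `launchDecryptKernel` stands in for whatever host wrapper launches your AES kernel:

```cpp
#include <cuda_gl_interop.h>

// `pbo` is an existing GL buffer object (glGenBuffers/glBufferData).
cudaGraphicsResource* res = nullptr;
cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsRegisterFlagsWriteDiscard);

// While mapped, the buffer belongs to CUDA; GL must not use it.
cudaGraphicsMapResources(1, &res);
void*  devPtr = nullptr;
size_t size   = 0;
cudaGraphicsResourceGetMappedPointer(&devPtr, &size, res);

launchDecryptKernel(static_cast<unsigned char*>(devPtr), size);

// Unmapping synchronizes: after this, GL may read the buffer.
cudaGraphicsUnmapResources(1, &res);
```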
Once that's done, you just use the buffer as a pixel unpack buffer, just as you would for any asynchronous pixel transfer operation. Compressed textures work with GL_PIXEL_UNPACK_BUFFER just like uncompressed ones.
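Concretely (texture name, dimensions, and offset are placeholders): while a GL_PIXEL_UNPACK_BUFFER is bound, the final pointer argument of glCompressedTexImage2D is interpreted as a byte offset into that buffer, not a CPU address.

```cpp
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buffer);
glBindTexture(GL_TEXTURE_2D, tex);

// DXT5 uses 16 bytes per 4x4 block.
GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 16;

glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                       width, height, 0,
                       imageSize,
                       reinterpret_cast<const void*>(offset)); // offset into buffer

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```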
Remember: in OpenGL, all buffer objects are the same. OpenGL doesn't care if you use a buffer as an SSBO one minute, then use it for pixel transfers the next. As long as you synchronize it, everything is fine.