I would like to use OpenGL (version 1.5) to render images to memory without displaying them on screen (I could then e.g. save them as image files or render them as ASCII in the terminal). I don't want any window/display I/O. I've found similar questions on SO, but none that address the specific further requirements that follow.
Now, I think I could use a library like GLX and tell it not to open any window, but I also don't want my code to depend on any windowing-system library like X11: my program simply doesn't do anything with windows or user I/O, so I don't see why it should be burdened by a dependency on the X Window System (some systems simply don't have X; they may have no graphical interface at all). My program should only depend on an OpenGL driver.
I understand that for this I need to create an OpenGL context, which is not part of OpenGL itself and is platform-dependent, so I might need some library for creating a render-to-memory OpenGL context, ideally in a multiplatform way (i.e. abstracting away the platform-dependent parts). Does anything like this exist? (I am not interested in any proprietary, GPU-specific or driver-specific software; the program should run on any GPU that supports the given OpenGL version.) Is there something else I should consider?
Basically I want my program to be very minimal and not burdened by anything it doesn't need, given that all it needs is a generic OpenGL driver to render an image into memory, and it should work on any system that has such a driver.
Depending on the operating system you're using and the availability of drivers, you can do pure, headless, GPU-accelerated OpenGL rendering using EGL. Nvidia has a nice developer blog post about how to do it at https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/
The gist of it is to create an EGL context on a display device without associating it with any windowing-system output. Source (adapted from the linked article):
#include <EGL/egl.h>

static const EGLint configAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

static const int pbufferWidth  = …;   /* pick your render size */
static const int pbufferHeight = …;

static const EGLint pbattr[] = {
    EGL_WIDTH, pbufferWidth,
    EGL_HEIGHT, pbufferHeight,
    EGL_NONE,
};

int main(int argc, char *argv[])
{
    /* 1. Initialize EGL on the default display */
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);

    EGLint major, minor;
    eglInitialize(eglDpy, &major, &minor);

    /* 2. Select an appropriate configuration */
    EGLint numConfigs;
    EGLConfig eglCfg;
    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);

    /* 3. Create an off-screen pbuffer surface */
    EGLSurface eglSurf = eglCreatePbufferSurface(eglDpy, eglCfg, pbattr);

    /* 4. Bind the OpenGL (not OpenGL ES) API */
    eglBindAPI(EGL_OPENGL_API);

    /* 5. Create a context and make it current */
    EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(eglDpy, eglSurf, eglSurf, eglCtx);

    /* 6. From here on, use OpenGL as usual */
    do_opengl_stuff();

    /* 7. Terminate EGL when finished */
    eglTerminate(eglDpy);
    return 0;
}
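For the "into memory" part, the body of do_opengl_stuff() can read the finished frame back with glReadPixels (core since OpenGL 1.0, so fine with 1.5). A minimal, hypothetical sketch, assuming the pbuffer context created above is current and pbufferWidth/pbufferHeight are the sizes you chose:

#include <GL/gl.h>
#include <stdlib.h>

static void do_opengl_stuff(void)
{
    glViewport(0, 0, pbufferWidth, pbufferHeight);
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);   /* solid red, just to have something visible */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* ... your actual OpenGL 1.5 drawing commands go here ... */

    /* Read the rendered image back into plain client memory (RGBA, 4 bytes per pixel). */
    unsigned char *pixels = malloc((size_t)pbufferWidth * pbufferHeight * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, pbufferWidth, pbufferHeight,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* pixels now holds the image (bottom row first); save it, ASCII-dump it, etc. */
    free(pixels);
}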
If you don't have access to EGL, but your OS and your GPU are supported by Linux DRM/DRI, you could go the KMS/GBM route and work with framebuffer objects obtained through the extension mechanism (with Mesa you can just use them as if they weren't extensions, even with OpenGL 1.x). The kmscube demo has a "surfaceless" mode, which demonstrates doing exactly that; a sketch of the framebuffer-object part follows below.
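To illustrate just the framebuffer-object part (the KMS/GBM context setup itself is considerably more code; see kmscube for that), here is a hedged sketch of creating an off-screen render target via the EXT_framebuffer_object entry points. It assumes a GL context is already current and that your headers expose the EXT prototypes (e.g. Mesa built with -DGL_GLEXT_PROTOTYPES); otherwise load the functions through eglGetProcAddress or the platform equivalent:

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* Hypothetical helper: create a w x h RGBA8 + depth FBO to render into. */
static GLuint make_offscreen_fbo(int w, int h)
{
    GLuint fbo, color, depth;

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

    /* Color renderbuffer */
    glGenRenderbuffersEXT(1, &color);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, w, h);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                 GL_RENDERBUFFER_EXT, color);

    /* Depth renderbuffer */
    glGenRenderbuffersEXT(1, &depth);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depth);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0;   /* caller should treat this as "FBOs not usable here" */

    return fbo;     /* while bound, all rendering goes into this FBO */
}

While the FBO is bound, drawing lands in its renderbuffers and you can read the result back with glReadPixels exactly as in the pbuffer case.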
In short: EGL is the "clean" way to do it; KMS is the "hacky" way to do it.
Another option, probably completely outside of your scope right now, would be to use Vulkan, where, strictly speaking, headless rendering is the "default", and the methods for getting anything on screen are actual extensions to the specification:
- VK_KHR_wayland_surface
- VK_KHR_xcb_surface
- VK_KHR_xlib_surface
- VK_KHR_win32_surface
- VK_EXT_metal_surface