c++ opengl virtual-reality openvr osvr

How do I use OSVR in C++?


I want to use the OSVR plugin, but I don't know exactly how it works.

In OpenVR I have a framebuffer for each eye; when I write into those buffers, the result shows up in the headset (I'm using an HTC Vive).
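
Roughly, the OpenVR flow I mean looks like this (simplified; leftEyeTexId and rightEyeTexId are placeholders for the per-eye GL textures I create and render into myself):

// Per frame: render each eye into its own offscreen texture, then hand
// both textures to the compositor. (leftEyeTexId/rightEyeTexId are my own
// GL texture names, not part of the OpenVR API.)
vr::Texture_t leftTex = {(void*)(uintptr_t)leftEyeTexId,
                         vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
vr::VRCompositor()->Submit(vr::Eye_Left, &leftTex);

vr::Texture_t rightTex = {(void*)(uintptr_t)rightEyeTexId,
                          vr::TextureType_OpenGL, vr::ColorSpace_Gamma};
vr::VRCompositor()->Submit(vr::Eye_Right, &rightTex);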

But now I don't know where those buffers are or how to change what each eye sees. I installed the OSVR server and the OSVR Vive plugin correctly, but even this simple example doesn't work: I don't see anything in the HMD.

#include <osvr/ClientKit/ClientKit.h>
#include <osvr/ClientKit/Display.h>
#include "SDL2Helpers.h"
#include "OpenGLCube.h"

#include <SDL.h>
#include <SDL_opengl.h>

#include <iostream>

static auto const WIDTH = 1920;
static auto const HEIGHT = 1080;

// Forward declarations of rendering functions defined below.
void render(osvr::clientkit::DisplayConfig &disp);
void renderScene();

int main(int argc, char *argv[]) {
    namespace SDL = osvr::SDL2;

    // Open SDL
    SDL::Lib lib;

    // Use OpenGL 2.1
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);

    // Create a window
    auto window = SDL::createWindow("OSVR", SDL_WINDOWPOS_UNDEFINED,
                                    SDL_WINDOWPOS_UNDEFINED, WIDTH, HEIGHT,
                                    SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
    if (!window) {
        std::cerr << "Could not create window: " << SDL_GetError() << std::endl;
        return -1;
    }

    // Create an OpenGL context and make it current.
    SDL::GLContext glctx(window.get());

    // Turn on V-SYNC
    SDL_GL_SetSwapInterval(1);

    // Start OSVR and get OSVR display config
    osvr::clientkit::ClientContext ctx("com.osvr.example.SDLOpenGL");
    osvr::clientkit::DisplayConfig display(ctx);
    if (!display.valid()) {
        std::cerr << "\nCould not get display config (server probably not "
                     "running or not behaving), exiting."
                  << std::endl;
        return -1;
    }

    std::cout << "Waiting for the display to fully start up, including "
                 "receiving initial pose update..."
              << std::endl;
    while (!display.checkStartup()) {
        ctx.update();
    }
    std::cout << "OK, display startup status is good!" << std::endl;

    // Event handler
    SDL_Event e;
#ifndef __ANDROID__ // Don't want to pop up the on-screen keyboard
    SDL::TextInput textinput;
#endif
    bool quit = false;
    while (!quit) {
        // Handle all queued events
        while (SDL_PollEvent(&e)) {
            switch (e.type) {
            case SDL_QUIT:
                // Handle some system-wide quit event
                quit = true;
                break;
            case SDL_KEYDOWN:
                if (SDL_SCANCODE_ESCAPE == e.key.keysym.scancode) {
                    // Handle pressing ESC
                    quit = true;
                }
                break;
            }
        }

        // Update OSVR
        ctx.update();

        // Render
        render(display);

        // Swap buffers
        SDL_GL_SwapWindow(window.get());
    }

    return 0;
}

/// @brief A simple dummy "draw" function. Note that drawing occurs in "room
/// space" by default (that is, in this example, the modelview matrix when this
/// function is called is initialized such that it transforms from world space
/// to view space).
void renderScene() { draw_cube(1.0); }

/// @brief The "wrapper" for rendering to a device described by OSVR.
///
/// This function will set up viewport, initialize view and projection matrices
/// to current values, then call `renderScene()` as needed (e.g. once for each
/// eye, for a simple HMD.)
void render(osvr::clientkit::DisplayConfig &disp) {

    // Clear the screen to black and clear depth
    glClearColor(0, 0, 0, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /// For each viewer, eye combination...
    disp.forEachEye([](osvr::clientkit::Eye eye) {

        /// Try retrieving the view matrix (based on eye pose) from OSVR
        double viewMat[OSVR_MATRIX_SIZE];
        eye.getViewMatrix(OSVR_MATRIX_COLMAJOR | OSVR_MATRIX_COLVECTORS,
                          viewMat);
        /// Initialize the ModelView transform with the view matrix we
        /// received
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glMultMatrixd(viewMat);

        /// For each display surface seen by the given eye of the given
        /// viewer...
        eye.forEachSurface([](osvr::clientkit::Surface surface) {
            auto viewport = surface.getRelativeViewport();
            glViewport(static_cast<GLint>(viewport.left),
                       static_cast<GLint>(viewport.bottom),
                       static_cast<GLsizei>(viewport.width),
                       static_cast<GLsizei>(viewport.height));

            /// Set the OpenGL projection matrix based on the one we
            /// computed.
            double zNear = 0.1;
            double zFar = 100;
            double projMat[OSVR_MATRIX_SIZE];
            surface.getProjectionMatrix(
                zNear, zFar, OSVR_MATRIX_COLMAJOR | OSVR_MATRIX_COLVECTORS |
                                 OSVR_MATRIX_SIGNEDZ | OSVR_MATRIX_RHINPUT,
                projMat);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glMultMatrixd(projMat);

            /// Set the matrix mode to ModelView, so render code doesn't
            /// mess with the projection matrix by accident.
            glMatrixMode(GL_MODELVIEW);

            /// Call out to render our scene.
            renderScene();
        });
    });

    /// Successfully completed a frame render.
} 

Does anybody know how this works?


Solution

  • Instead of the display config API, use osvrRenderManager to get render info and present frames to the HMD. The display config API is a lower-level API: it doesn't handle window placement for extended-mode rendering on the Vive, direct-mode rendering, or adjustments to the projection based on render-target scaling factors. The RenderManager API typically handles all of that; see the sketch after the link below.

    https://github.com/sensics/osvr-rendermanager
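
For illustration, here is a minimal sketch of the callback-based RenderManager flow, loosely following the OpenGL example in that repository. Treat the header paths, the callback signature, and the OSVR_Projection_to_OpenGL / OSVR_PoseState_to_OpenGL helpers as assumptions to verify against your installed headers; they can differ between versions.

#include <osvr/ClientKit/Context.h>
#include <osvr/RenderKit/RenderManager.h>
#include <osvr/RenderKit/RenderKitGraphicsTransforms.h>

#include <SDL.h>
#include <SDL_opengl.h> // GL 1.x headers, as in the question's sample

#include <iostream>

// Render callback: RenderManager invokes this once per eye with the
// viewport already set and the pose/projection for that eye resolved.
// (Signature paraphrased from the repository's OpenGL example.)
void drawWorld(void* /*userData*/,
               osvr::renderkit::GraphicsLibrary /*library*/,
               osvr::renderkit::RenderBuffer /*buffers*/,
               osvr::renderkit::OSVR_ViewportDescription /*viewport*/,
               OSVR_PoseState pose,
               osvr::renderkit::OSVR_ProjectionMatrix projection,
               OSVR_TimeValue /*deadline*/) {
    // Convert the OSVR projection and pose into fixed-function GL matrices.
    GLdouble projectionGL[16];
    osvr::renderkit::OSVR_Projection_to_OpenGL(projectionGL, projection);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMultMatrixd(projectionGL);

    GLdouble viewGL[16];
    osvr::renderkit::OSVR_PoseState_to_OpenGL(viewGL, pose);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glMultMatrixd(viewGL);

    // Draw the scene here, e.g. the question's draw_cube(1.0).
}

int main(int argc, char* argv[]) {
    osvr::clientkit::ClientContext context("com.example.RenderManagerSketch");

    // Ask for an OpenGL rendering path; RenderManager decides between
    // direct mode and extended mode and creates/places the window itself.
    osvr::renderkit::RenderManager* render =
        osvr::renderkit::createRenderManager(context.get(), "OpenGL");
    if ((render == nullptr) || !render->doingOkay()) {
        std::cerr << "Could not create RenderManager" << std::endl;
        return 1;
    }

    // Register the per-eye render callback for world space (root path).
    render->AddRenderCallback("/", drawWorld);

    // Open the display; this fails if the server is not running or the
    // display cannot be opened in the configured mode.
    osvr::renderkit::RenderManager::OpenResults ret = render->OpenDisplay();
    if (ret.status == osvr::renderkit::RenderManager::OpenStatus::FAILURE) {
        std::cerr << "Could not open display" << std::endl;
        return 2;
    }

    // Main loop: update the client context, then let RenderManager call
    // drawWorld once per eye and present the result to the HMD.
    bool quit = false;
    while (!quit) {
        context.update();
        if (!render->Render()) {
            std::cerr << "Render() failed, exiting" << std::endl;
            quit = true;
        }
    }

    delete render;
    return 0;
}

With this flow you never manage per-eye framebuffers yourself. If you want to own the textures (closer to the OpenVR model you describe), look at the repository's present-buffer examples, which use GetRenderInfo(), RegisterRenderBuffers(), and PresentRenderBuffers() instead of a render callback.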