c++ · opengl · camera · camera-matrix

OpenGL - Trouble with first-person camera matrix


I have tried my best to create a camera that mimics a first-person camera. I have just switched away from the old OpenGL way of rendering and am now ready to tackle a camera matrix. Here is the code for my camera update:

void Camera::update(float dt)
{
    // Get the distance the camera has moved
    float distance = dt * walkSpeed;

    // Get the current mouse position
    mousePos = mouse->getPosition();

    // Translate the change to yaw and pitch
    angleYaw -= ((float)mousePos.x - 400.0f) * lookSpeed / 40;
    anglePitch -= ((float)mousePos.y - 300.0f) * lookSpeed / 40;

    // Clamp the camera to a max/min viewing pitch
    if (anglePitch > 90.0f)
        anglePitch = 90.0f;

    if (anglePitch < -90.0f)
        anglePitch = -90.0f;

    // Reset the mouse position
    mouse->setPosition(mouseReset);

    // Check for movement events
    sf::Event event;
    while (window->pollEvent(event))
    {
        // Calculate the x, y and z values of any movement
        if (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::W)
        {
            position.x -= (float)sin(angleYaw * M_PI / 180) * distance * 25;
            position.z += (float)cos(angleYaw * M_PI / 180) * distance * 25;
            position.y += (float)sin(anglePitch * M_PI / 180) * distance * 25;
            angleYaw = 10.0;
        }
        if (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::S)
        {
            position.x += (float)sin(angleYaw * M_PI / 180) * distance * 25;
            position.z -= (float)cos(angleYaw * M_PI / 180) * distance * 25;
            position.y -= (float)sin(anglePitch * M_PI / 180) * distance * 25;
        }
        if (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::R)
        {
            position.x += (float)cos(angleYaw * M_PI / 180) * distance * 25;
            position.z += (float)sin(angleYaw * M_PI / 180) * distance * 25;
        }
        if (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::A)
        {
            position.x -= (float)cos(angleYaw * M_PI / 180) * distance * 25;
            position.z -= (float)sin(angleYaw * M_PI / 180) * distance * 25;
        }
    }

    // Update our camera matrix
    camMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(-position.x, -position.z, -position.y));
    camMatrix = glm::rotate(camMatrix, angleYaw, glm::vec3(0, 1, 0));
    camMatrix = glm::rotate(camMatrix, anglePitch, glm::vec3(1, 0, 0));
}

The last three lines are what I assumed would apply the opposite of the camera's translation and rotation (y and z are swapped for the coordinate format I am working with). Did I do them in the wrong order?
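In other words, the camera matrix should be the inverse of the camera's own world transform. Here is a minimal sketch of that equivalence with GLM, assuming the same position and angle fields as above (the angle signs depend on your yaw/pitch convention):

    // Build the camera's world transform, then invert it to get the view matrix
    glm::mat4 camWorld = glm::translate(glm::mat4(1.0f),
                                        glm::vec3(position.x, position.z, position.y));
    camWorld = glm::rotate(camWorld, -angleYaw,   glm::vec3(0, 1, 0));
    camWorld = glm::rotate(camWorld, -anglePitch, glm::vec3(1, 0, 0));
    glm::mat4 view = glm::inverse(camWorld); // equals pitch/yaw rotations times the negated translation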

Here are my very simple shaders. First the vertex shader:

#version 120

attribute vec4 position;
uniform mat4 camera;

void main()
{
    gl_Position = position * camera;
}

And the fragment shader:

#version 120
void main(void)
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

This simply draws a red triangle. The camera sort of rotates around the triangle, which is not what I want; I want the view itself to rotate, as if I were turning the camera. I thought multiplying each vertex by the camera matrix would give the rendering in camera space. Or do I need to multiply by a projection matrix as well?
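For reference, the usual pipeline multiplies a projection matrix in front of the camera/view matrix before uploading it. A minimal sketch with GLM, assuming an 800x600 window (the field of view and clip planes are illustrative guesses, and older GLM versions took this angle in degrees):

    // Combine a perspective projection with the view matrix before upload
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), 800.0f / 600.0f, 0.1f, 100.0f);
    glm::mat4 viewProjection = projection * camMatrix; // upload this as the "camera" uniform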

Pressing W, A, S, or D zooms in really close all at once and distorts the whole view, with red fragments everywhere.


Solution

  • Write your matrix operations in reverse order of how they should apply to the vertices: glm::rotate and glm::translate post-multiply onto the matrix you pass in, so the transform written last is the one applied to the vertices first. If you want to translate (to the camera position) and then rotate, write it in this order:

    // Update our camera matrix
    camMatrix = glm::rotate(glm::mat4(1.0f), anglePitch, glm::vec3(1, 0, 0));
    camMatrix = glm::rotate(camMatrix, angleYaw, glm::vec3(0, 1, 0));
    camMatrix = glm::translate(camMatrix, glm::vec3(-position.x, -position.z, -position.y));
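
  • Two caveats worth checking alongside this: depending on your GLM version, glm::rotate expects the angle in radians rather than degrees (recent versions do), so you may need glm::radians(anglePitch) and glm::radians(angleYaw) above. And in GLSL, position * camera multiplies position as a row vector, which is equivalent to multiplying by the transpose of camera; the conventional column-vector form in the vertex shader is:

    gl_Position = camera * position;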