opengl, glsl, shader, gpu, distortion

Distortion correction with a GPU shader: bug


So I have a camera with a wide-angle lens. I know the distortion coefficients, the focal length, and the optical center. I want to undistort the image I get from this camera. My first try used OpenCV (cv::undistort), which worked well but was way too slow.

Now I want to do this on the GPU. A shader doing exactly this is documented at http://willsteptoe.com/post/67401705548/ar-rift-aligning-tracking-and-video-spaces-part-5

The formulas can be seen here: http://en.wikipedia.org/wiki/Distortion_%28optics%29#Software_correction

So I went and implemented my own version as a GLSL shader. I am sending a quad with texture coordinates between 0 and 1 at the corners. I assume the texture coordinates that arrive are coordinates in the undistorted image. I calculate the coordinates of the corresponding point in the distorted image, then sample the distorted image texture there.

With this shader, nothing in the final image changes. The problem I identified through a CPU implementation is that the coefficient term is very close to zero; the numbers get smaller and smaller through squaring the radius, etc. So I have a scaling problem, but I can't figure out what to do differently. I tried everything... I guess it is something quite obvious, since this kind of process seems to work for a lot of people.
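The vanishing coefficient term can be reproduced on the CPU. A minimal Python sketch (calibration values copied from the shader below; the example UV coordinate is my own choice) shows how mixing texture units and pixel units shrinks the term to nearly zero:

```python
# CPU reproduction of the shader's math (calibration values copied from
# the shader; the UV coordinate is an arbitrary example near a corner).
focal = (438.568, 437.699)      # in pixels
center_px = (667.724, 500.059)  # in pixels
size = (1280.0, 960.0)
k1, k2 = -0.035109, -0.002393

uv = (0.9, 0.9)
center_uv = (center_px[0] / size[0], center_px[1] / size[1])
# Texture-space offsets (order 1e-1) divided by a pixel-unit focal
# length (order 1e2) -- the mixed spaces make this vanishingly small:
lens = ((uv[0] - center_uv[0]) / focal[0],
        (uv[1] - center_uv[1]) / focal[1])
r2 = lens[0] ** 2 + lens[1] ** 2
term = k1 * r2 + k2 * r2 * r2
print(term)  # on the order of 1e-8, so the sampled position barely moves
```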

I left out the tangential distortion correction for simplicity.

#version 330 core

in vec2 UV;

out vec4 color;

uniform sampler2D textureSampler;

void main()
{   
    vec2 focalLength = vec2(438.568f, 437.699f);
    vec2 opticalCenter = vec2(667.724f, 500.059f);
    vec4 distortionCoefficients = vec4(-0.035109f, -0.002393f, 0.000335f, -0.000449f);

    const vec2 imageSize = vec2(1280.f, 960.f);

    vec2 opticalCenterUV = opticalCenter / imageSize;

    vec2 shiftedUVCoordinates = (UV - opticalCenterUV);

    vec2 lensCoordinates = shiftedUVCoordinates / focalLength;

    float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
    float radiusQuadrupled = radiusSquared * radiusSquared;

    float coefficientTerm = distortionCoefficients.x * radiusSquared + distortionCoefficients.y * radiusQuadrupled;

    vec2 distortedUV = ((lensCoordinates + lensCoordinates * (coefficientTerm))) * focalLength;

    vec2 resultUV = (distortedUV + opticalCenterUV);

    color = texture2D(textureSampler, resultUV);
}

Solution

  • I see two issues with your solution. The main issue is that you mix two different spaces. You seem to work in [0,1] texture space by converting the optical center to that space, but you did not adjust focalLength. The key point is that for such a distortion model, the focal length is given in pixels. However, in texture space a pixel is no longer 1 base unit wide, but 1/width and 1/height units, respectively.

    You could add vec2 focalLengthUV = focalLength / imageSize, but you will see that the two divisions cancel each other out when you calculate lensCoordinates. It is much more convenient to convert the texture-space UV coordinates to pixel coordinates and work in that space directly:

    vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;
    

    (and change the calculations for distortedUV and resultUV accordingly).
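For reference, here is a CPU sketch in Python of the whole mapping done consistently in pixel space as suggested above (the function name is mine; the calibration values and example coordinate are taken from the question):

```python
# CPU sketch of the corrected mapping: everything stays in pixel units
# until the final conversion back to texture space.
def undistort_uv(uv, image_size, focal, center, k1, k2):
    """Map an undistorted texture coordinate to the distorted one."""
    # texture space -> pixel space
    px = (uv[0] * image_size[0], uv[1] * image_size[1])
    # pixel space -> normalized lens coordinates
    lens = ((px[0] - center[0]) / focal[0],
            (px[1] - center[1]) / focal[1])
    r2 = lens[0] ** 2 + lens[1] ** 2      # squared radius, no sqrt
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    # back through pixel space to texture space
    dist_px = (lens[0] * scale * focal[0] + center[0],
               lens[1] * scale * focal[1] + center[1])
    return (dist_px[0] / image_size[0], dist_px[1] / image_size[1])

result = undistort_uv((0.9, 0.9), (1280.0, 960.0),
                      (438.568, 437.699), (667.724, 500.059),
                      -0.035109, -0.002393)
print(result)  # clearly shifted away from (0.9, 0.9), toward the center
```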

    There is still one issue with the approach I have sketched so far: the conventions of that pixel space. In GL, the origin is the lower-left corner, while in most pixel spaces the origin is at the top left, so you might have to flip the y coordinate during the conversion.

    Another question is where exactly pixel centers are located. So far, the code assumes that pixel centers are at integer + 0.5: the texture coordinate (0,0) is not the center of the lower-left pixel, but its corner point. The parameters you use for the distortion might (I don't know OpenCV's conventions) assume pixel centers at integer coordinates, in which case, instead of the conversion pixelSpace = uv * imageSize, you would need to offset by half a pixel: pixelSpace = uv * imageSize - vec2(0.5).
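A small Python sketch of that conversion (the helper name and flags are illustrative only, not from any library):

```python
# Sketch of a texture-space -> top-left-origin pixel-space conversion,
# with optional y flip and half-pixel offset as discussed above.
def uv_to_pixel(uv, image_size, flip_y=True, half_pixel_offset=True):
    u, v = uv
    if flip_y:
        v = 1.0 - v                      # GL origin bottom-left -> top-left
    px = [u * image_size[0], v * image_size[1]]
    if half_pixel_offset:
        px[0] -= 0.5                     # pixel centers at integers,
        px[1] -= 0.5                     # not at integer + 0.5
    return (px[0], px[1])

# The GL sample point at the center of the top-left pixel,
# (0.5/w, 1 - 0.5/h), should land on pixel coordinate (0, 0):
corner = uv_to_pixel((0.5 / 1280.0, 1.0 - 0.5 / 960.0), (1280.0, 960.0))
print(corner)
```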

    The second issue I see is

    float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
    

    That sqrt is not correct here, as dot(a,a) already gives the squared length of vector a.
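A quick Python check of the difference (the example vector is arbitrary):

```python
# dot(a, a) is already the squared length; sqrt of it is the plain length.
import math

lens = (0.3, 0.4)
d = lens[0] * lens[0] + lens[1] * lens[1]
print(d)             # about 0.25: this already is the squared length
print(math.sqrt(d))  # about 0.5: the plain length, not its square
```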