opencv, camera-calibration, fisheye

The functions distortPoints and undistortPoints are not inverses of each other


I am trying to understand camera calibration/3D reconstruction and I am seeing strange behavior from the cv::fisheye::distortPoints/undistortPoints functions. I would expect the fisheye model to move a point along the ray connecting it to the principal point (cx, cy); however, it doesn't. Moreover, cv::fisheye::distortPoints and cv::fisheye::undistortPoints are not inverses of each other (as one would expect).
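For context, cv::fisheye implements the equidistant projection model, and that model operates on normalized coordinates (pixels already mapped through K⁻¹), not on raw pixel coordinates. A minimal plain-C++ sketch of that model (my own illustration, not OpenCV's implementation) shows that distortion does move a point along its ray, but the ray passes through the origin of the normalized plane, which corresponds to the principal point only after K is applied:

```cpp
#include <cassert>
#include <cmath>

// Equidistant fisheye model as used by cv::fisheye (a plain-C++ sketch,
// not OpenCV code). Operates on NORMALIZED coordinates, i.e. pixel
// coordinates already mapped through K^-1.
struct Vec2 { double x, y; };

// theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)
static double distort_theta(double theta, const double k[4]) {
    double t2 = theta * theta;
    return theta * (1 + t2 * (k[0] + t2 * (k[1] + t2 * (k[2] + t2 * k[3]))));
}

// Distortion only rescales the radius r = |p|, so p and its distorted
// image lie on the same ray through the origin of the normalized plane.
Vec2 fisheye_distort(Vec2 p, const double k[4]) {
    double r = std::sqrt(p.x * p.x + p.y * p.y);
    if (r < 1e-12) return p;  // the principal point is a fixed point
    double scale = distort_theta(std::atan(r), k) / r;
    return {p.x * scale, p.y * scale};
}

// Inverse: solve distort_theta(theta) = theta_d by Newton iteration,
// then rescale. Only meaningful while theta stays below pi/2.
Vec2 fisheye_undistort(Vec2 p, const double k[4]) {
    double rd = std::sqrt(p.x * p.x + p.y * p.y);
    if (rd < 1e-12) return p;
    double theta = rd;  // initial guess: small coefficients => theta ~ theta_d
    for (int i = 0; i < 20; ++i) {
        double t2 = theta * theta;
        double f  = distort_theta(theta, k) - rd;
        double df = 1 + t2 * (3*k[0] + t2 * (5*k[1] + t2 * (7*k[2] + t2 * 9*k[3])));
        theta -= f / df;
    }
    double scale = std::tan(theta) / rd;
    return {p.x * scale, p.y * scale};
}
```

With the coefficients from the question, distorting and then undistorting a normalized point round-trips to machine precision, so the model itself is self-inverse in the right coordinate system; the trouble in the code below comes from feeding pixel coordinates where normalized ones are expected.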

The following code creates a camera matrix with distortion coefficients, undistorts an arbitrary point, and then distorts it back. The values for the camera intrinsics and distortion coefficients were taken from a public dataset.

cv::Mat camera_matrix = cv::Mat::zeros(3,3,CV_64F);
camera_matrix.at<double>(0,0) = 190.9784;
camera_matrix.at<double>(1,1) = 190.9733;
camera_matrix.at<double>(0,2) = 254.9317;
camera_matrix.at<double>(1,2) = 256.8974;
camera_matrix.at<double>(2,2) = 1;

std::cout << "Camera matrix: \n" << camera_matrix << "\n" << std::endl;

cv::Mat distortion_coefficients(4,1,CV_64F);
distortion_coefficients.at<double>(0) = 0.003482;
distortion_coefficients.at<double>(1) = 0.000715;
distortion_coefficients.at<double>(2) = -0.0020532;
distortion_coefficients.at<double>(3) = 0.000203;

std::cout << "Distortion coefficients\n" << distortion_coefficients << "\n" << std::endl;

cv::Mat original_point(1,1,CV_64FC2);
original_point.at<cv::Point2d>(0).x= 7.7;
original_point.at<cv::Point2d>(0).y= 9.9;
cv::Mat undistorted, distorted;
cv::fisheye::undistortPoints(original_point, undistorted, camera_matrix, 
            distortion_coefficients, cv::Mat(), camera_matrix);
cv::fisheye::distortPoints(undistorted, distorted, camera_matrix, distortion_coefficients);

std::cout << "Original point: " << original_point.at<cv::Point2d>(0).x << " " << original_point.at<cv::Point2d>(0).y << std::endl;
std::cout << "Undistorted point: " << undistorted.at<cv::Point2d>(0).x << " " << undistorted.at<cv::Point2d>(0).y << std::endl;
std::cout << "Distorted point: " << distorted.at<cv::Point2d>(0).x << " " << distorted.at<cv::Point2d>(0).y << std::endl;

The result of this is

Camera matrix: 
[190.9784, 0, 254.9317;
 0, 190.9733, 256.8974;
 0, 0, 1]

Distortion coefficients
[0.003482;
 0.000715;
 -0.0020532;
 0.000203]

Original point: 7.7 9.9
Undistorted point: 8905.69 8899.45
Distorted point: 464.919 466.732

A point near the top-left corner ends up far toward the bottom right.

Is this a bug, or am I misunderstanding something?

cv::fisheye::undistortImage works on the dataset images: the curves are straightened back into lines.

What am I missing?


Solution

  • You are missing two things.

    1. You are using incorrect parameters for the fisheye::distortPoints() function. It expects normalized points. From the docs (though this is not stated very clearly):

    Note that the function assumes the camera intrinsic matrix of the undistorted points to be identity. This means if you want to transform back points undistorted with undistortPoints() you have to multiply them with P⁻¹.

    2. You need to realize that not all points in the distorted image end up in the undistorted image. The extrapolation only works to a degree, and beyond a certain point undistortion and redistortion won't be each other's inverses.
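    To see where that limit lies here, note that in the equidistant model the distorted normalized radius is exactly θ_d, the distorted angle of the incoming ray. A plain-C++ check (my own sketch using the intrinsics from the question, not OpenCV code) shows that the pixel (7.7, 9.9) corresponds to θ_d > π/2, i.e. a ray behind the image plane, while (50, 50) stays just below π/2, matching which points survive the round trip in the results further down:

```cpp
#include <cassert>
#include <cmath>

// Distorted ray angle theta_d for a pixel, given zero-skew intrinsics.
// In the equidistant fisheye model the distorted NORMALIZED radius is
// exactly theta_d, and with coefficients this tiny the true ray angle
// theta ~ theta_d. theta >= pi/2 means the ray points behind the image
// plane, so tan(theta) -- the undistorted radius -- is meaningless there.
// (A sketch based on the question's numbers, not OpenCV's implementation.)
double distorted_angle(double u, double v,
                       double fx, double fy, double cx, double cy) {
    double x = (u - cx) / fx;  // normalized distorted coordinates
    double y = (v - cy) / fy;
    return std::sqrt(x * x + y * y);
}
```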

    In order to normalize the points, you first homogenize them (convert to 3D) and then multiply each homogeneous point by the inverse camera matrix to obtain normalized points.

    You can use fisheye::estimateNewCameraMatrixForUndistortRectify to compute a new camera matrix that adjusts the balance between valid pixels in the source and destination. But you then need to pass this new camera matrix as Knew to undistortImage if you want the undistorted points to match those in the undistorted image.

    cv::Mat k = cv::Mat::zeros(3,3,CV_64F);
    k.at<double>(0,0) = 190.9784;
    k.at<double>(1,1) = 190.9733;
    k.at<double>(0,2) = 254.9317;
    k.at<double>(1,2) = 256.8974;
    k.at<double>(2,2) = 1;
    
    std::cout << "Camera matrix: \n" << k << "\n" << std::endl;
    
    cv::Mat d(4,1,CV_64F);
    d.at<double>(0) = 0.003482;
    d.at<double>(1) = 0.000715;
    d.at<double>(2) = -0.0020532;
    d.at<double>(3) = 0.000203;
    
    std::cout << "Distortion coefficients\n" << d << "\n" << std::endl;
    
    
    cv::Mat points_original(1,4,CV_64FC2);
    points_original.at<cv::Point2d>(0).x= 7.7;
    points_original.at<cv::Point2d>(0).y= 9.9;
    points_original.at<cv::Point2d>(1).x= 30;
    points_original.at<cv::Point2d>(1).y= 30;
    points_original.at<cv::Point2d>(2).x= 40;
    points_original.at<cv::Point2d>(2).y= 40;
    points_original.at<cv::Point2d>(3).x= 50;
    points_original.at<cv::Point2d>(3).y= 50;
    
    cv::Mat nk;
    
    // double balance = 1.0;
    // cv::fisheye::estimateNewCameraMatrixForUndistortRectify(k, d, cv::Size(512,512), cv::Mat::eye(3,3,CV_64FC1), nk, balance);
    
    nk = k;
    
    std::cout << "New Camera matrix: \n" << nk << "\n" <<std::endl;
    
    cv::Mat points_undistorted, points_redistorted;
    cv::fisheye::undistortPoints(points_original,points_undistorted,k,d,cv::Mat(),nk);
    
    // {x,y} -> {x,y,1}
    std::vector<cv::Point3d> points_undistorted_homogeneous;
    cv::convertPointsToHomogeneous(points_undistorted, points_undistorted_homogeneous);
    
    cv::Mat cam_intr_inv = nk.inv();
    
    for (size_t i = 0; i < points_undistorted_homogeneous.size(); ++i) {
        cv::Mat p(cv::Size(1,3), CV_64FC1);
        p.at<double>(0,0) = points_undistorted_homogeneous[i].x;
        p.at<double>(1,0) = points_undistorted_homogeneous[i].y;
        p.at<double>(2,0) = points_undistorted_homogeneous[i].z;
    
        cv::Mat q = cam_intr_inv * p;
    
        points_undistorted_homogeneous[i].x = q.at<double>(0,0);
        points_undistorted_homogeneous[i].y = q.at<double>(1,0);
        points_undistorted_homogeneous[i].z = q.at<double>(2,0);
    }
    
    std::vector<cv::Point2d> points_undistorted_normalized;
    cv::convertPointsFromHomogeneous(points_undistorted_homogeneous, points_undistorted_normalized);
    
    cv::fisheye::distortPoints(points_undistorted_normalized, points_redistorted, k, d);
    
    for (int i = 0; i < points_original.size().width; ++i) {
        std::cout << "Original point: " << points_original.at<cv::Point2d>(i) << "\n";
        std::cout << "Undistorted point: " << points_undistorted.at<cv::Point2d>(i) << "\n";
        std::cout << "Redistorted point: " << points_redistorted.at<cv::Point2d>(i) << "\n\n";
    }
    

    Result:

    Original point: [7.7, 9.9]
    Undistorted point: [8905.69, 8899.45]
    Redistorted point: [463.048, 464.816]
    
    Original point: [30, 30]
    Undistorted point: [8125.4, 8196.15]
    Redistorted point: [461.864, 465.638]
    
    Original point: [40, 40]
    Undistorted point: [7775.49, 7846.24]
    Redistorted point: [461.725, 465.582]
    
    Original point: [50, 50]
    Undistorted point: [-3848.98, -3886.37]
    Redistorted point: [50, 50]
    

    As you can see, the points within roughly the first 40 pixels of the corner fail to round-trip; only [50, 50] survives. You may be able to improve this by calibrating again with more chessboard corners near the edges of the images (at various angles).
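    As a final note, the homogenize-and-multiply-by-K⁻¹ normalization used above collapses, for a zero-skew pinhole matrix K, to a simple per-axis formula. A plain-C++ sketch of that conversion (my own helper names, not OpenCV API):

```cpp
#include <cassert>
#include <cmath>

// Pixel <-> normalized coordinates for a zero-skew pinhole matrix
//   K = [fx 0 cx; 0 fy cy; 0 0 1].
// Multiplying the homogeneous pixel (u, v, 1) by K^-1 reduces to
// (u - cx)/fx and (v - cy)/fy; this is the form of input that
// cv::fisheye::distortPoints() expects. (Illustrative sketch only.)
struct Pt { double x, y; };

Pt pixel_to_normalized(Pt p, double fx, double fy, double cx, double cy) {
    return {(p.x - cx) / fx, (p.y - cy) / fy};
}

Pt normalized_to_pixel(Pt p, double fx, double fy, double cx, double cy) {
    return {p.x * fx + cx, p.y * fy + cy};
}
```

    The principal point maps to the origin of the normalized plane, and the two helpers are exact inverses of each other.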