Tags: c++, opencv, camera-calibration, homogenous-transformation, pose-estimation

Get 3D coordinates from a 2D image pixel when the extrinsic and intrinsic parameters are known


I am doing camera calibration with Tsai's algorithm. I have the intrinsic and extrinsic matrices, but how can I reconstruct the 3D coordinates from that information?


I currently have two ways to find X, Y, Z:

  1. I can use Gaussian elimination to find X, Y, Z, W; the point is then (X/W, Y/W, Z/W), since the system is homogeneous.

  2. I can use the pinhole projection model from the OpenCV documentation:

        s * [u v 1]^T = K * [R | t] * [X Y Z 1]^T

    Since I know u, v, K, R, and t, I should be able to compute X, Y, Z (see the sketch below).
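For reference, here is a minimal sketch in OpenCV's C++ API of what the pinhole equations above actually determine from a single pixel. The function name and the depth parameter s are my own, hypothetical additions: without extra information, one pixel only fixes the 3D point up to an unknown depth along a ray.

    #include <opencv2/core.hpp>

    // Hypothetical helper: back-project pixel (u, v) into the world.
    // K: 3x3 intrinsics, R: 3x3 rotation, t: 3x1 translation, all CV_64F.
    // A single pixel constrains the point only up to the depth s, so s must
    // be supplied (or replaced by a plane constraint, as in the answer below).
    cv::Mat backprojectPixel(double u, double v,
                             const cv::Mat& K, const cv::Mat& R,
                             const cv::Mat& t, double s)
    {
        cv::Mat uv = (cv::Mat_<double>(3, 1) << u, v, 1.0);
        cv::Mat C = -(R.t() * t);            // camera centre in world coordinates
        cv::Mat d = R.t() * K.inv() * uv;    // ray direction in world coordinates
        return C + s * d;                    // world point at depth s along the ray
    }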

However, both methods give different results, and neither is correct.

What am I doing wrong?


Solution

  • If you have the extrinsic parameters then you have everything. That means you can obtain a homography from the extrinsics (also called the camera pose). The pose is a 3x4 matrix; the homography is a 3x3 matrix, H, defined as

    H = K*[r1, r2, t],       //eqn 8.1, Hartley and Zisserman
    

    with K being the camera intrinsic matrix, r1 and r2 the first two columns of the rotation matrix R, and t the translation vector.

    Then normalize by dividing everything by t3, the third element of the translation vector.

    What happens to column r3; don't we use it? No: it is redundant, being the cross product of the first two columns of the rotation. Moreover, the homography only maps points on the plane Z = 0, so the term that would multiply r3 is always zero.
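    A minimal sketch of this construction in OpenCV's C++ API (the function name is mine, not from the post; matrices are assumed CV_64F):

    #include <opencv2/core.hpp>

    // Sketch: H = K * [r1, r2, t], normalized by t3 (third element of t).
    // K is 3x3, R is 3x3, t is 3x1.
    cv::Mat homographyFromPose(const cv::Mat& K, const cv::Mat& R, const cv::Mat& t)
    {
        cv::Mat P(3, 3, CV_64F);
        R.col(0).copyTo(P.col(0));       // r1
        R.col(1).copyTo(P.col(1));       // r2
        t.copyTo(P.col(2));              // t
        cv::Mat H = K * P;
        return H / t.at<double>(2);      // normalize by t3
    }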

    Now that you have the homography, project the points. Your 2D points are (x, y). Append z = 1, so they become 3-vectors, and project them as follows:

    p          = [x y 1];
    projection = H * p;                        //project
    projnorm   = projection / projection(3);   //normalize by the 3rd component
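
    The same steps as a C++ usage sketch, building on the hypothetical homographyFromPose above (the calibration values are made-up placeholders). Note that H = K*[r1, r2, t] maps plane coordinates (Z = 0) to pixels, so for the pixel-to-plane direction the same recipe applies with H.inv():

    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        // Made-up calibration values; substitute your own results.
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800,   0, 320,
                                                 0, 800, 240,
                                                 0,   0,   1);
        cv::Mat R = cv::Mat::eye(3, 3, CV_64F);
        cv::Mat t = (cv::Mat_<double>(3, 1) << 0.0, 0.0, 5.0);

        cv::Mat H = homographyFromPose(K, R, t);   // sketch from above

        cv::Mat p = (cv::Mat_<double>(3, 1) << 100.0, 100.0, 1.0);
        cv::Mat projection = H * p;                                  // project
        cv::Mat projnorm = projection / projection.at<double>(2);    // normalize
        // For the reverse (pixel -> plane) direction, use H.inv() * p instead.
        std::cout << projnorm << std::endl;
        return 0;
    }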