
How does OpenCV projectPoints perform transformations before projecting?


I have two 3D points that I am trying to project onto a 2D image plane using cv::projectPoints(). The points are not originally in the camera's frame of reference, so I have to transform them first. I am testing the validity of the method's transformations.

First I manually apply a translation to my points, as well as a -90 degree rotation about the x axis via the rotation matrix matr.

import numpy as np
import math
import cv2

# center of projection (camera position in the world frame)
cop = np.array([-14.45194, 34.59882, 19.11343])

# rotation matrix for the -90 degree rotation about the x axis
matr = np.array([[1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0],
                 [0.0, -1.0, 0.0]])

# get the equivalent rotation vector
r_vec = cv2.Rodrigues(matr)[0]
print([round(math.degrees(i), 2) for i in r_vec.ravel()])

# init arrays
coords = np.array([[4.27874, 115.15968, 18.1621], [27.52924, 113.3441, 17.70207]])
transformed_coords = np.zeros(coords.shape)

# transform coords
for b, _ in enumerate(coords):

    arr = np.zeros(3)  # float array; np.array([0, 0, 0]) would truncate the result to ints

    # translate
    for r in range(3):
        arr[r] = coords[b][r] - cop[r]

    # rotate
    transformed_coords[b] = np.dot(matr, arr)
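
For reference, the loop above is equivalent to a one-line vectorized form, which also makes the algebra explicit: each point is mapped as matr @ (X - cop) = matr @ X - matr @ cop.

# vectorized equivalent of the loop above (a sketch, not part of the original code)
transformed_coords = (coords - cop) @ matr.T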

Next I pass the transformed coords into projectPoints() and compare the resulting 2D points with the points I get by passing the transformation into the method itself.

points_2d = cv2.projectPoints(transformed_coords, np.zeros(3), np.zeros(3), cam_matrix, distortion)[0]
print("Manual Transformation Projection: ")
print(points_2d)

points_2d = cv2.projectPoints(coords, r_vec, cop, cam_matrix, distortion)[0]
print("\nOpenCV Transformation Projection: ")
print(points_2d)

Output:

[-90.0, 0.0, 0.0] # matr rotation 

Manual Transformation Projection: 
[[[596.41419111 538.38054858]]

 [[159.74685131 557.65317027]]]

OpenCV Transformation Projection: 
[[[1101.1539809  -274.07081182]]

 [[ 738.45477039 -281.42273082]]]

Why are they different?

By the way, here's the camera matrix and distortion if you want to recreate it:

cam_matrix = np.array([[1561.9015217711233, 0, 944.3790845611046], [0, 1557.8348925840205, 538.3374859400157], [0, 0, 1]])
distortion = np.array([-0.2136432557736835, 0.20055112514542725, 0.00054631323043295, -0.00067835485282051, -0.07781645541334031])

Solution

  • To clarify a few things, here are the issues you should investigate:

    If you have a world frame, and both your camera and object are defined in the world frame, then you have T_world_cam and T_world_obj.

    For projectPoints(), you need T_cam_obj, or camTobj (math notation): the pose of the object expressed in the camera frame.

    This transformation is obtained by composing the object's pose with the inverse of the camera's pose.

    The equations:

    T_cam_obj = T_cam_world @ T_world_obj
    T_cam_obj = inv(T_world_cam) @ T_world_obj # np.linalg.inv
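
    Since these are rigid transforms, the inverse also has a cheap closed form that avoids a general matrix inversion. A minimal sketch (not part of the original answer), assuming T is a 4x4 [R|t] matrix:

    def invert_rigid(T):
        "Invert a 4x4 rigid transform [R|t]; the inverse is [R.T | -R.T @ t]"
        Tinv = np.eye(4)
        Tinv[:3, :3] = T[:3, :3].T             # a rotation inverts by transpose
        Tinv[:3, 3] = -T[:3, :3].T @ T[:3, 3]  # translation moves into the rotated frame
        return Tinv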
    

    Write utility functions that convert between rvec,tvec (for OpenCV) and 4x4 matrix representation (for calculating without going insane):

    import numpy as np
    import cv2

    def rtvec_to_matrix(rvec=(0.0, 0.0, 0.0), tvec=(0.0, 0.0, 0.0)):
        "Convert a rotation vector and translation vector to a 4x4 matrix"
        rvec = np.asarray(rvec)
        tvec = np.asarray(tvec)

        T = np.eye(4)
        R, jac = cv2.Rodrigues(rvec)
        T[:3, :3] = R
        T[:3, 3] = tvec.squeeze()
        return T

    def matrix_to_rtvec(matrix):
        "Convert a 4x4 matrix to a rotation vector and translation vector"
        rvec, jac = cv2.Rodrigues(matrix[:3, :3])
        tvec = matrix[:3, 3]
        return rvec, tvec
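
    Putting it together with the question's data (a sketch, assuming cop and matr define the camera pose as X_cam = matr @ (X_world - cop)): projectPoints() computes X_cam = R @ X + t, so the tvec it expects is -matr @ cop, not cop itself. That is exactly why the two projections in the question differ.

    # recover the rvec/tvec that projectPoints() expects from the question's pose:
    # X_cam = matr @ (X_world - cop) = matr @ X_world - matr @ cop,
    # i.e. T_cam_world = [matr | -matr @ cop]
    T_cam_world = np.eye(4)
    T_cam_world[:3, :3] = matr
    T_cam_world[:3, 3] = -matr @ cop

    rvec, tvec = matrix_to_rtvec(T_cam_world)
    points_2d = cv2.projectPoints(coords, rvec, tvec, cam_matrix, distortion)[0]
    # points_2d should now match the "manual transformation" projection above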