I have a stereo camera system composed of two cameras placed perpendicular to each other but pointing at a common center. I have successfully calibrated each camera individually, and have also stereo calibrated the system with OpenCV using a ChArUco board whose unit squares are 13 cm x 13 cm. I got a stereo calibration error of around 1.39 pixels. I never specified the size of the unit squares anywhere.
I also used OpenCV to track an object of interest in footage captured with those same two cameras via synchronous image acquisition (tracking performed as a post-process), and I have successfully triangulated the tracked 2D points with OpenCV. In short: I undistorted the 2D point array from each camera's images, computed the projection matrices using the left camera as the origin of the system (so P1 = K1[I | 0]) and K2[R | t] as the projection matrix for camera 2, then converted the resulting homogeneous vector from the triangulation function to world coordinates and plotted the result (attached).
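In code form, the triangulation step looks roughly like this (a condensed sketch; K1, dist1, pts1, R, T, etc. are placeholders for my actual calibration outputs and tracked points):

    import cv2
    import numpy as np

    # Undistort the tracked pixel coordinates; passing P=K keeps the
    # results in pixel units so the K-scaled projection matrices apply.
    pts1_ud = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K1, dist1, P=K1).reshape(-1, 2).T
    pts2_ud = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K2, dist2, P=K2).reshape(-1, 2).T

    # Left camera as the world origin: P1 = K1 [I | 0], P2 = K2 [R | t].
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R, T])  # R, T from cv2.stereoCalibrate

    # triangulatePoints returns a 4xN array of homogeneous coordinates;
    # dividing by the fourth row gives Euclidean 3D points.
    points4d = cv2.triangulatePoints(P1, P2, pts1_ud, pts2_ud)
    points3d = (points4d[:3] / points4d[3]).T  # Nx3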
I know for certain the sphere fell at least 4 inches, but the z-distance in the attached plot is wrong: according to the plot, the sphere barely falls 0.2 units.
My question is: what am I looking at? What units are these coordinates in? How do I specify, or convert to, the units I want for the output vector?
TLDR: What units is the output 4D vector of cv2.triangulatePoints() expressed in (pixels, cm, etc.)? How do I convert the output to a desired unit?
[Figure: 3D Trajectory of the triangulated points]
I have searched the web, and all I have found is that the units can be specified through the translation vector t, but I am not sure how to do that.
Edit: As requested, here is a snippet of the code where a Calibration object is created. Since my unit squares are 13 cm x 13 cm, do I scale self.objp by 13?
import cv2
import numpy as np


class Calibration(object):
    def __init__(self, cam1_path, cam2_path):
        # Termination criteria for the sub-pixel corner refinement.
        self.criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-5)
        self.chessboard_size = (5, 7)  # internal corners (height, width)
        self.search_window = (20, 20)
        # One 3D object point per internal corner; z = 0 (planar board).
        self.objp = np.zeros(
            (self.chessboard_size[0] * self.chessboard_size[1], 3), np.float32
        )
        self.objp[:, :2] = np.mgrid[
            0 : self.chessboard_size[0], 0 : self.chessboard_size[1]
        ].T.reshape(-1, 2)
        # Arrays to store object points and image points from all the images.
        self.objpoints = []  # 3d points in real world space
        self.imgpoints_1 = []  # 2d points in image plane (camera 1)
        self.imagepoints_2 = []  # 2d points in image plane (camera 2)
        self.cal_path1, self.cal_path2 = cam1_path, cam2_path
        self.calibrate_cam(self.cal_path1, self.cal_path2)  # defined elsewhere in the class
The parameter that determines the units of your triangulation results is the camera extrinsic translation: your 3D points come out in the same units as that translation vector (meters are generally recommended). The extrinsic translation, in turn, has the same units as your calibration target's world points.
In your case, your world points are defined via np.mgrid[0 : self.chessboard_size[0], 0 : self.chessboard_size[1]].T.reshape(-1, 2), implying one unit per chessboard square. Since your squares are 13 cm, your triangulation results come out in units of 13 cm, so yes: scale self.objp by 13 to get results directly in centimeters.
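For example, the scaling could look something like this (a minimal sketch; square_size and chessboard_size are assumed names mirroring your class):

    import numpy as np

    square_size = 13.0  # physical edge length of one board square, in cm
    chessboard_size = (5, 7)  # internal corners (height, width)

    # Same grid as self.objp, but in physical units: corner (i, j) sits at
    # (i * 13 cm, j * 13 cm, 0), so calibration and triangulation work in cm.
    objp = np.zeros((chessboard_size[0] * chessboard_size[1], 3), np.float32)
    objp[:, :2] = (
        np.mgrid[0 : chessboard_size[0], 0 : chessboard_size[1]].T.reshape(-1, 2)
        * square_size
    )

Equivalently, since one grid unit currently equals one 13 cm square, you can simply multiply your existing triangulated coordinates by 13 to convert them to centimeters.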
I have written some articles on calibration, which you can find here: https://calib.io/blogs/knowledge-base