Tags: computer-vision, camera-calibration, robotics, calibration

Targetless non-overlapping stereo camera calibration


Overlapping targetless stereo camera calibration can be done with feature matchers in OpenCV: match keypoints between the two views, estimate the fundamental/essential matrix with the 8-point or 5-point algorithm, and then decompose it to recover the rotation and translation between the cameras.
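To make the overlapping pipeline concrete, here is a minimal NumPy sketch of the 8-point algorithm on synthetic, noise-free correspondences in normalized (calibrated) coordinates, followed by the standard SVD decomposition of the essential matrix into the four (R, t) candidates. A real pipeline would use detected feature matches and pick the candidate via the cheirality check; the ground-truth comparison at the end is only for the synthetic setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth relative pose (cam1 -> cam2): x2 = R @ x1 + t
angle = 0.3
R_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                   [0, 1, 0],
                   [-np.sin(angle), 0, np.cos(angle)]])
t_true = np.array([1.0, 0.2, 0.1])
t_true /= np.linalg.norm(t_true)  # translation is only recoverable up to scale

# Synthetic 3D points in front of both cameras
X = rng.uniform([-2, -2, 4], [2, 2, 8], size=(20, 3))

# Normalized image coordinates (calibrated cameras, K = I)
x1 = X / X[:, 2:3]
X2 = (R_true @ X.T).T + t_true
x2 = X2 / X2[:, 2:3]

# 8-point algorithm: each correspondence gives x2^T E x1 = 0,
# a linear equation in the 9 entries of E (row-major via the Kronecker product)
A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
_, _, Vt = np.linalg.svd(A)
E = Vt[-1].reshape(3, 3)

# Enforce the essential-matrix constraint: singular values (1, 1, 0)
U, _, Vt = np.linalg.svd(E)
E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Decompose into the four (R, t) candidates (Hartley & Zisserman, ch. 9)
U, _, Vt = np.linalg.svd(E)
if np.linalg.det(U) < 0:
    U = -U
if np.linalg.det(Vt) < 0:
    Vt = -Vt
W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
candidates = [(U @ W @ Vt, U[:, 2]), (U @ W.T @ Vt, U[:, 2]),
              (U @ W @ Vt, -U[:, 2]), (U @ W.T @ Vt, -U[:, 2])]

# Pick the candidate closest to ground truth; a real pipeline would instead
# triangulate points and keep the candidate with positive depths (cheirality)
R_est, t_est = min(candidates,
                   key=lambda c: np.linalg.norm(c[0] - R_true)
                               + np.linalg.norm(c[1] - t_true))
```

In practice you would obtain `x1`/`x2` from matched features (e.g. ORB + a ratio test) and use `cv2.findEssentialMat` with RANSAC plus `cv2.recoverPose`, which wrap exactly these steps with outlier rejection.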

How to approach a non-overlapping stereo setup without a target?

Can we use visual odometry (e.g. ORB-SLAM) to compute the trajectory of each camera (the cameras are rigidly fixed to each other) and then use hand-eye calibration to get the extrinsics? If yes, how do the transformations of each trajectory map to the gripper->base and target->camera transformations in the hand-eye formulation? Or is there another way to apply this algorithm?

If hand-eye calibration cannot be used, are there any recommendations for achieving targetless non-overlapping stereo camera calibration?


Solution

  • Hand-eye calibration is sufficient for your case. Get the trajectory of each camera by running ORB-SLAM, compute the relative poses along each trajectory, and solve for the extrinsics (e.g. with an SVD-based solver). You may need to read a few papers to see how to implement this.

    This is sometimes called motion-based calibration.
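The mapping to the hand-eye problem works as follows: if A_i are relative motions along camera 1's trajectory, B_i the corresponding relative motions of camera 2, and X the unknown cam2->cam1 extrinsic, rigidity gives A_i X = X B_i, which is exactly the AX = XB hand-eye equation (A_i playing the role of the gripper motions, B_i the camera motions). Below is a minimal NumPy sketch on synthetic motions: the rotation is solved by aligning the rotation axes of paired motions via orthogonal Procrustes, then the translation follows from the linear system (R_A - I) t_X = R_X t_B - t_A. This is a simplified stand-in for a real solver (e.g. Tsai-Lenz via `cv2.calibrateHandEye`), not the exact method the answer had in mind.

```python
import numpy as np

rng = np.random.default_rng(1)

def rot(axis, angle):
    """Rodrigues' formula: rotation matrix from axis-angle."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def rot_axis(R):
    """Rotation axis from the skew-symmetric part (valid for angle in (0, pi))."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

# Ground-truth extrinsic X (cam2 -> cam1), unknown to the solver
R_X = rot(np.array([1.0, 2.0, 0.5]), 0.7)
t_X = np.array([0.3, -0.1, 0.5])

# Simulated relative motions along camera 2's trajectory; the rigid link
# forces the corresponding camera-1 motions to satisfy A = X B X^-1
motions = []
for _ in range(6):
    R_B = rot(rng.normal(size=3), rng.uniform(0.2, 1.0))
    t_B = rng.normal(size=3)
    R_A = R_X @ R_B @ R_X.T
    t_A = R_X @ t_B + t_X - R_A @ t_X
    motions.append((R_A, t_A, R_B, t_B))

# Rotation: axes satisfy axis(A_i) = R_X axis(B_i); solve by Procrustes
M = sum(np.outer(rot_axis(R_A), rot_axis(R_B)) for R_A, _, R_B, _ in motions)
U, _, Vt = np.linalg.svd(M)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
R_est = U @ D @ Vt

# Translation: stack (R_A - I) t_X = R_est t_B - t_A and solve least squares
C = np.vstack([R_A - np.eye(3) for R_A, _, _, _ in motions])
d = np.concatenate([R_est @ t_B - t_A for R_A, t_A, _, t_B in motions])
t_est, *_ = np.linalg.lstsq(C, d, rcond=None)
```

One caveat worth noting: monocular ORB-SLAM trajectories are only defined up to an unknown scale per camera, so with real data the translation equations must also solve for the scale factors (or the trajectories must come from a metric source such as stereo or VIO).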