Tags: python, opencv

Object points and Image points in OpenCV calibrateCamera


I would like some clarification on the parameters for OpenCV's calibrateCamera function. The function is cv.CalibrateCamera2(objectPoints, imagePoints, pointCounts, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, flags=0)

As I understand it, the 'imagePoints' are the corners detected in images of the planar calibration pattern. But I don't understand what role the objectPoints play in helping us recover the cameraMatrix, or how their values should be set.


Solution

  • In summary: the objectPoints are the known 3D coordinates of the calibration pattern's features, expressed in the pattern's own coordinate system (for a chessboard, the corner positions laid out on a regular grid, with Z = 0 since the pattern is planar), while the imagePoints are the 2D pixel locations where those same corners were detected in each image. calibrateCamera recovers the cameraMatrix by finding the projection that best maps the objectPoints onto the imagePoints across all views.

    Also, it is worth noting that "Currently, initialization of intrinsic parameters (when CV_CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also be used as long as initial cameraMatrix is provided." So if you are not providing camera focal length (fx, fy) and image center (cx, cy) intrinsic parameters, you have to use a planar (Z=0) calibration pattern.

    Looking at the objectPoints definition in detail
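
    As a rough sketch of how the two point sets are typically prepared (using the modern cv2 Python API; the 9x6 pattern size, 25 mm square size, and image file names below are illustrative assumptions, not part of the original question): objectPoints is simply the same Z = 0 grid of corner coordinates in the pattern's own coordinate frame, appended once per view, while imagePoints holds the corners detected in each image.

        import numpy as np
        import cv2

        pattern_size = (9, 6)   # inner corners per row and column (assumed)
        square_size = 25.0      # square edge length in mm (assumed)

        # objectPoints: the known 3D corner coordinates in the pattern's own frame.
        # For a planar target every Z-coordinate is zero, which is what allows
        # calibrateCamera to initialise the intrinsics without an initial guess.
        objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
        objp *= square_size

        object_points = []      # the same objp, appended once per successfully detected view
        image_points = []       # the detected 2D corners for each view

        for fname in ("view1.png", "view2.png"):   # hypothetical calibration images
            gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
            found, corners = cv2.findChessboardCorners(gray, pattern_size)
            if found:
                object_points.append(objp)
                image_points.append(corners)

        # No initial cameraMatrix is needed here because all object points have Z = 0;
        # for a 3D rig you would pass an initial guess plus cv2.CALIB_USE_INTRINSIC_GUESS.
        rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
            object_points, image_points, gray.shape[::-1], None, None)

    Note that the choice of square_size does not change the recovered cameraMatrix; it only fixes the scale of the translation vectors (tvecs).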
