Tags: opencv, camera-calibration, homogenous-transformation

How OpenCV estimates focal length using object points and image points


I'm trying to display a 3D chessboard image (involving rotation and translation), like MATLAB's camera calibration toolbox, using OpenCV and OpenGL.

To do this, I'm studying camera calibration, homogeneous coordinates, etc. My question is about the function calibrateCamera(): I wonder how OpenCV calculates (estimates) the focal length using only the chessboard corners' object points (vec3) and image points (vec2).
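
For context, this is roughly how I call it (a minimal sketch; the board dimensions, square size, and image file names are just placeholders):

```python
import numpy as np
import cv2

# Placeholder inputs: a 9x6 inner-corner chessboard with 25 mm squares,
# and a list of calibration image files (names are hypothetical).
pattern_size = (9, 6)
square_size = 25.0
image_paths = ["calib_01.png", "calib_02.png"]

# Object points: the same planar grid (Z = 0) for every view, in board units.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

object_points = []  # 3D points, one array per view
image_points = []   # 2D detected corners, one array per view
image_size = None

for path in image_paths:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]  # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        object_points.append(objp)
        image_points.append(corners)

# calibrateCamera estimates the camera matrix (including focal lengths fx, fy),
# the distortion coefficients, and one rotation/translation pair per view.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)

print("fx =", camera_matrix[0, 0], "fy =", camera_matrix[1, 1])
```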

Please point me to any relevant equation or principle.

Sorry for my bad English. Thank you.


Solution

  • The set of equations you are looking for is called the collinearity equations. These "relate coordinates in a sensor plane (in two dimensions) to object coordinates (in three dimensions). The equations originate from the central projection of a point of the object through the optical centre of the camera to the image on the sensor plane." (Wikipedia)

    The exact form of these equations used in OpenCV can be found here. Different models exist (e.g. the pinhole model, the fisheye model, or models with only radial distortion) for different types of cameras; the pinhole case is sketched below.
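
As a sketch of the pinhole case, ignoring lens distortion, the projection of a 3D point $(X, Y, Z)$ to an image point $(u, v)$ reads:

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  =
  \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{\text{camera matrix } A}
  \begin{bmatrix} R \mid t \end{bmatrix}
  \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
```

Each detected corner contributes one such equation pair. calibrateCamera finds the focal lengths $f_x, f_y$, the principal point $(c_x, c_y)$, the distortion coefficients, and a rotation/translation per view that minimize the reprojection error over all corners; for a planar chessboard it follows Zhang's method, computing a closed-form initial estimate of the intrinsics from the board homographies and then refining everything with Levenberg-Marquardt optimization.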