computer-vision, camera-calibration, extrinsic-parameters

camera extrinsic calibration


I have a fisheye camera, which I have already calibrated. I need to calculate the camera pose w.r.t. a checkerboard by using just a single image of said checkerboard, the intrinsic parameters, and the size of the squares of the checkerboard. Unfortunately, many calibration libraries first calculate the extrinsic parameters from a set of images and then the intrinsic parameters, which is essentially the "inverse" of the procedure I want. Of course I can just put my checkerboard image in the set of other images I used for the calibration and run the calibration procedure again, but it's very tedious, and moreover, I can't use a checkerboard of a different size than the ones used for the intrinsic calibration. Can anybody point me in the right direction?

EDIT: After reading Francesco's answer, I realized that I didn't explain what I mean by calibrating the camera. My problem begins with the fact that I don't have the classic intrinsic parameter matrix (so I can't actually use the method Francesco described). In fact, I calibrated the fisheye camera with Scaramuzza's procedure (https://sites.google.com/site/scarabotix/ocamcalib-toolbox), which basically finds a polynomial that maps 3D world points into pixel coordinates (or, alternatively, the polynomial that back-projects pixels onto the unit sphere). Now, I think this information is enough to find the camera pose w.r.t. a chessboard, but I'm not sure exactly how to proceed.


Solution

  • The solvePnP procedure calculates the extrinsic pose of the chessboard (CB) in camera coordinates. OpenCV added a fisheye module to its 3D reconstruction library to accommodate the significant distortion of cameras with a large field of view. Of course, if your intrinsic transformation is not a classical intrinsic matrix, you have to modify PnP:

    1. Undo whatever projection your camera model applies, i.e. back-project the pixels
    2. You now have a so-called normalized camera, in which the effect of the intrinsic transformation has been eliminated:

      k * [u, v, 1]ᵀ = [R|T] * [x, y, z, 1]ᵀ

    The way to solve this is to write the expression for k first:

    k=R20*x+R21*y+R22*z+Tz
    

    then use the above expression in

    k*u = R00*x+R01*y+R02*z+Tx
    k*v = R10*x+R11*y+R12*z+Ty
    

    You can rearrange the terms to get A*x = 0, subject to |x| = 1, where the unknown is

    x=[R00, R01, R02, Tx, R10, R11, R12, Ty, R20, R21, R22, Tz]T

    and A is composed of the known u, v, x, y, z - the normalized image coordinates and the CB corner coordinates;

    Then you solve for x = the last column of V, where A = U*L*Vᵀ is the SVD of A, and assemble the rotation and translation matrices from x. Then there are a few 'messy' steps that are actually very typical for this kind of processing:

    A. Ensure that you get a real rotation matrix - perform an orthogonal Procrustes step: R2 = U*Vᵀ, where R = U*L*Vᵀ is the SVD of the linear estimate of R;

    B. Calculate the scale factor scl = sum(R2(i,j)/R(i,j))/9;

    C. Update the translation vector, T2 = scl*T, and check that Tz > 0; if it is negative, negate both T2 and R2;
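The linear solve plus steps A-C can be sketched in NumPy. This is an illustrative sketch, not the answerer's code, and it makes two assumptions worth flagging: the object points must not all be coplanar (for a planar chessboard, where z = 0 for every corner, the 12-parameter system is rank-deficient, and you would drop the third column of R and recover it afterwards as the cross product of the first two), and the scale factor is taken from the mean singular value, which matches the element-wise average in step B but avoids dividing by near-zero entries:

```python
import numpy as np

def pose_dlt(obj_pts, norm_uv):
    """Linear pose estimate (DLT) from 3D points and normalized image
    coordinates, followed by the orthogonalization/scale/sign fixes A-C.
    obj_pts: (N, 3) object points (must NOT be coplanar in this form).
    norm_uv: (N, 2) normalized camera coordinates (u, v)."""
    rows = []
    for (x, y, z), (u, v) in zip(obj_pts, norm_uv):
        # k*u = R00*x + R01*y + R02*z + Tx, with k = R20*x + R21*y + R22*z + Tz
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z, -v])
    A = np.asarray(rows)
    # A*x = 0 with |x| = 1: right singular vector of the smallest singular value
    p = np.linalg.svd(A)[2][-1]
    P = p.reshape(3, 4)
    # Sign ambiguity (part of step C): the depths k must be positive
    if P[2] @ np.append(obj_pts[0], 1.0) < 0:
        P = -P
    R_lin, T_lin = P[:, :3], P[:, 3]
    # Step A: orthogonal Procrustes -> nearest true rotation matrix
    U, S, Vt = np.linalg.svd(R_lin)
    R2 = U @ Vt
    # Step B: the singular values of R_lin all estimate 1/scl, so average them
    scl = 1.0 / S.mean()
    # Step C: rescale the translation
    T2 = scl * T_lin
    return R2, T2
```

With exact synthetic correspondences this recovers R and T up to numerical precision; with real, noisy corner detections it only provides a starting point for the non-linear refinement.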

    Now R2 and T2 give you a good starting point for a non-linear optimization such as Levenberg-Marquardt. It is required because the previous linear step minimizes only an algebraic error in the parameters, while the non-linear step minimizes a correct metric, such as the squared reprojection error in pixels. However, if you don't want to follow all these steps, you can take advantage of OpenCV's fisheye module.
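That refinement can be illustrated with a small, self-contained Levenberg-Marquardt loop in NumPy. The parameterization (axis-angle rotation plus translation) and the finite-difference Jacobian are my own simplifications for the sketch; in practice you would hand the linear estimate to a library routine such as OpenCV's solvePnPRefineLM or a general least-squares optimizer:

```python
import numpy as np

def rodrigues(r):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    th = np.linalg.norm(r)
    if th < 1e-12:
        return np.eye(3)
    k = r / th
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def refine_pose(obj_pts, norm_uv, rvec, tvec, iters=50):
    """Levenberg-Marquardt refinement of the pose (rvec, tvec),
    minimizing squared reprojection error in normalized coordinates."""
    def residuals(p):
        cam = obj_pts @ rodrigues(p[:3]).T + p[3:]
        return (cam[:, :2] / cam[:, 2:3] - norm_uv).ravel()

    p = np.concatenate([rvec, tvec])
    r = residuals(p)
    lam = 1e-3                      # LM damping factor
    for _ in range(iters):
        # Finite-difference Jacobian of the residual vector
        J = np.empty((r.size, 6))
        for j in range(6):
            dp = np.zeros(6)
            dp[j] = 1e-6
            J[:, j] = (residuals(p + dp) - r) / 1e-6
        # Damped normal equations: (J'J + lam*I) step = -J'r
        step = np.linalg.solve(J.T @ J + lam * np.eye(6), -J.T @ r)
        r_new = residuals(p + step)
        if r_new @ r_new < r @ r:   # accept the step, relax damping
            p, r = p + step, r_new
            lam *= 0.5
        else:                       # reject the step, increase damping
            lam *= 10.0
    return p[:3], p[3:]
```

Note the characteristic LM behavior: when a step reduces the residual the damping shrinks (behaving like Gauss-Newton near the optimum), and when it does not, the damping grows (falling back toward gradient descent).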