Right now I have two cameras and have finished the stereo calibration step to obtain the intrinsic and extrinsic parameters. I extracted a few feature points in the left camera image and am wondering whether there is any way to map them onto the right image. Can anyone help, please?
I tried to compute them from the rotation and translation matrices of each camera, like this.
With the solvePnP function in OpenCV, I got R1, T1 for the left camera and R2, T2 for the right camera.
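Roughly, that step looks like the following sketch (the variable names such as object_points, img_points_left, K_left, dist_left are placeholders for my calibration data):

```python
import cv2

# Assumed inputs (placeholders): object_points are the 3D chessboard corners
# in the world frame, img_points_left/right their detected 2D locations, and
# K_* / dist_* the intrinsics and distortion coefficients from calibration.
_, rvec1, T1 = cv2.solvePnP(object_points, img_points_left, K_left, dist_left)
_, rvec2, T2 = cv2.solvePnP(object_points, img_points_right, K_right, dist_right)

# solvePnP returns rotation vectors; convert them to 3x3 matrices R1, R2.
R1, _ = cv2.Rodrigues(rvec1)
R2, _ = cv2.Rodrigues(rvec2)
```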
For the same world point [X,Y,Z], I can get
[x1,y1,z1] = R1[X,Y,Z]+T1
and
[x2,y2,z2] = R2[X,Y,Z]+T2
which are the point's coordinates in each camera's own coordinate system.
When I try to map [x1,y1,z1] to [x2,y2,z2] by
R2*inv(R1)*[x1,y1,z1] + (T2 - R2*inv(R1)*T1) = [x2,y2,z2]
I got this result.
The image on the left is the result of mapping the chessboard corners from the left image with the method above; the image on the right shows the corners of the right image computed by findChessboardCorners:
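For completeness, this is roughly the mapping I am computing, as a sketch. It assumes the 3D coordinates in the left camera frame are already known (for the chessboard corners they come from the equations above) and uses placeholder names for the right camera's intrinsics:

```python
import numpy as np
import cv2

def map_left_cam_to_right_image(pts_cam1, R1, T1, R2, T2, K_right, dist_right):
    # Relative pose between the two cameras, as in the formula above:
    # R = R2 * inv(R1), T = T2 - R * T1 (R1, R2 are rotation matrices,
    # so inv(R1) is just R1 transposed).
    R = R2 @ R1.T
    T = T2 - R @ T1
    # Transform points (N, 3) from the left camera frame to the right camera frame.
    pts_cam2 = (R @ pts_cam1.T + T.reshape(3, 1)).T
    # Project into the right image; identity pose because the points are
    # already expressed in the right camera's coordinate system.
    px, _ = cv2.projectPoints(pts_cam2, np.zeros(3), np.zeros(3),
                              K_right, dist_right)
    return px.reshape(-1, 2)
```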
Rotation and translation matrices map 3D coordinates in the world coordinate system to each camera's local 3D coordinates. In order to find the 2D-to-2D transformation, I suggest you use a homography:
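A minimal sketch of what I mean, assuming pts_left and pts_right are matching 2D points in the two images (for example the detected chessboard corners) and feature_pts_left are the left-image points you want to transfer:

```python
import numpy as np
import cv2

# pts_left / pts_right: (N, 2) float32 arrays of matching points in the
# left and right images (e.g. from cv2.findChessboardCorners).
H, mask = cv2.findHomography(pts_left, pts_right, cv2.RANSAC, 5.0)

# perspectiveTransform expects shape (N, 1, 2); map the points and flatten back.
src = np.float32(feature_pts_left).reshape(-1, 1, 2)
mapped_pts_right = cv2.perspectiveTransform(src, H).reshape(-1, 2)
```

Keep in mind that a homography relates the two views exactly only for points lying on a common plane (such as the chessboard); for points off that plane it is only an approximation.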