Tags: python, opencv, calibration

How to get the 3D position from a 2D image with OpenCV


I'm trying to get the 3D coordinates and orientation of an object from its projection onto an image, using a single camera with OpenCV. From what I have read, the steps are: calibrate the camera to obtain the rotation and translation matrices, as well as the intrinsic matrix K. I have found many examples using a chessboard --> https://littlecodes.wordpress.com/2013/06/24/calibracion-de-camaras-y-procesamiento-de-imagenes-ii/. But my question is: once these parameters have been obtained, how do I get the position of any other object? I have not found any complete examples, only mathematical explanations that I don't quite understand. I'd like something like this, but using those matrices and recovering the orientation with reasonable accuracy.

I already have the segmented image, and I can locate the corner points.

Something like that is what I'd like to get: Video

Regards, and thanks.


Solution

  • You can use cv2.solvePnPRansac to get the rotation and translation vectors, and then use cv2.projectPoints to project the 3D points onto the image plane. There's a complete tutorial on this that you can find here. (Internet Archive)