opencv, image-processing, computer-vision, projective-geometry

How to "translate" the movement of the camera to the image?


I'm doing some work with a camera and video stabilization with OpenCV.

Let's suppose I know exactly (in meters) how much my camera has moved from one frame to another, and I want to use this to move the second frame back to where it should be.

I'm sure I have to do some math with this number before I build the translation matrix, but I'm a little lost there... Any help?

Thanks.

EDIT: OK, I'll try to explain it better. I want to remove the camera movement (shaking) from a video, and I know how much the camera has moved (and in which direction) from one frame to the next. So what I want to do is move the second frame back to where it should be, using that information. I have to build a translation matrix for each pair of frames and apply it to the second frame. But here is my doubt: the information I have is in meters and describes the movement of the camera, while now I'm working with an image and pixels, so I think I have to do some operations for the translation to be correct, but I'm not sure what they are exactly.
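For what it's worth, here is a minimal sketch of the meters-to-pixels conversion under a pinhole camera model. All the numeric values (focal length in pixels, scene depth, translation) are made-up assumptions for illustration, and the conversion only holds for scene points near the assumed depth `Z_m`:

```python
import numpy as np

# Assumed pinhole-camera parameters (illustrative values, not from the question).
f_px = 800.0               # focal length expressed in pixels (f_mm / pixel_size_mm)
Z_m = 2.0                  # assumed depth of the scene, in meters
tx_m, ty_m = 0.01, -0.005  # measured camera translation between frames, in meters

# For a scene point at depth Z, a sideways camera translation t maps to an
# image shift of roughly f * t / Z pixels (small-motion approximation).
dx_px = f_px * tx_m / Z_m
dy_px = f_px * ty_m / Z_m

# 2x3 affine matrix that shifts the second frame back by (-dx, -dy);
# it could be applied with cv2.warpAffine(frame, M, (width, height)).
M = np.float32([[1, 0, -dx_px],
                [0, 1, -dy_px]])
print(M)
```

Note the depth `Z_m` in the denominator: without knowing how far the scene is, the same camera motion in meters corresponds to very different pixel shifts, which is exactly the issue the answer below raises.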


Solution

  • Knowing how much the camera has moved is not enough to create a synthesized frame. For that you'll also need a 3D model of the world, which I assume you don't have.

    To see why, assume the camera movement is pure translation and you are looking at two objects: one very far away (a few kilometers) and one very close (a few centimeters). The far object will hardly move in the new frame, while the close one can move dramatically or even leave the field of view of the second frame. To compensate, you need to know how much the viewing angle has changed for each point, and for that you need the 3D model.

    Having sensor information may help in the case of rotation, but it is not as useful for translations.
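A quick numeric check of the far/near argument above, using hypothetical values: the same physical camera translation produces wildly different pixel shifts depending on the depth of the object.

```python
# Pixel shift for a camera translation t at scene depth Z: shift ≈ f * t / Z.
# The values below are illustrative assumptions, not from the answer.
f_px = 800.0   # focal length in pixels
tx_m = 0.05    # 5 cm sideways camera translation

depths_m = (2000.0, 2.0, 0.1)  # far, mid-range, and near objects
shifts = [f_px * tx_m / Z_m for Z_m in depths_m]

for Z_m, shift_px in zip(depths_m, shifts):
    print(f"depth {Z_m:7.1f} m -> shift {shift_px:8.2f} px")
```

The far object barely moves (a fraction of a pixel) while the near one jumps hundreds of pixels, so no single translation matrix can stabilize both at once.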
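The rotation case is more forgiving because a pure rotation induces an image warp that does not depend on scene depth: it is the homography H = K·R·K⁻¹, where K is the camera intrinsic matrix and R the rotation between frames. A sketch with assumed intrinsics and a small yaw angle (all values hypothetical):

```python
import numpy as np

# Assumed intrinsic matrix (illustrative focal length and principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

theta = 0.01  # small yaw (rotation about the camera's y axis), in radians
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])

# Depth-independent warp induced by the rotation; to undo rotational shake
# one could warp the second frame with the inverse homography, e.g. via
# cv2.warpPerspective(frame, np.linalg.inv(H), (width, height)).
H = K @ R @ np.linalg.inv(K)

# Where does the image center go? Roughly f * tan(theta) pixels sideways,
# regardless of how far away the scene is.
p = H @ np.array([320.0, 240.0, 1.0])
p /= p[2]
print(p[:2])
```

This is why gyroscope data is genuinely useful for stabilization: the rotational part of the shake can be undone without any 3D model, leaving only the (usually smaller) translational parallax uncorrected.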