python opencv video video-capture opencv-stitching

Combine two overlapping videos frame by frame to form a single frame


I am getting video input from 2 separate cameras with some area of overlap between the output videos. I have tried out code which combines the video output horizontally. Here is the link for that code:

https://github.com/rajatsaxena/NeuroscienceLab/blob/master/positiontracking/combinevid.py
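For context, a minimal sketch of that kind of naive side-by-side combine (not the exact linked script; the camera indices are placeholders) looks like this:

```python
# Naive horizontal combine of two camera streams, with no overlap handling.
# Camera indices 0 and 1 are placeholders; adjust for your setup.
import cv2

cap1 = cv2.VideoCapture(0)
cap2 = cv2.VideoCapture(1)

while True:
    ok1, frame1 = cap1.read()
    ok2, frame2 = cap2.read()
    if not (ok1 and ok2):
        break
    # hconcat needs equal heights, so match them before concatenating
    h = min(frame1.shape[0], frame2.shape[0])
    frame1 = cv2.resize(frame1, (frame1.shape[1], h))
    frame2 = cv2.resize(frame2, (frame2.shape[1], h))
    combined = cv2.hconcat([frame1, frame2])
    cv2.imshow("combined", combined)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap1.release()
cap2.release()
cv2.destroyAllWindows()
```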

To explain the problem visually:

[Image: Overlap]

The red part shows the overlap region between the two image frames. I need the output to look like the second image, with the first frame in blue and the second frame in green (as shown in the third illustration).

A solution I can think of but am unable to implement is: using SIFT/SURF, find the maximum-distance keypoints in both frames, then take the first video frame completely, pick only the non-overlapping region from the second video frame, and horizontally combine them to get the stitched output.
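Roughly, what I have in mind is something like the sketch below, assuming both cameras are level so the offset is mostly horizontal (cv2.SIFT_create needs a recent opencv-python build, and stitch_pair is just an illustrative name):

```python
# Sketch of the SIFT idea: estimate the horizontal shift from keypoint
# matches, keep the whole left frame, and append only the non-overlapping
# part of the right frame.
import cv2
import numpy as np

def stitch_pair(left, right):
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(cv2.cvtColor(left, cv2.COLOR_BGR2GRAY), None)
    kp_r, des_r = sift.detectAndCompute(cv2.cvtColor(right, cv2.COLOR_BGR2GRAY), None)

    # Match descriptors and keep the good matches (Lowe's ratio test)
    matches = cv2.BFMatcher().knnMatch(des_l, des_r, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # For each match, x_left - x_right is the horizontal displacement of the
    # right frame; take the median for robustness against bad matches.
    shift = np.median([kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0]
                       for m in good])
    overlap = int(round(left.shape[1] - shift))  # overlap width in pixels

    # Keep the whole left frame and only the non-overlapping right part
    return cv2.hconcat([left, right[:, overlap:]])
```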

Let me know of any other solutions possible as well. Thanks!


Solution

  • I read this post an hour ago and tried a really simple approach. It is not perfect, but it should work well in some cases, for example if you have both cameras placed side by side on one rig.

    [Image: Overlap regions]

    I took two color images with my phone, as in the picture. The program selects rectangular regions from both source images, resizes them, and extracts these ROI rectangles. The idea is to find the "best" overlapping Rect regions by normalized correlation.

    M1 and M2 are the Mat ROIs to compare: matchTemplate(M1, M2, res, TM_CCOEFF_NORMED);

    After finding this best overlapping Rect, I use it to crop the source images and combine them with the hconcat() function (see the Python sketch at the end of this answer).

    My code is in C++, but it is really simple to replicate in Python. It is not the best solution, but it is one of the simplest. If your cameras are fixed in a stable position relative to each other, I think this is a good solution. (I held my phone in my hand. :)

    You can also use this simple approach on video. The speed depends only on the number of rectangle candidates you compare.

    You can improve this by being smarter about which regions you select for comparison.

    I am also thinking about another idea: use optical flow by putting the images taken by the cameras at the same time into a sequence, one behind the other. From the possible overlapping region of one image, extract good features to track and find them in the corresponding region of the second image.

    SURF and SIFT are great for this, but the above is the simplest idea that comes to my mind.

    The code is here: Code
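    A minimal Python sketch of the same normalized-correlation idea, simplified to a single probe strip instead of several candidate rectangles, and assuming both frames have the same height (probe_width and the function name are just illustrative):

```python
# Find the overlap between a left and right frame with normalized
# cross-correlation, then crop and concatenate them.
import cv2

def find_overlap_and_stitch(left, right, probe_width=60):
    # Take a vertical strip from the left edge of the right image and search
    # for it inside the left image.
    template = right[:, :probe_width]
    res = cv2.matchTemplate(left, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)

    # max_loc[0] is the column of the left image where the right image
    # starts, so everything from that column onward is overlap.
    overlap = left.shape[1] - max_loc[0]
    return cv2.hconcat([left, right[:, overlap:]]), max_val
```

    max_val is close to 1.0 when the overlap is found reliably; if it is low, the frames probably do not overlap within the searched strip.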