Tags: opencv, homography, image-stitching, opencv-stitching

OpenCV findHomography and WarpPerspective not producing good results


I'm trying to use findHomography to find the warp matrix between two cameras, then stitch the images together using warpPerspective. However, the image to be warped overextends and flips to the other side of the screen. Below is some simplified code that shows the odd behavior:

vector<Point2f> obj, scene, objCorners, TransformedObjCorners;

scene.push_back(Point2f(324,21));
scene.push_back(Point2f(388,4));
scene.push_back(Point2f(392,110));
scene.push_back(Point2f(322,111));
obj.push_back(Point2f(21,18));
obj.push_back(Point2f(79,45));
obj.push_back(Point2f(76,128));
obj.push_back(Point2f(13,118));
objCorners.push_back(Point2f(0,0));
objCorners.push_back(Point2f(400,0));
objCorners.push_back(Point2f(400,300));
objCorners.push_back(Point2f(0,300));

cv::Mat H = findHomography(obj, scene);

perspectiveTransform(objCorners, TransformedObjCorners, H);
cout << "Transformed object corners are :" << endl;
cout << TransformedObjCorners << endl;

and my output is:

Transformed object corners are :
  [309.14066, 18.626106;
  -2.5252595, 298.53754;
   31.930698, 9.6980038;
   319.43829, 279.87805]

These coordinates correspond to the black box:

(screenshot)

(screenshot)

And you can see that it is abnormally warped here because of the negative coordinates:

(screenshot: warped result)

I have been spending hours trying to track down the problem. Any help or pointers in the right direction would be very helpful. Thanks.

How should I stitch in the left image? If I'm stitching three images together, what is the best approach? For now I'm trying the left and middle images, and below is my result, but it is very weak:

(screenshot: left + middle stitching attempt)


Solution

  • I used your code, adapted the point locations (because your images have a title-bar) and warped one of the images.

    These are the input images with the point locations:

    (screenshot: first input image with marked points)

    (screenshot: second input image with marked points)

    This is the code:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::Mat input1 = cv::imread("../inputData/panoA.png");
        cv::Mat input2 = cv::imread("../inputData/panoB.png");
    
        cv::Mat result;
    
    
        std::vector<cv::Point2f> obj, scene, objCorners, transformedObjCorners;
    
        std::vector<cv::Point2f> transObj, transScene;
    
    
        // had to adjust your coordinates since you provided images with title-bar
        scene.push_back(cv::Point2f(313,47));
        scene.push_back(cv::Point2f(379,21));
        scene.push_back(cv::Point2f(385,131));
        scene.push_back(cv::Point2f(317,136));
        obj.push_back(cv::Point2f(9,41));
        obj.push_back(cv::Point2f(70,61));
        obj.push_back(cv::Point2f(69,149));
        obj.push_back(cv::Point2f(7,145));
        objCorners.push_back(cv::Point2f(0,0));
        objCorners.push_back(cv::Point2f(input2.cols,0));
        objCorners.push_back(cv::Point2f(input2.cols,input2.rows));
        objCorners.push_back(cv::Point2f(0,input2.rows));
    
        cv::Mat H = findHomography(obj, scene);
    
        for(unsigned int i=0; i<scene.size(); ++i)
        {
            cv::circle(input1, scene[i], 5, cv::Scalar(0,255,0));
        }
    
        for(unsigned int i=0; i<obj.size(); ++i)
        {
            cv::circle(input2, obj[i], 5, cv::Scalar(0,255,0));
        }
    
    
    
        cv::Mat result1;
        cv::warpPerspective(input2, result1, H, cv::Size(input1.cols*2, input1.rows));
    
        cv::Mat result2 = cv::Mat(result1.size(), CV_8UC3, cv::Scalar(0,0,0));
        input1.copyTo(result2(cv::Rect(0,0,input1.cols, input1.rows)));
    
        result = result1.clone();
    
        // primitive blending, non-optimized
        for(int j=0; j<result1.rows; ++j)
            for(int i=0; i<result1.cols; ++i)
            {
                cv::Vec3b c1(0,0,0);
                cv::Vec3b c2(0,0,0);
    
                if(j < result1.rows && i<result1.cols) c1  = result1.at<cv::Vec3b>(j,i);
                if(j < result2.rows && i<result2.cols) c2  = result2.at<cv::Vec3b>(j,i);
    
                bool c1_0 = false;
                bool c2_0 = false;
    
                if(c1 == cv::Vec3b(0,0,0)) c1_0 = true;
                if(c2 == cv::Vec3b(0,0,0)) c2_0 = true;
    
                cv::Vec3b color(0,0,0);
    
                if(!c1_0 && !c2_0)
                {
                    // both nonzero: use the per-channel mean
                    // (Vec3b addition saturates at 255, so avoid 0.5*(c1+c2))
                    color = cv::Vec3b((c1[0]+c2[0])/2, (c1[1]+c2[1])/2, (c1[2]+c2[2])/2);
                }
                if(c1_0)
                {
                    // c1 zero => use c2
                    color = c2;
                }
                if(c2_0)
                {
                    // c2 zero => use c1
                    color = c1;
                }
    
                result.at<cv::Vec3b>(j,i) = color;
    
            }
    
    
        cv::imshow("input1", input1);
        cv::imshow("input2", input2);
        cv::imshow("result", result);
        cv::imwrite("../outputData/panoResult1.png", input1);
        cv::imwrite("../outputData/panoResult2.png", input2);
        cv::imwrite("../outputData/panoResult.png", result);
        cv::waitKey(0);
        return 0;
    }
    

    And this is the result with the primitive blending:

    (screenshot: stitched result with primitive blending)

    Distortions come from mapping a 3D world to a 2D plane and from lens distortion. Additionally, your camera movement probably doesn't give you a perfect homography relationship between the two images (a homography is only valid for planar scenes or for pure camera rotation around the camera center).