Tags: image, opencv, homography, flann, flannbasedmatcher

How do I use the relationships between Flann matches to determine a sensible homography?


I have a panorama image, and a smaller image of buildings seen within that panorama. What I want to do is recognise whether the buildings in the smaller image appear in the panorama, and how the two images line up.

For this first example, I'm using a cropped version of my panorama image, so the pixels are identical.

import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import math

# Load both images as grayscale (flag 0)
cwImage = cv2.imread('cw1.jpg',0)
panImage = cv2.imread('pan1.jpg',0)

# Prepare the SURF detector (Hessian threshold 4000)
surf = cv2.xfeatures2d.SURF_create(4000)

# Find keypoints and point descriptors for both images
cwKeypoints, cwDescriptors = surf.detectAndCompute(cwImage, None)
panKeypoints, panDescriptors = surf.detectAndCompute(panImage, None)

[Images: detected SURF keypoints on the cropped image and on the panorama]

Then I use OpenCV's FlannBasedMatcher to find good matches between the two images:

FLANN_INDEX_KDTREE = 1  # note: the KDTREE constant is 1, not 0 (0 is FLANN_INDEX_LINEAR)
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Find matches between the descriptors
matches = flann.knnMatch(cwDescriptors, panDescriptors, k=2)

good = []

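# Lowe's ratio test: keep a match only if its best candidate is
# significantly closer than the second-best candidate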
for m, n in matches:
  if m.distance < 0.7 * n.distance:
    good.append(m)

[Image: keypoint matches between the cropped image and the panorama]

You can see that in this example it matches the points between the images perfectly. I then find the homography and apply a perspective warp:

cwPoints = np.float32([cwKeypoints[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
panPoints = np.float32([panKeypoints[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
h, status = cv2.findHomography(cwPoints, panPoints)

warpImage = cv2.warpPerspective(cwImage, h, (panImage.shape[1], panImage.shape[0]))

[Image: the smaller image warped into its place within the panorama]

The result is that it places the smaller image perfectly within the larger image.

Now, I want to do this where the smaller image isn't a pixel-perfect version of the larger image.

For the new smaller image, the matched keypoints look like this:

[Image: keypoint matches for the new smaller image]

You can see that in some cases, it matches correctly, and in some cases it doesn't.

If I call findHomography with these matches, it takes all of these point pairs into account and comes up with a nonsensical warp, because it is fitting to both the correct matches and the incorrect ones.

[Image: the resulting nonsensical warp]

What I'm looking for is a missing step between detecting the good matches and calling findHomography, where I can look at the relationships between the matches and determine which of them are actually correct.

I'm wondering if there's a function within OpenCV that I should be looking at for this step, or if this is something I'll need to work out on my own, and if so how I should go about doing that?


Solution

  • I wrote a blog post about finding an object in a scene last year (2017.11.11). Maybe it helps. Here is the link: https://zhuanlan.zhihu.com/p/30936804

    Env: OpenCV 3.3 + Python 3.5
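
    The key to handling bad matches is robust estimation: pass
    cv2.RANSAC to cv2.findHomography and it fits the homography to the
    consensus set only, returning a per-match inlier mask alongside the
    matrix. A minimal sketch of just that step, reusing your variable
    names (inliers is an illustrative name):

    # RANSAC rejects point pairs that disagree with the consensus model
    # (5.0 is the reprojection error threshold, in pixels)
    M, mask = cv2.findHomography(cwPoints, panPoints, cv2.RANSAC, 5.0)

    # Keep only the matches that RANSAC marked as inliers
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]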


    Found matches:


    The found object in the scene:



    The code:

    #!/usr/bin/python3
    # 2017.11.11 01:44:37 CST
    # 2017.11.12 00:09:14 CST
    """
    使用Sift特征点检测和匹配查找场景中特定物体。
    """
    
    import cv2
    import numpy as np
    MIN_MATCH_COUNT = 4
    
    imgname1 = "box.png"
    imgname2 = "box_in_scene.png"
    
    ## (1) prepare data
    img1 = cv2.imread(imgname1)
    img2 = cv2.imread(imgname2)
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    
    
    ## (2) Create SIFT object
    sift = cv2.xfeatures2d.SIFT_create()
    
    ## (3) Create flann matcher
    matcher = cv2.FlannBasedMatcher(dict(algorithm = 1, trees = 5), {})
    
    ## (4) Detect keypoints and compute keypoint descriptors
    kpts1, descs1 = sift.detectAndCompute(gray1,None)
    kpts2, descs2 = sift.detectAndCompute(gray2,None)
    
    ## (5) knnMatch to get Top2
    matches = matcher.knnMatch(descs1, descs2, 2)
    # Sort by their distance.
    matches = sorted(matches, key = lambda x:x[0].distance)
    
    ## (6) Ratio test, to get good matches.
    good = [m1 for (m1, m2) in matches if m1.distance < 0.7 * m2.distance]
    
    canvas = img2.copy()
    
    ## (7) find homography matrix
    ## When there are enough robust matched point pairs (at least 4)
    if len(good)>MIN_MATCH_COUNT:
        ## Extract the corresponding point pairs from the matches
        ## (queryIdx for the small object, trainIdx for the scene)
        src_pts = np.float32([ kpts1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
        dst_pts = np.float32([ kpts2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
        ## Find the homography matrix with cv2.RANSAC using the good matches
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
        ## The mask marks the inlier point pairs used to compute the
        ## homography (useful for drawing them later)
        #matchesMask2 = mask.ravel().tolist()
        ## Project the corners of img1 to their corresponding positions in img2
        h,w = img1.shape[:2]
        pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
        dst = cv2.perspectiveTransform(pts,M)
        ## Draw the bounding box of the found object
        cv2.polylines(canvas,[np.int32(dst)],True,(0,255,0),3, cv2.LINE_AA)
    else:
        print( "Not enough matches are found - {}/{}".format(len(good),MIN_MATCH_COUNT))
    
    
    ## (8) drawMatches
    matched = cv2.drawMatches(img1, kpts1, canvas, kpts2, good, None)
    
    ## (9) Crop the matched region from the scene
    ##     (reuses pts/dst from step 7, so this only works when
    ##      enough matches were found above)
    perspectiveM = cv2.getPerspectiveTransform(np.float32(dst), pts)
    found = cv2.warpPerspective(img2, perspectiveM, (w, h))
    
    ## (10) save and display
    cv2.imwrite("matched.png", matched)
    cv2.imwrite("found.png", found)
    cv2.imshow("matched", matched);
    cv2.imshow("found", found);
    cv2.waitKey();cv2.destroyAllWindows()
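
    If you want to see which matches RANSAC actually kept, the mask
    returned by cv2.findHomography in step (7) can be passed to
    cv2.drawMatches (this is what the commented-out matchesMask2 line
    is for). A small sketch, assuming the homography was found:

    ## Optional: draw only the RANSAC inliers from step (7)
    matchesMask = mask.ravel().tolist()
    inlierImg = cv2.drawMatches(img1, kpts1, img2, kpts2, good, None,
                                matchColor=(0, 255, 0),
                                matchesMask=matchesMask,
                                flags=cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS)
    cv2.imwrite("inliers.png", inlierImg)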