I am using Python 3 and OpenCV 3.3 to run live object detection on a webcam, using a sample image that is then feature-matched against the video stream. I have got it working with SIFT/SURF, but am now trying to use the ORB algorithm.
In some cases I receive the following error, which crashes the program:
for i, (m, n) in enumerate(matches):
ValueError: not enough values to unpack (expected 2, got 1)
I understand why it crashes: sometimes there are good matches between the images and sometimes there aren't, so knnMatch occasionally returns fewer than two matches for a descriptor and the unpacking fails.
My question is: how do I make the program ignore and skip the cases where there are not enough values to unpack, and keep running?
Main area of code in question:
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        good.append(m)
Example 'matches' output:
[[<DMatch 0x11bdcc030>, <DMatch 0x11bbf20b0>], [<DMatch 0x11bbf2490>, <DMatch 0x11bbf24f0>], [<DMatch 0x11bbf2750>, <DMatch 0x11bbf25d0>], [<DMatch 0x11bbf2570>, <DMatch 0x11bbf2150>], etc etc
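To illustrate the failure, here is a minimal sketch in plain Python (no OpenCV, made-up list contents) showing that a single one-element entry in matches is enough to trigger the same error:

# Hypothetical data: the second inner list has only one element,
# which is what happens when knnMatch cannot find two neighbours
matches = [['a', 'b'], ['c']]
for i, (m, n) in enumerate(matches):  # raises ValueError on ['c']
    print(i, m, n)
# ValueError: not enough values to unpack (expected 2, got 1)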
Full code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
import os, os.path
import math
import time
from datetime import datetime
startTime = datetime.now()
MIN_MATCH_COUNT = 10 # default=10
img1 = cv2.imread('Pattern3_small.jpg',0) # queryImage
# Create ORB object. You can specify params here or later.
orb = cv2.ORB_create()
cap = cv2.VideoCapture(0)
# cap = cv2.VideoCapture("output_H264_30.mov")
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
pts_global = []
dst_global = []
position = []
heading = []
# plt.axis([0, 1280, 0, 720])
tbl_upper_horiz = 1539
tbl_lower_horiz = 343
tbl_upper_vert = 1008
tbl_lower_vert = 110
# cv2.namedWindow("Frame", cv2.WINDOW_NORMAL)
# cv2.resizeWindow("Frame", 600,350)
while True:
    _, img2 = cap.read()
    # Start timer
    timer = cv2.getTickCount()
    # find the keypoints and descriptors with ORB
    # kp1, des1 = sift.detectAndCompute(img1,None)
    kp2, des2 = orb.detectAndCompute(img2,None)
    FLANN_INDEX_KDTREE = 0
    FLANN_INDEX_LSH = 6
    # index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
    index_params = dict(algorithm = FLANN_INDEX_LSH,
                        table_number = 6,      # 12, 6
                        key_size = 12,         # 20, 12
                        multi_probe_level = 1) # 2, 1
    search_params = dict(checks = 50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1,des2,k=2)
    # print (matches)
    # Calculate Frames per second (FPS)
    fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)
    # store all the good matches as per Lowe's ratio test.
    good = []
    # ratio test as per Lowe's paper
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.7*n.distance:
            good.append(m)
    # Do something afterwards
Thanks for any help.
Treat each element of matches as a collection and use exception handling:
for i, pair in enumerate(matches):
    try:
        m, n = pair
        if m.distance < 0.7*n.distance:
            good.append(m)
    except ValueError:
        pass
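Alternatively, you can filter out the underfilled entries explicitly instead of catching the exception. This is just a sketch of the same Lowe ratio test with a length check:

# Skip entries where knnMatch returned fewer than two neighbours
for pair in matches:
    if len(pair) < 2:
        continue
    m, n = pair
    if m.distance < 0.7*n.distance:
        good.append(m)

Both versions do the same thing: entries with fewer than two DMatch objects are simply skipped, so the loop keeps running instead of crashing.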