I am fairly new to OpenCV and am trying to draw simple contours along the outline of my hand using a webcam. I settled on cv2.adaptiveThreshold() to cope with the changing light intensities as the camera adjusts to the moving hand. Everything mostly works, except that it struggles to pick up the fingers and to produce closed contours.
See here:
I thought about computing a convex hull and then detecting anything that deviates from it somehow.
What is the best way to go about this? Presumably I first need to stop getting these odd, broken contours, and then go on from there?
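For reference, this is roughly what I had in mind for the hull idea (just an untested sketch, where mask would be the binary image coming out of my preprocessing step):

import cv2

def hull_defects(mask, canvas):
    # mask: binary image of the hand, canvas: BGR frame to draw on
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return
    cnt = max(contours, key=cv2.contourArea)        # assume the hand is the biggest blob
    hull = cv2.convexHull(cnt)
    cv2.drawContours(canvas, [hull], -1, (255, 0, 0), 2)
    # convexity defects = contour points that dip away from the hull
    hull_idx = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull_idx)
    if defects is None:
        return
    for start_i, end_i, far_i, depth in defects[:, 0]:
        if depth > 256 * 20:                        # depth is in 1/256-pixel units
            far = tuple(int(v) for v in cnt[far_i][0])
            cv2.circle(canvas, far, 5, (0, 0, 255), -1)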
Here's the full code; I fixed the trackbar values for you :)
import cv2
import numpy as np

#####################################
winWidth = 640
winHeight = 840
brightness = 100

cap = cv2.VideoCapture(0)
cap.set(3, winWidth)     # CAP_PROP_FRAME_WIDTH
cap.set(4, winHeight)    # CAP_PROP_FRAME_HEIGHT
cap.set(10, brightness)  # CAP_PROP_BRIGHTNESS

kernel = (7, 7)                            # ksize for GaussianBlur
morphKernel = np.ones((7, 7), np.uint8)    # structuring element for dilate/erode
#######################################################################


def empty(a):
    pass


cv2.namedWindow("TrackBars")
cv2.resizeWindow("TrackBars", 640, 240)
cv2.createTrackbar("cVal", "TrackBars", 10, 40, empty)
cv2.createTrackbar("bSize", "TrackBars", 77, 154, empty)


def preprocessing(frame, value_BSize, cVal):
    imgGray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # mask = cv2.inRange(imgHsv, lower, upper)
    imgBlurred = cv2.GaussianBlur(imgGray, kernel, 4)
    gaussC = cv2.adaptiveThreshold(imgBlurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, value_BSize, cVal)
    imgDial = cv2.dilate(gaussC, morphKernel, iterations=3)
    imgErode = cv2.erode(imgDial, morphKernel, iterations=1)  # currently unused; the dilated image is returned
    return imgDial


def getContours(imPrePro):
    contours, hierarchy = cv2.findContours(imPrePro, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 60:
            cv2.drawContours(imgCon, [cnt], -1, (0, 255, 0), 2, cv2.LINE_AA)  # draws on the global imgCon
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)


#######################################################################################################

while cap.isOpened():
    success, frame = cap.read()
    cVal = cv2.getTrackbarPos("cVal", "TrackBars")
    value_BSize = cv2.getTrackbarPos("bSize", "TrackBars")
    value_BSize = max(3, value_BSize)
    if value_BSize % 2 == 0:   # blockSize for adaptiveThreshold must be odd
        value_BSize += 1
    if success:
        frame = cv2.flip(frame, 1)
        imgCon = frame.copy()
        imPrePro = preprocessing(frame, value_BSize, cVal)
        getContours(imPrePro)
        cv2.imshow("Preprocessed", imPrePro)
        cv2.imshow("Original", imgCon)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        cv2.destroyAllWindows()
        break
The L*a*b color space can help find objects that are brighter than the background. One advantage is that the color space is hardware independent, so it should yield relatively similar results from any camera. Using the OTSU option when thresholding the image can help it work in different lighting conditions, since it calculates the optimal threshold intensity to separate bright and dark areas in the image. Obviously it is not a silver bullet and will NOT work perfectly in every situation, especially in extreme cases, but as long as your hand's brightness is relatively different from the background, it should work.
import cv2
from matplotlib import pyplot as plt

# frame is a BGR image of the hand (e.g. one frame grabbed from the webcam)
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
tv, thresh = cv2.threshold(lab[:, :, 0], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

plt.imshow(thresh, cmap="gray")
plt.show()
Once the hand is properly thresholded, you can proceed to find the contours and do your analysis as needed.
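For example, continuing from the snippet above, a rough sketch that assumes the hand is the largest bright blob in the frame:

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)             # the hand should be the largest blob
    cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)   # closed outline of the hand
plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
plt.show()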
Note: the artifacts in the thresholded image are caused by removing the green contour lines from the original posted image.