Tags: python, opencv, image-processing, threshold, image-thresholding

How to fix the colors of the image to get a better thresholding result?


I'm trying to threshold the hand in the following image (original image) using the code below:


    img = cv2.GaussianBlur(crop_img, (25, 25), 0)
    img_YCrCb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    # skin color range in the YCrCb color space
    YCrCb_mask = cv2.inRange(img_YCrCb, (0, 135, 85), (255, 180, 135))
    YCrCb_mask = cv2.morphologyEx(YCrCb_mask, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    thresh = cv2.morphologyEx(YCrCb_mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # morphological close with a tall vertical kernel to bridge the gaps caused by accessories
    morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 31)))


which works well on the rest of the test cases, but on the image above it gives the following result: thresholded image

How can I threshold the image better so that all the fingers are captured? Or how can I check for uneven illumination and correct it?


Solution

  • It seems your approach fails where the color is close to black, as hue/chroma cannot be reliably extracted from near-black pixels. I noticed that edge detectors work better in those regions, so here I would try adaptiveThreshold. It is not an edge detector per se, but it behaves like one and is resistant to varying luminosity.

    Blur first, convert the blurred image to grayscale, and adaptive-threshold it:

    blur = cv2.GaussianBlur(img, (25, 25), 0)
    blur_gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
    blur_adaptive = cv2.adaptiveThreshold(blur_gray, 255, 
        cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 51, 0)
    

    Adapt-thresholded blur

    As you can see, it does show where the edges are. You may have to use a large enough block size to separate reliably (51 in this case). The only problem is that it is sensitive to noise in open areas.
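    If you want a feel for how the block size affects the result, a quick sweep like the one below lets you compare a few values side by side. This is just an illustration (the value list and output filenames are mine, not from the answer); it reuses blur_gray from the snippet above.

    # Hypothetical sweep over a few block sizes to see how the separation changes.
    # The block size for cv2.adaptiveThreshold must be odd and at least 3.
    for block_size in (11, 25, 51, 101):
        candidate = cv2.adaptiveThreshold(blur_gray, 255,
            cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, block_size, 0)
        cv2.imwrite(f"adaptive_{block_size}.png", candidate)
    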

    To remove this noise we can flood-fill from the edge. That is also why we blurred before adaptive-thresholding: so that this noise is more or less connected.

    flooded = blur_adaptive.copy()
    # find a black pixel on the top row to use as the flood-fill seed
    for x in range(flooded.shape[1]):
        if flooded[0, x] == 0:
            break
    cv2.floodFill(flooded, None, (x, 0), 255)
    # invert flooded and make it boolean
    flooded = flooded < 255
    

    Noise removed by flood

    There is still some noise, but we can deal with it later. Now let's merge your mask with what we got.

    img_YCrCb = cv2.cvtColor(blur, cv2.COLOR_BGR2YCrCb)
    # skin color range in the YCrCb color space
    YCrCb_mask = cv2.inRange(img_YCrCb, (0, 135, 85), (255, 180, 135))
    # make YCrCb_mask boolean
    YCrCb_mask = YCrCb_mask > 0
    

    This is your mask

    combined = np.bitwise_or(YCrCb_mask, flooded)
    

    Combined masks

    Not bad. The masks compensate for each other's weaknesses. We can clear the residual noise by eliminating small contours, which will also fill enclosed holes.

    contours, _ = cv2.findContours(combined.astype(np.uint8),
        cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    area_threshold = 200
    large_contours = [cnt for cnt in contours if cv2.contourArea(cnt) > area_threshold]
    cleaned = np.zeros(combined.shape, dtype=np.uint8)
    cv2.drawContours(cleaned, large_contours, -1, 255, thickness=cv2.FILLED)
    

    Residual noise removed

    And morph-close to connect pieces, if there are any:

    morph = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, 
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 31)))
    

    Morph-close

    There is still some work to be done here, but it looks better than before.

    Warning! Adaptive thresholding (with a large block size), morphological closing (with a large kernel) and contour detection are expensive operations for large images. There are techniques to reduce the cost (such as down-sampling before running the operation; see the sketch below), but a full treatment is beyond the scope of this post.
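    As a rough illustration of the down-sampling idea (not part of the answer itself; the scale factor and the smaller kernel are arbitrary choices of mine), you could run the expensive close on a reduced copy of the mask and resize the result back up:

    # Illustrative only: down-sample, run the expensive close, then up-sample the mask.
    scale = 0.5  # arbitrary factor chosen for illustration
    small = cv2.resize(cleaned, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
    small_morph = cv2.morphologyEx(small, cv2.MORPH_CLOSE,
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 15)))  # kernel scaled down roughly with the image
    morph_fast = cv2.resize(small_morph, (cleaned.shape[1], cleaned.shape[0]),
        interpolation=cv2.INTER_NEAREST)
    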

    Edit (bonus content): Here is another image that you posted earlier, processed by this pipeline.

    Pipeline result on the bonus image
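
    For convenience, here is the whole pipeline from above collected into a single function. This is a sketch using the same constants as in the answer; the function name and the assumption of a BGR input image are mine.

    import cv2
    import numpy as np

    def segment_hand(img_bgr):
        """Sketch of the pipeline above: blur, adaptive threshold + flood fill,
        YCrCb skin mask, combine, drop small contours, morph-close."""
        blur = cv2.GaussianBlur(img_bgr, (25, 25), 0)

        # adaptive threshold on the blurred grayscale
        blur_gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
        blur_adaptive = cv2.adaptiveThreshold(blur_gray, 255,
            cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 51, 0)

        # flood-fill the background noise from a black pixel on the top row
        flooded = blur_adaptive.copy()
        for x in range(flooded.shape[1]):
            if flooded[0, x] == 0:
                break
        cv2.floodFill(flooded, None, (x, 0), 255)
        flooded = flooded < 255

        # skin color mask in YCrCb
        img_YCrCb = cv2.cvtColor(blur, cv2.COLOR_BGR2YCrCb)
        YCrCb_mask = cv2.inRange(img_YCrCb, (0, 135, 85), (255, 180, 135)) > 0

        # combine, remove small contours (also fills holes), then morph-close
        combined = np.bitwise_or(YCrCb_mask, flooded)
        contours, _ = cv2.findContours(combined.astype(np.uint8),
            cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        large = [c for c in contours if cv2.contourArea(c) > 200]
        cleaned = np.zeros(combined.shape, dtype=np.uint8)
        cv2.drawContours(cleaned, large, -1, 255, thickness=cv2.FILLED)
        return cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE,
            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 31)))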