Tags: python, opencv, image-processing, object-detection, omr

Hough Gradient Method misses some circles


Consider the images below:

[image 1]

[image 2]

As can be seen, many circles are not detected. I have already played with param1 and param2 of cv2.HoughCircles in the code below:

import cv2
import numpy as np
from PIL import Image

# Load the image and downscale it
img = Image.open('/tmp/F01-02.jpg')
img = img.resize((img.size[0] // 2, img.size[1] // 2))  # downscale by a factor of 2
open_cv_image = np.array(img) 

# Convert to grayscale
gray = cv2.cvtColor(open_cv_image, cv2.COLOR_RGB2GRAY)

# Apply Hough transform
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 40, param1=5, param2=30, minRadius=7, maxRadius=10)

# Ensure at least some circles were found
if circles is not None:
    circles = np.round(circles[0, :]).astype("int")
    for (x, y, r) in circles:
        cv2.circle(open_cv_image, (x, y), r, (0, 255, 0), 2)

pil_image = Image.fromarray(open_cv_image)
pil_image.save('/tmp/result.jpg')

Any ideas? Thanks!


Solution

  • HoughCircles

    I managed to find every circle of interest by skipping the downscale step and using the following arguments for the HoughCircles call:

    circles = cv2.HoughCircles(image=gray, method=cv2.HOUGH_GRADIENT, dp=1.7, minDist=10, param1=100, param2=30, minRadius=2, maxRadius=15)
    

    result:

    [image: HoughCircles result]

    I don't know how robust this approach is; it would need testing against a wider range of inputs. Other methods that don't rely on HoughCircles might be more robust (e.g. detecting the grid itself and inferring the circle positions).

    Template matching

    This appears to be more robust. It uses template matching with a template I produced by hand, and I would recommend something like this over HoughCircles. It seems to work fine even with the downscaling:

    Template:

    [template image]

    Warning: I filled the template with gray as a dirty way to compensate for a bubble being filled or not (so it sort of detects both). A better idea is to create a separate template for each case when using the OpenCV template matching, something like this:

    [template image (filled)] [template image (empty)]

    You can adjust the sensitivity via the threshold variable:

    threshold = 0.7
    

    Example:

    import cv2
    import numpy as np
    from PIL import Image
    
    # Load the image and downscale it
    img = Image.open('input.png')
    img = img.resize((img.size[0] // 2, img.size[1] // 2))  # downscale by a factor of 2
    open_cv_image = np.array(img) 
    
    template = Image.open('template.png')
    template = template.resize((template.size[0] // 2, template.size[1] // 2))  # downscale by a factor of 2
    open_cv_template = np.array(template)
    
    # Convert to grayscale
    gray = cv2.cvtColor(open_cv_image, cv2.COLOR_RGB2GRAY)
    grayTemplate = cv2.cvtColor(open_cv_template, cv2.COLOR_RGB2GRAY)
    
    w, h = grayTemplate.shape[::-1]  # template width and height
    
    # Slide the template over the image; res holds one similarity score per position
    res = cv2.matchTemplate(gray, grayTemplate, cv2.TM_CCOEFF_NORMED)
    threshold = 0.7
    loc = np.where(res >= threshold)  # every position scoring above the threshold
    for pt in zip(*loc[::-1]):  # loc is (rows, cols); reversed gives (x, y)
        cv2.circle(open_cv_image, (int(pt[0] + w / 2), int(pt[1] + h / 2)), 5, (0, 255, 0), 2)
    
    # Show the score map first, then the annotated image
    cv2.imshow("scores", res)
    cv2.waitKey(0)
    
    cv2.imshow("detections", open_cv_image)
    cv2.waitKey(0)
    
    pil_image = Image.fromarray(open_cv_image)
    pil_image.save('/tmp/result.jpg')
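One caveat about the np.where loop above: every position scoring above the threshold is kept, so each bubble typically produces a cluster of overlapping hits and gets circled several times. A simple greedy suppression (my own addition, not part of the original code) collapses each cluster to one point:

```python
def suppress_overlaps(points, min_dist=10):
    """Greedily keep only points at least min_dist apart (first one wins)."""
    kept = []
    for p in points:
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= min_dist ** 2 for q in kept):
            kept.append(p)
    return kept

# Three near-duplicate hits around one bubble plus one distinct hit
hits = [(30, 30), (31, 30), (30, 31), (90, 30)]
print(suppress_overlaps(hits))  # → [(30, 30), (90, 30)]
```

Sorting the hits by their match score before suppressing would keep the strongest detection in each cluster rather than the first one encountered.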
    
    

    Template matching result:

    [image: template matching result]

    Result:

    [image: final result]