python, opencv, image-processing, computer-vision, edge-detection

How to detect edges in an image using Python


I'm having problems trying to detect edges in images corresponding to holes in a glass sample. Images look like this:

[images]

Single sample:

[image: single sample]

Each image contains part of a hole that was cut into a glass sample. Inspecting the images by eye, I can clearly see two regions: the glass and the hole. Sadly, none of my attempts to detect the edge properly have led to good results. I believe the main reason they failed is the insufficient contrast between the glass and the hole: the hole is not cut through the entire thickness of the glass, so the glass bottom left in the hole scatters light back into the camera, which makes the contrast even worse.

Image processing things I've already tried:

When taking the images I'm using an industrial camera with a ring light made of LEDs. The ring light can only be turned on or off; I can't adjust the direction or brightness of the light. Taking images with various exposure times and analogue gains hasn't helped much, since the contrast stays the same across the measurements.

Does anyone have an idea what steps I could take in order to properly detect the edges in the images? Be it image processing, programming or tips on how to take better pictures, any idea is appreciated!

Here's an excerpt of my script:

import cv2

# NOTE: path is a placeholder
image = cv2.imread(r'path/to/images')

# Smooth the image to suppress noise before thresholding
blurred = cv2.GaussianBlur(image, (7, 7), 9)
gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)

# Adaptive threshold to cope with the uneven illumination
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 71, 5)

# Canny edge detection on the binary image
edges = cv2.Canny(thresh, 100, 200)

# Extract the outer contours of the detected edges
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Draw the contours in green on a copy of the original image
output_image = image.copy()
for contour in contours:
    cv2.drawContours(output_image, [contour], -1, (0, 255, 0), 2)

cv2.imshow('Circle Edge', cv2.resize(output_image, (1000, 1000)))
cv2.waitKey(0)
cv2.destroyAllWindows()

The script paints the detected edges in green on the original image (as shown in the images in this post).

Thanks in advance!



Solution

  • It looks to me as though the variance/uniformity is significantly different in the two regions of the image, so consider calculating the variance/standard deviation within each 25x25 pixel block and normalising the result.

    I am doing it with ImageMagick here, because I am quicker with that, but you can do the same with OpenCV (a sketch follows below the result image):

    magick YOURIMAGE.bmp -statistic standarddeviation 25x25 -normalize result.png
    

    [image: normalised local standard deviation]
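
    For reference, here is a rough OpenCV/NumPy sketch of the same idea: local standard deviation over 25x25 blocks followed by a min-max stretch. The filename and block size are just the placeholders from the command above, and the stretch is only an approximation of ImageMagick's -normalize, not an exact reproduction:

    import cv2
    import numpy as np

    # Placeholder filename, as in the ImageMagick command above
    img = cv2.imread('YOURIMAGE.bmp', cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # Local standard deviation in a 25x25 window:
    # std = sqrt(E[x^2] - (E[x])^2), computed with box filters
    mean = cv2.boxFilter(img, -1, (25, 25))
    mean_sq = cv2.boxFilter(img * img, -1, (25, 25))
    local_std = np.sqrt(np.clip(mean_sq - mean * mean, 0, None))

    # Stretch to the full 0-255 range (rough analogue of -normalize)
    result = cv2.normalize(local_std, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite('result.png', result)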


    If you then flood fill the homogeneous area with some tolerance, you'll get:

    [image: flood-filled result]
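
    A minimal sketch of the flood fill step in OpenCV, assuming the normalised image from the previous step was saved as result.png; the seed point and tolerance here are made-up values you would have to tune for your own images:

    import cv2
    import numpy as np

    # Normalised local-standard-deviation image from the previous step
    std_img = cv2.imread('result.png', cv2.IMREAD_GRAYSCALE)

    h, w = std_img.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a mask 2 px larger than the image

    # Hypothetical seed inside the homogeneous (glass) region and a guessed tolerance
    seed = (50, 50)
    tol = 20

    # 4-connectivity, fixed range around the seed value, write 255 into the mask
    flags = 4 | cv2.FLOODFILL_FIXED_RANGE | (255 << 8)
    cv2.floodFill(std_img, mask, seed, 255, loDiff=tol, upDiff=tol, flags=flags)

    # mask (minus its 1 px border) is now a binary map of the filled region
    filled = mask[1:-1, 1:-1]
    cv2.imwrite('filled.png', filled)

    The boundary of the filled region can then be extracted with cv2.findContours, much like in your own script.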