Quite new to OpenCV and image processing.
I'm trying to use OpenCV to detect and draw bounding boxes around each section.
When I threshold and dilate the image, the section boxes get removed and only the text remains, so findContours only finds the text. I've tried increasing the MORPH_RECT kernel size to merge the text together, but with unfavorable results.
import cv2

# image is the screenshot loaded beforehand as a BGR array
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7, 7), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 10))
dilate = cv2.dilate(thresh, kernel, iterations=1)
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=lambda c: cv2.boundingRect(c)[1])  # sort top-to-bottom
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    roi = image[y:y+h, x:x+w]
    cv2.rectangle(image, (x, y), (x+w, y+h), (36, 255, 12), 2)
I am not sure how to manipulate the image or the contours to achieve my desired result.
Read the image and remove the alpha channel. Then invert the image and threshold the inverted image at about 90% of 255 = 230. If needed, erode a small amount to make sure the white rectangles have continuous black borders. Then get the external contours. (findContours only works with white regions on a black background.)
Does this help?