Tags: python, opencv, barcode, barcode-scanner, zbar

How to reliably detect a barcode's 4 corners?


I'm trying to detect this Code128 barcode with Python + zbar module:

(Image download link here).

This works:

import cv2, numpy
import zbar
from PIL import Image 
import matplotlib.pyplot as plt

scanner = zbar.ImageScanner()
pil = Image.open("000.jpg").convert('L')
width, height = pil.size    
plt.imshow(pil); plt.show()
image = zbar.Image(width, height, 'Y800', pil.tobytes())
result = scanner.scan(image)

for symbol in image:
    print symbol.data, symbol.type, symbol.quality, symbol.location, symbol.count, symbol.orientation

but only one point is detected: (596, 210).

If I apply a black and white thresholding:

pil = Image.open("000.jpg").convert('L')
pil = pil.point(lambda x: 0 if x < 100 else 255, '1').convert('L')

it's better, and we get 3 points: (596, 210), (482, 211), (596, 212). But it adds another difficulty: automatically finding the optimal threshold (here 100) for every new image. One option for that, Otsu's method, is sketched below.
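Otsu's method picks the threshold from the image histogram, so no per-image tuning is needed. This is just a rough sketch using OpenCV on the same grayscale image (the zbar part stays unchanged):

import cv2, numpy
from PIL import Image

pil = Image.open("000.jpg").convert('L')
gray = numpy.array(pil)                 # PIL grayscale -> numpy array for OpenCV

# Otsu chooses the cutoff automatically from the histogram
thr, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(thr)                              # the value Otsu picked for this image

pil = Image.fromarray(bw)               # back to PIL before handing it to zbar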

Still, we don't have the 4 corners of the barcode.

Question: how to reliably find the 4 corners of a barcode on an image, with Python? (and maybe OpenCV, or another library?)

Solution

  • Solution 2 is pretty good. The critical factor that made it fail on your image was the thresholding. If you drop the parameter 225 way down to 55, you'll get much better results.

    I've reworked the code, making some tweaks here and there. The original code is fine if you prefer. The documentation for OpenCV is quite good, and there are very good Python tutorials.

    import numpy as np
    import cv2
    
    image = cv2.imread("barcode.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    
    # equalize lighting
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
    gray = clahe.apply(gray)
    
    # edge enhancement
    edge_enh = cv2.Laplacian(gray, ddepth = cv2.CV_8U, 
                             ksize = 3, scale = 1, delta = 0)
    cv2.imshow("Edges", edge_enh)
    cv2.waitKey(0)
    retval = cv2.imwrite("edge_enh.jpg", edge_enh)
    
    # bilateral blur, which keeps edges
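    # (13 = pixel-neighborhood diameter; the two 50s are sigmaColor and sigmaSpace)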
    blurred = cv2.bilateralFilter(edge_enh, 13, 50, 50)
    
    # use simple thresholding. adaptive thresholding might be more robust
    (_, thresh) = cv2.threshold(blurred, 55, 255, cv2.THRESH_BINARY)
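    # (optional sketch, not in the original answer: adaptive thresholding instead
    #  of the fixed cutoff of 55, so the value need not be tuned per image)
    # thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    #                                cv2.THRESH_BINARY, 51, 2)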
    cv2.imshow("Thresholded", thresh)
    cv2.waitKey(0)
    retval = cv2.imwrite("thresh.jpg", thresh)
    
    # do some morphology to isolate just the barcode blob
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    closed = cv2.erode(closed, None, iterations = 4)
    closed = cv2.dilate(closed, None, iterations = 4)
    cv2.imshow("After morphology", closed)
    cv2.waitKey(0)
    retval = cv2.imwrite("closed.jpg", closed)
    
    # find contours left in the image
    (_, cnts, _) = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
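    # (note: the 3-value unpacking above matches OpenCV 3.x; OpenCV 2.4 and 4.x
    #  return only (contours, hierarchy), so unpack two values there)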
    c = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
    rect = cv2.minAreaRect(c)
    box = np.int0(cv2.boxPoints(rect))
    cv2.drawContours(image, [box], -1, (0, 255, 0), 3)
    print(box)
    cv2.imshow("found barcode", image)
    cv2.waitKey(0)
    retval = cv2.imwrite("found.jpg", image)
    

    edge_enh.jpg: [edge-enhanced image]

    thresh.jpg: [thresholded image]

    closed.jpg: [after morphology]

    found.jpg: [barcode outlined on the original image]

    output from console:

    [[596 249]
     [470 213]
     [482 172]
     [608 209]]
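
    The box points come back in whatever order cv2.boxPoints produces. If a fixed order is needed downstream (say, for a perspective warp), they can be sorted with the usual sum/difference trick. This is my own sketch (it reuses np and box from the snippet above), not part of the original answer:

    def order_corners(pts):
        # returns top-left, top-right, bottom-right, bottom-left
        pts = np.array(pts, dtype="float32")
        s = pts.sum(axis=1)        # x + y: smallest at top-left, largest at bottom-right
        d = np.diff(pts, axis=1)   # y - x: smallest at top-right, largest at bottom-left
        return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                         pts[np.argmax(s)], pts[np.argmax(d)]], dtype="float32")

    print(order_corners(box))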