Tags: python, opencv, line, antialiasing, gaussian-blur

Why does the antialiased Line function of OpenCV 2 give different results on CV_16UC1 and CV_8UC1, even without overflow?


import cv
import numpy as np

themap = cv.CreateMat(8,8,cv.CV_8UC1)
cv.SetZero(themap)
cv.Line(themap,(0,0),(7,7),(10),1,cv.CV_AA) # draw an antialiased line
print np.asarray(themap[:,:])

#######output
[[ 0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0]
 [ 0  2  7  2  0  0  0  0]
 [ 0  0  3 10  3  0  0  0]
 [ 0  0  0  3 10  3  0  0]
 [ 0  0  0  0  2  7  2  0]
 [ 0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0]]

However, when I change the image type to cv.CV_16UC1, the result is totally different:

themap = cv.CreateMat(8,8,cv.CV_16UC1)
cv.SetZero(themap)
cv.Line(themap,(0,0),(7,7),(10),1,cv.CV_AA) #draw a line
print np.asarray(themap[:,:])

#######output
[[10  0  0  0  0  0  0  0]
 [ 0 10  0  0  0  0  0  0]
 [ 0  0 10  0  0  0  0  0]
 [ 0  0  0 10  0  0  0  0]
 [ 0  0  0  0 10  0  0  0]
 [ 0  0  0  0  0 10  0  0]
 [ 0  0  0  0  0  0 10  0]
 [ 0  0  0  0  0  0  0 10]]

I'm totally confused by these results. Since the pixel value is only 10, there should be no overflow problem for the cv.CV_8UC1 image type. Why are the results of cv.Line so different?

OpenCV is installed in /usr/local/Cellar/opencv@2/2.4.13.6_2/lib/python2.7/site-packages/cv.py via brew install opencv@2.


Solution

  • Don't use the old, deprecated OpenCV API. Use the cv2 module instead and create your images directly with NumPy. With the following code, the result is as expected and is the same for both 8-bit and 16-bit images.

    import numpy as np
    import cv2

    themap = np.zeros((8,8), dtype=np.uint16)
    cv2.line(themap, (0,0), (7,7), (10), lineType=cv2.LINE_AA)
    print themap
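
    If you want to confirm this on your own build, a minimal check along the following lines (assuming OpenCV 3 or later, where cv2.LINE_AA is defined; on 2.4 the equivalent constant is cv2.CV_AA) draws the same antialiased line on a uint8 and a uint16 canvas and prints both so you can compare them:

    import numpy as np
    import cv2

    # Draw the same antialiased line on an 8-bit and a 16-bit canvas.
    img8 = np.zeros((8, 8), dtype=np.uint8)
    img16 = np.zeros((8, 8), dtype=np.uint16)

    cv2.line(img8, (0, 0), (7, 7), 10, thickness=1, lineType=cv2.LINE_AA)
    cv2.line(img16, (0, 0), (7, 7), 10, thickness=1, lineType=cv2.LINE_AA)

    print(img8)
    print(img16)
    # According to the answer above, the two printouts should match.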