python, opencv, error-handling, openpose

error: (-215:Assertion failed) total >= 0 && (depth == CV_32F || depth == CV_32S) in function 'cv::convexHull'


I am trying to get the pixel coordinates of the points detected by OpenPose. Can someone tell me whether this is the correct way to identify the pixel coordinates, or is there a better way to get the coordinates of the points labelled 2 and 5 in the image below?

[image: OpenPose output with detected keypoints 2 and 5]

code:

for pair in POSE_PAIRS:
    partA = pair[0]
    partB = pair[1]
    print("{}".format(partA),"{}".format(partB))

    if partA == 2 and partB == 5:
        print("heere")
        cv2.line(frame, points[partA], points[partB], (0, 0, 0), 2)
        cv2.circle(frame, points[partA], 8, (0, 0, 255), thickness=-1, lineType=cv2.FILLED)
    else:
        cv2.line(frame, points[partA], points[partB], (0, 255, 255), 2)
        cv2.circle(frame, points[partA], 8, (0, 0, 255), thickness=-1, lineType=cv2.FILLED)

rc = cv2.minAreaRect(partA)
box = cv2.boxPoints(rc)
for p in box:
    pt = (p[0],p[1])
    print (pt)

error:

Traceback (most recent call last):
  File "OpenPoseImage.py", line 92, in <module>
    rc = cv2.minAreaRect(partA)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\convhull.cpp:137: error: (-215:Assertion failed) total >= 0 && (depth == CV_32F || depth == CV_32S) in function 'cv::convexHull'
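
For reference, the assertion is raised because `partA` is just an integer keypoint index, while `cv2.minAreaRect` (which calls `cv::convexHull` internally, as the traceback shows) expects an array of 2D points with `int32` or `float32` depth. A minimal sketch of a call that satisfies the assertion, assuming `points` holds the detected `(x, y)` tuples and that entries 2 and 5 are not None:

    import numpy as np

    # build a proper point array (CV_32S) from the two keypoints of interest
    pts = np.array([points[2], points[5]], dtype=np.int32)
    rc = cv2.minAreaRect(pts)   # no assertion: input is an Nx2 int32 array
    box = cv2.boxPoints(rc)     # with only two points the rectangle is degenerate (zero width)
    for p in box:
        print((p[0], p[1]))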


Solution

  • If you just want the pixel coordinates of the points detected by OpenPose, i.e., the white spots in the image, you can use the code below:

    import cv2
    import numpy as np 
    
    # read and scale down image
    img = cv2.pyrDown(cv2.imread('hull.jpg', cv2.IMREAD_UNCHANGED))
    
    # threshold image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, threshed_img = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
    
    # find contours
    contours = cv2.findContours(threshed_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
    
    for c in contours:
        # get the bounding rect
        x, y, w, h = cv2.boundingRect(c)
    
        # get the min enclosing circle
        (x, y), radius = cv2.minEnclosingCircle(c)
    
        # convert all values to int
        center = (int(x), int(y))
        radius = int(radius)
    
        if 2 < radius < 4:
            print(center)
            img = cv2.circle(img, center, radius, (255, 0, 0), 2)
            cv2.putText(img,'({},{})'.format(int(x), int(y)), (int(x)+5, int(y)+5), cv2.FONT_HERSHEY_SIMPLEX, 0.3,(0,255,0), 1)
    
    cv2.imshow('contours', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    

    output:

    (208, 418)
    (180, 410)
    (160, 408)
    (208, 326)
    (152, 316)
    (159, 234)
    (200, 234)
    (136, 224)
    (224, 224)
    (232, 163)
    (184, 163)
    (128, 163)
    (200, 112)
    (232, 91)
    (136, 91)
    (176, 61)
    (176, 0)
    

    [image: detected points annotated with their pixel coordinates]

    In the above code, only those points are detected whose minimum enclosing circle has a radius greater than 2 and less than 4 pixels.
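
    Note that the `[0]` indexing on `cv2.findContours` above assumes OpenCV 4.x (or 2.x), which returns `(contours, hierarchy)`; OpenCV 3.x returns `(image, contours, hierarchy)` instead. A small version-agnostic sketch of that step:

    # unpack findContours regardless of OpenCV version
    res = cv2.findContours(threshed_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = res[0] if len(res) == 2 else res[1]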