python-3.x | opencv | image-processing | matrix | 2d

Calculate transformation based on anchor in image opencv2 / py


I would like to calculate a transformation matrix (rotation, scaling and translation) relative to an anchor in an image.

My image is a picture of a label, which will always contain a datamatrix. I use a third-party library to detect the datamatrix. Then I get its size, orientation (using the result of cv2.minAreaRect(dm_contour)), and position. I build what I call my "anchor" from those parameters.
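For context, building that anchor looks roughly like this (a minimal sketch; build_anchor and the exact tuple layout are my own naming, inferred from the code below):

import cv2

def build_anchor(dm_contour):
    # cv2.minAreaRect returns ((center_x, center_y), (width, height), angle)
    (cx, cy), (w, h), angle = cv2.minAreaRect(dm_contour)
    size = max(w, h)  # the datamatrix is square, so either side works
    return (cx, cy, angle, size)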

In a second step I get what I call a job, which is composed of ROIs defined by the user and the anchor of the picture on which the user defined those ROIs.
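A job item looks roughly like this (hypothetical values, structure inferred from the code below):

job = {
    "item_1": {
        # Anchor of the picture on which the ROI was defined
        "anchor": {"x": 100, "y": 100, "o": 0, "size": 50},
        # User-defined ROI: top-left and bottom-right corners
        "rect": {"x1": 200, "y1": 150, "x2": 300, "y2": 220},
    },
}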

With these few steps I can correctly place my ROIs in the new label context as long as it only involves a translation (shifted left, right, up, or down).

But as soon as I try to reposition the ROIs on a rotated label, it doesn't work.

I think my issue is with my rotation matrix and the whole "translate to origin and back to position" process, but I can't find what I'm doing wrong...
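To illustrate what I mean, here is the textbook version of that process on a toy example (made-up numbers, counter-clockwise convention; this is not my job code):

import numpy as np

theta = np.radians(30)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])
cx, cy = 100, 50  # pivot of the rotation
T_to_origin = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]])
T_back = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1]])
p = np.array([120, 50, 1])  # point in homogeneous coordinates
p_rot = T_back @ R @ T_to_origin @ p  # rotate p around (cx, cy)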

My code to transform the ROI positions looks like this:

import cv2
import numpy as np


def process_job(anchor, img, job, file_path):
    """
    Process job file on current picture
    @param anchor = Current scene anchor
    @param img = Current picture
    @param job = Job object
    @param file_path = Job file path
    """
    print("Processing job " + file_path)

    # Unpack detected anchor
    a_x, a_y = (anchor[0], anchor[1])
    rotation = anchor[2]
    anchor_size = int(anchor[3])

    for item_i in job:
        item = job[item_i]
        if 'anchor' in item:
            # Scale factor between the detected anchor and the job's anchor
            size_rate = anchor_size / int(item['anchor']['size'])
            # Item anchor position
            i_a_x, i_a_y = int(item['anchor']['x']), int(item['anchor']['y'])

            # Calculate transformation
            # Scaling
            S = np.array([
                [size_rate, 0, 0],
                [0, size_rate, 0],
                [0, 0, 1]
            ])

            # Rotation
            angle = rotation - int(item['anchor']['o'])
            theta = np.radians(angle)
            c, s = np.cos(theta), np.sin(theta)

            R = np.array([
                [c, s, 0],
                [-s, c, 0],
                [0, 0, 1]
            ])

            # Translation
            x_shift = a_x - i_a_x
            y_shift = a_y - i_a_y

            T = np.array([
                [1, 0, x_shift],
                [0, 1, y_shift],
                [0, 0, 1]
            ])

            # Shear (factors are 0 for now)
            shx_factor = 0
            Shx = np.array([
                [1, shx_factor, 0],
                [0, 1, 0],
                [0, 0, 1]
            ])

            shy_factor = 0
            Shy = np.array([
                [1, 0, 0],
                [shy_factor, 1, 0],
                [0, 0, 1]
            ])

            print("Scaling: " + str(size_rate) + " Rotation: " + str(angle) +
                  " Translation: " + str((x_shift, y_shift)))

            if 'rect' in item:
                # Unpack rectangle:
                # (r_x1, r_y1) top-left corner
                # (r_x2, r_y2) bottom-right corner
                r_x1, r_y1, r_x2, r_y2 = (int(item['rect']['x1']), int(item['rect']['y1']),
                                          int(item['rect']['x2']), int(item['rect']['y2']))

                # As np arrays, in homogeneous coordinates
                rect_1 = np.array([r_x1, r_y1, 1])
                rect_2 = np.array([r_x2, r_y2, 1])

                # Translate each corner to the origin
                T_c_1 = np.array([
                    [1, 0, -r_x1],
                    [0, 1, -r_y1],
                    [0, 0, 1]
                ])
                T_c_2 = np.array([
                    [1, 0, -r_x2],
                    [0, 1, -r_y2],
                    [0, 0, 1]
                ])

                # Translate back to position
                T_r1 = np.array([
                    [1, 0, r_x1],
                    [0, 1, r_y1],
                    [0, 0, 1]
                ])
                T_r2 = np.array([
                    [1, 0, r_x2],
                    [0, 1, r_y2],
                    [0, 0, 1]
                ])

                # Apply transformations
                final_1 = T @ T_r1 @ R @ T_c_1 @ S @ rect_1
                final_2 = T @ T_r2 @ R @ T_c_2 @ S @ rect_2
                x1, y1, x2, y2 = final_1[0], final_1[1], final_2[0], final_2[1]

                print("From " + str((r_x1, r_y1, r_x2, r_y2)))
                print("To " + str((int(x1), int(y1), int(x2), int(y2))))

                cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)),
                         (0, 0, 0), 2)

    cv2.imwrite('./output/job.png', img)

And here are a few samples of my images:

Original, user defined ROI

First sample, shifted, correct detection

Second sample, rotated, misplaced ROI (out of picture)

Thanks in advance for your help,


Solution

  • So,

    I don't even know if anyone took the time to read my question, but if it can be of any help, here is what I did.


    In my first code version, I tried to compose the transformation from the scaling, rotation and translation matrices, but I was missing two of them: the X and Y shear matrices.

    My second version looked like roi_pos = ShX @ ShY @ S @ T @ T_to_pos @ R @ T_to_origin @ item_roi
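
    Written out (reconstructed from the code in my question), those matrices are:

    $$
    S = \begin{pmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
    R = \begin{pmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
    T = \begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}, \qquad
    Sh_x = \begin{pmatrix} 1 & k_x & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
    Sh_y = \begin{pmatrix} 1 & 0 & 0 \\ k_y & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
    $$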

    The results were very clumsy, and the ROIs I defined with my model were not correctly located on my test samples. But the rotation was right, and the ROIs would at least fall near the expected positions.

    Then I thought about optimizing my datamatrix detection, so I went through all the trouble of implementing my own Python/NumPy/OpenCV version of a DM detection algorithm. A sharper DM detection helped me better evaluate my orientation and scale parameters, but the ROIs were still off.

    So I discovered homography, which does exactly what I want. It takes points in a known plane and the same points in a destination plane, then calculates the transformation that occurred between the two planes.

    With this matrix H, I can now do roi_pos = H @ item_roi, which is much more accurate.
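
    For anyone who lands here, the homography step looks roughly like this (a minimal sketch with made-up corner coordinates; in my case the point pairs are the four corners of the datamatrix in the job picture and in the new picture):

        import cv2
        import numpy as np

        # Four datamatrix corners in the reference (job) picture...
        src_pts = np.array([[100, 100], [200, 100], [200, 200], [100, 200]], dtype=np.float32)
        # ...and the same corners as detected in the new picture
        dst_pts = np.array([[150, 120], [248, 135], [233, 233], [135, 218]], dtype=np.float32)

        H, _ = cv2.findHomography(src_pts, dst_pts)

        # Map one ROI corner, given in homogeneous coordinates
        roi = np.array([120, 140, 1.0])
        mapped = H @ roi
        mapped /= mapped[2]  # normalize the homogeneous coordinate

        # Or let OpenCV map a batch of points directly
        pts = np.array([[[120, 140]], [[300, 220]]], dtype=np.float32)
        mapped_pts = cv2.perspectiveTransform(pts, H)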

    That's it, hope it helps,