python, opencv, computer-vision, scikit-image, homography

skimage.transform.warp vs cv2.warpPerspective


This is a topic that has also struck me recently, namely the differences between skimage geometric transforms and their OpenCV equivalents. My goal is to replace the skimage.transform.warp call in the following example with cv2.warpPerspective. At first glance the two functions work similarly, and the default border handling (borderMode or borderValue) also matches, yet I don't get the same result. What am I missing?

import numpy as np
import skimage.transform
import cv2
from matplotlib import pyplot as plt

img = np.random.rand(16,16)

angle = np.deg2rad(45)
cos_a, sin_a = np.cos(angle), np.sin(angle)
R = np.array([[cos_a, sin_a, -11 * (cos_a + sin_a - 1)],
              [-sin_a, cos_a, -11 * (cos_a - sin_a - 1)],
             [0, 0, 1]])

skimage_rotated = skimage.transform.warp(img, R, clip=False)
cv2_rotated = cv2.warpPerspective(img, R, dsize=skimage_rotated.shape)

print(np.count_nonzero(np.where(np.abs(skimage_rotated - cv2_rotated) > 1e-1))) #356

plt.imshow(np.abs(skimage_rotated - cv2_rotated))
plt.show()

np.abs(skimage_rotated - cv2_rotated):


Solution

  • The issue is that skimage.transform.warp interprets the given matrix as a backward (output-to-input) coordinate map, while cv2.warpPerspective by default interprets it as a forward (input-to-output) transformation.

    Passing the cv2.WARP_INVERSE_MAP flag to cv2.warpPerspective solves the issue (we may also invert the transformation matrix instead, as sketched below):

    cv2_rotated = cv2.warpPerspective(img, R, dsize=skimage_rotated.shape, flags=cv2.INTER_LINEAR+cv2.WARP_INVERSE_MAP)
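
    A minimal sketch of the matrix-inversion alternative mentioned above (invR is simply np.linalg.inv(R); without the flag, OpenCV expects the forward matrix):

    invR = np.linalg.inv(R)
    cv2_rotated = cv2.warpPerspective(img, invR, dsize=skimage_rotated.shape)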
    

    Documentation of skimage.transform.warp:

    skimage.transform.warp(image, inverse_map, map_args=None, ...
    inverse_map : transformation object, callable cr = f(cr, **kwargs), or ndarray
    Inverse coordinate map, which transforms coordinates in the output image into their corresponding coordinates in the input image.

    Documentation of cv2.warpPerspective:

    The function warpPerspective transforms the source image...
    when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert and then put in the formula above instead of M.
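
    As the skimage documentation above notes, inverse_map can also be a transformation object or a callable rather than a raw 3x3 array. A minimal sketch of that idiom, assuming R from the question (so np.linalg.inv(R) is the forward transform); it should give the same result as warp(img, R):

    tform = skimage.transform.ProjectiveTransform(matrix=np.linalg.inv(R))  # forward map
    skimage_rotated_alt = skimage.transform.warp(img, tform.inverse, clip=False)  # pass the backward map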


    Code sample:

    import numpy as np
    import skimage.transform
    import cv2
    from matplotlib import pyplot as plt
    
    img = np.random.rand(16,16).astype(np.float32)
    
    angle = np.deg2rad(45)
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    R = np.array([[cos_a, sin_a, -11 * (cos_a + sin_a - 1)],
                  [-sin_a, cos_a, -11 * (cos_a - sin_a - 1)],
                 [0, 0, 1]])
    
    skimage_rotated = skimage.transform.warp(img, R, clip=False)
    
    #invR = np.linalg.inv(R)
    #cv2_rotated = cv2.warpPerspective(img, invR, dsize=skimage_rotated.shape)
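    # (The two commented lines above are the equivalent matrix-inversion approach.)
    # With WARP_INVERSE_MAP set, OpenCV treats R as the backward (output-to-input) map, matching skimage: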
    cv2_rotated = cv2.warpPerspective(img, R, dsize=skimage_rotated.shape, flags=cv2.INTER_LINEAR+cv2.WARP_INVERSE_MAP)
    
    print(np.count_nonzero(np.where(np.abs(skimage_rotated - cv2_rotated) > 1e-1))) #0
    
    fig, ax = plt.subplots()
    im = ax.imshow(np.abs(skimage_rotated - cv2_rotated))
    fig.colorbar(im)
    plt.show()
    

    np.abs(skimage_rotated - cv2_rotated):

    [difference image: the two results now match, with no differences above 1e-1]
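
    One extra note on the cv2 call: dsize in OpenCV is ordered (width, height), while a NumPy shape is (rows, cols), so dsize=skimage_rotated.shape only works here because the image is square. For a non-square image the tuple would need to be reversed, for example:

    cv2_rotated = cv2.warpPerspective(img, R, dsize=skimage_rotated.shape[::-1],
                                      flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)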