I have observed that there is a difference between rotation using the cv.warpAffine method and the cv.rotate method. The resulting images differ: cv.rotate is perfect, i.e. the rotated image has no parts cut off, while cv.warpAffine generates an image that has part of the content cut off. The example image and the results of the two methods are attached.
Configuration: python:3.9.13; opencv-python 4.7.0; Windows 10
Samples:
original:
cv.rotate:
cv.warpAffine:
Code to reproduce the problem:
>>> import cv2 as cv
>>> img = cv.imread(<path to image>, cv.IMREAD_GRAYSCALE)
>>> rot_img = cv.rotate(img, cv.ROTATE_90_COUNTERCLOCKWISE)
>>> cv.imwrite('cv_rotate.jpg', rot_img)
True
>>> img_center = (img.shape[1]//2, img.shape[0]//2)
>>> M = cv.getRotationMatrix2D(img_center, 90, 1)
>>> rot_img = cv.warpAffine(img, M, (img.shape[0], img.shape[1]))
>>> cv.imwrite('cv_warpAffine.jpg', rot_img)
True
I was expecting both methods to generate the same output. Why is there a difference? I tried a few other variants, suspecting that I was mixing up the (row, col) format with the (x, y) format (why OpenCV uses a different convention even though it produces ndarray output, I don't know), but those did not work. Can someone please let me know what the issue is?
If you only need 90 degree rotations, stick with cv.rotate(). It rotates in steps of 90 degrees and does not resample or interpolate.
In the comments, Cris Luengo suggests np.rot90(), which is potentially an extremely cheap operation because numpy can just calculate new strides and give you a view on the original data. cv.rotate() does not do that because the internally used cv::Mat isn't as flexible as a numpy array.
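You can verify the view behavior directly; a quick sketch on a synthetic array:

```python
import numpy as np

img = np.arange(12, dtype=np.uint8).reshape(3, 4).copy()

view = np.rot90(img)                   # O(1): only strides change
assert np.shares_memory(view, img)     # no pixel data was copied
assert view.shape == (4, 3)
assert not view.flags['C_CONTIGUOUS']  # hence not a plain row-major layout

# Handing the result to OpenCV requires contiguous data eventually,
# and that is where the copy happens:
rotated = np.ascontiguousarray(view)
assert not np.shares_memory(rotated, img)
```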
You can achieve the desired result with cv.getRotationMatrix2D() if the center of rotation is correct.
If you pick the center of the rectangle, that will just rotate the image around that point, and result in what you've already seen: the result is the green rectangle, with the red rectangle representing the image content.
If the top left origins of source and result coincide, the "correct" center of rotation for a 90 degree rotation of a rectangle depends on the rectangle's side lengths and the direction of the rotation. Consider two squares, one for each side length of the rectangle. The center of rotation lies in the center of one or the other square, depending on direction.
...
>>> ih, iw = img.shape[:2]
>>> img_center = (iw-1)/2, (iw-1)/2
>>> M = cv.getRotationMatrix2D(img_center, +90, 1)
...
>>> img_center = (ih-1)/2, (ih-1)/2
>>> M = cv.getRotationMatrix2D(img_center, -90, 1)
You can also build your own transformation from primitives:
def translate(tx=0, ty=0):
    T = np.eye(3)
    T[:2, 2] = (tx, ty)
    return T

def rotate(angle):
    T = np.eye(3)
    T[:2, :] = cv.getRotationMatrix2D((0, 0), angle, 1)
    # positive angle rotates counterclockwise in X-right, Y-down coordinates
    return T
angle = 90
(ih, iw) = img.shape[:2]
ow, oh = ih, iw  # only valid for 90 degree rotation
icx, icy = (iw-1)/2, (ih-1)/2
ocx, ocy = (ow-1)/2, (oh-1)/2
T = translate(+ocx, +ocy) @ rotate(angle) @ translate(-icx, -icy)
rot_img = cv.warpAffine(img, M=T[:2], dsize=(ow, oh))
To warp points instead of images, there is the function cv.transform() for affine transformations. Points need to be given as a numpy array of shape (N, 2) or (N, 1, 2).
For perspective transforms, there is cv.perspectiveTransform(). It takes care of the "division by w".