I have the following code to perform a perspective correction, but the result is an image stretched at small X coordinates and compressed at large ones, as can be seen in the following images.
If the transformation were correct, the vertical green line in the center of the original image would also end up in the middle of the X axis of the resulting image, but this is not the case. The left part is clearly stretched and the right part compressed.
Original image, with the conversion area marked:
Output image:
The code is this:
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class ImagePerspective {

    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    /**
     * Transforms an image using four points, mapping the quadrilateral they define
     * onto the full output image.
     *
     * @param fileIn      Path to the input image.
     * @param points      Four points in the original image defined in the following order:
     *                    - (x1, y1): top-left
     *                    - (x2, y2): top-right
     *                    - (x3, y3): bottom-right
     *                    - (x4, y4): bottom-left
     * @param fileOut     Path where the transformed image will be saved.
     * @param finalWidth  Width of the output image, in pixels.
     * @param finalHeight Height of the output image, in pixels.
     */
    public static void transformImage(String fileIn, Point[] points, String fileOut, int finalWidth, int finalHeight) {
        // Load the original image
        Mat imageOrig = Imgcodecs.imread(fileIn);
        if (imageOrig.empty()) {
            throw new IllegalArgumentException("Can not read image: " + fileIn);
        }
        log("Original size: " + imageOrig.width() + "x" + imageOrig.height());
        log("Final size   : " + finalWidth + "x" + finalHeight);

        // Define the destination points
        MatOfPoint2f pointsDest = new MatOfPoint2f(
                new Point(0, 0),                            // top-left
                new Point(finalWidth - 1, 0),               // top-right
                new Point(finalWidth - 1, finalHeight - 1), // bottom-right
                new Point(0, finalHeight - 1)               // bottom-left
        );
        // Define the original points
        MatOfPoint2f pointsOrig = new MatOfPoint2f(points);

        // Calculate the perspective transformation matrix
        Mat transformationMat = Imgproc.getPerspectiveTransform(pointsOrig, pointsDest);

        // Create the transformed image with a white background
        Mat imageTransformed = new Mat(finalHeight, finalWidth, imageOrig.type(), new Scalar(255, 255, 255));

        // Apply the transformation
        Imgproc.warpPerspective(imageOrig, imageTransformed, transformationMat,
                new Size(finalWidth, finalHeight), Imgproc.INTER_LINEAR,
                Core.BORDER_CONSTANT, new Scalar(255, 255, 255));

        // Save the final image
        boolean saved = Imgcodecs.imwrite(fileOut, imageTransformed);
        if (saved) {
            log("Image saved to " + fileOut);
        } else {
            log("Error saving image " + fileOut);
        }
    }

    private static void log(String msg) {
        System.out.println(msg);
    }

    public static void main(String[] args) {
        String fileIn = "original.png";
        String fileOut = "resul.png";
        Point[] points = {
                new Point(12, 0),    // top-left
                new Point(173, 13),  // top-right
                new Point(174, 141), // bottom-right
                new Point(0, 64)     // bottom-left
        };
        // Transform the image
        transformImage(fileIn, points, fileOut, 200, 100);
    }
}
The code takes an area of an image whose plane is not frontal and generates a new image equivalent to a frontal view.
The code works as specified.
Perspective projection is non-linear. Your expectation of perspective projection is mistaken.
What you expect corresponds to a screen-space mapping, one that maintains length ratios in screen space.
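Concretely, the 3x3 matrix returned by getPerspectiveTransform maps a source point (x, y) according to the standard homography formula (writing h_ij for the matrix entries):

$$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \qquad y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}}$$

Because the denominator depends on (x, y), the mapping restricted to a line is not affine: points evenly spaced in the source are in general not evenly spaced in the output, which is exactly the stretch on one side and compression on the other that you observe.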
Behold, some quick sketches of the similar triangles involved in perspective projection:
3D line and midpoint, projected onto screen:
Screen-space (2D) line and midpoint, and what it could correspond to in 3D:
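You can also check this numerically with OpenCV itself. The sketch below (the MidpointCheck class is only for illustration, not part of your code, and it assumes your green line passes roughly through the screen-space middle of the marked area) builds the same homography from your points and pushes the screen-space midpoints of the quadrilateral's top and bottom edges through it. If the mapping preserved length ratios, both would land at x = 99.5, the middle of your 200 px output, but in general they do not:

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import java.util.List;

public class MidpointCheck {

    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        // Same source quadrilateral and 200x100 destination as in the question
        MatOfPoint2f pointsOrig = new MatOfPoint2f(
                new Point(12, 0), new Point(173, 13),
                new Point(174, 141), new Point(0, 64));
        MatOfPoint2f pointsDest = new MatOfPoint2f(
                new Point(0, 0), new Point(199, 0),
                new Point(199, 99), new Point(0, 99));
        Mat h = Imgproc.getPerspectiveTransform(pointsOrig, pointsDest);

        // Screen-space midpoints of the top and bottom edges of the source quadrilateral
        MatOfPoint2f midpoints = new MatOfPoint2f(
                new Point((12 + 173) / 2.0, (0 + 13) / 2.0),
                new Point((174 + 0) / 2.0, (141 + 64) / 2.0));
        MatOfPoint2f mapped = new MatOfPoint2f();
        Core.perspectiveTransform(midpoints, mapped, h);

        // If the transform preserved screen-space length ratios, both x values
        // would be 99.5 (the middle of the 200 px output); with a genuine
        // homography they generally land somewhere else.
        List<Point> result = mapped.toList();
        System.out.println("Top-edge midpoint maps to:    " + result.get(0));
        System.out.println("Bottom-edge midpoint maps to: " + result.get(1));
    }
}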