I am using React Native Vision Camera and React Native Fast OpenCV. I have successfully obtained a PointVector with the corner points of a document found in an image captured by the Vision Camera.
Now I want to create another PointVector that represents the four corners of the phone's screen or the original image (for example, (0,0), (width,0), (width,height), (0,height)). I want to use this new PointVector in the getPerspectiveTransform function, but I can't figure out how to create it.
I've checked the documentation for React Native Fast OpenCV but haven't found a solution. Is there any way to achieve this?
Any guidance or suggestions would be greatly appreciated.
Here is what my function for detecting contours and drawing edges looks like:
const testContours = () => {
  const image = OpenCV.base64ToMat(capturedImage!.base64);
  let output_image = OpenCV.createObject(
    ObjectType.Mat,
    frameDimensions.value!.height,
    frameDimensions.value!.width,
    DataTypes.CV_8U
  );
  let kernel = OpenCV.createObject(ObjectType.Size, 5, 5);

  OpenCV.invoke(
    'cvtColor',
    image,
    output_image,
    ColorConversionCodes.COLOR_BGR2GRAY
  );
  OpenCV.invoke(
    'threshold',
    output_image,
    output_image,
    165,
    250,
    ThresholdTypes.THRESH_BINARY
  );
  OpenCV.invoke('GaussianBlur', output_image, output_image, kernel, 0);

  const contours = OpenCV.createObject(ObjectType.MatVector);
  OpenCV.invoke(
    'findContours',
    output_image,
    contours,
    RetrievalModes.RETR_TREE,
    ContourApproximationModes.CHAIN_APPROX_SIMPLE
  );

  const contours_mat = OpenCV.toJSValue(contours);
  let largest_contour_area = 0;
  let largest_contour: PointVector | undefined;

  for (let i = 0; i < contours_mat.array.length; i++) {
    const contour = OpenCV.copyObjectFromVector(contours, i);
    const hull = OpenCV.createObject(ObjectType.Mat, 0, 0, DataTypes.CV_8U);
    OpenCV.invoke('convexHull', contour, hull);
    const epsilon = OpenCV.invoke('arcLength', hull, true);
    const approx = OpenCV.createObject(ObjectType.PointVector);
    OpenCV.invoke('approxPolyDP', hull, approx, 0.02 * epsilon.value, true);
    const { value: area } = OpenCV.invoke('contourArea', approx, false);
    if (area > largest_contour_area) {
      largest_contour_area = area;
      largest_contour = approx;
    }
  }

  if (largest_contour) {
    let n_points = OpenCV.toJSValue(largest_contour).array.length;
    for (let i = 0; i < n_points - 1; i++) {
      OpenCV.invoke(
        'line',
        image,
        OpenCV.copyObjectFromVector(largest_contour, i),
        OpenCV.copyObjectFromVector(largest_contour, i + 1),
        OpenCV.createObject(ObjectType.Scalar, 0, 255, 0),
        3,
        LineTypes.FILLED
      );
    }
    OpenCV.invoke(
      'line',
      image,
      OpenCV.copyObjectFromVector(largest_contour, n_points - 1),
      OpenCV.copyObjectFromVector(largest_contour, 0),
      OpenCV.createObject(ObjectType.Scalar, 0, 255, 0),
      3,
      LineTypes.FILLED
    );
    OpenCV.invoke('rotate', image, image, RotateFlags.ROTATE_90_CLOCKWISE);
    setSavedImage(OpenCV.toJSValue(image).base64);
  }

  OpenCV.clearBuffers();
};
After rotation I'd like to create a matrix for the perspective transform and then apply the transformation using that matrix.
React Native Vision Camera version: 4.5.3
React Native Fast OpenCV version: 0.3.2
There is an explanation in the react-native-fast-opencv repository of how to use getPerspectiveTransform. Please check the example below:
const matrixPtr = OpenCV.invoke(
  'getPerspectiveTransform',
  OpenCV.createObject(
    ObjectType.Point2fVector,
    points.value.map(p =>
      OpenCV.createObject(ObjectType.Point2f, p.x, p.y),
    ),
  ),
  OpenCV.createObject(
    ObjectType.Point2fVector,
    [
      {x: 0, y: 0},
      {x: warpedWidth, y: 0},
      {x: warpedWidth, y: warpedHeight},
      {x: 0, y: warpedHeight},
    ].map(p => OpenCV.createObject(ObjectType.Point2f, p.x, p.y)),
  ),
  DecompTypes.DECOMP_LU,
);

const destPtr = OpenCV.createObject(ObjectType.Mat, 0, 0, DataTypes.CV_8U);
OpenCV.invoke(
  'warpPerspective',
  srcPtr,
  destPtr,
  matrixPtr,
  OpenCV.createObject(ObjectType.Size, warpedWidth, warpedHeight),
  InterpolationFlags.INTER_LINEAR,
  BorderTypes.BORDER_CONSTANT,
  OpenCV.createObject(ObjectType.Scalar, 0),
);
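One caveat: getPerspectiveTransform pairs the source corners with the destination corners by index, so the detected corners must arrive in the same order as the destination list (top-left, top-right, bottom-right, bottom-left above). approxPolyDP gives no ordering guarantee. A minimal pure-TypeScript sketch of the common sum/difference heuristic, run on the plain `{x, y}` objects you get from OpenCV.toJSValue (orderCorners is an illustrative helper, not part of the library):

```typescript
type Pt = { x: number; y: number };

// Order four arbitrary corner points as [top-left, top-right,
// bottom-right, bottom-left]. Heuristic: the top-left corner has the
// smallest x + y and the bottom-right the largest; the top-right has
// the smallest y - x and the bottom-left the largest.
function orderCorners(pts: Pt[]): Pt[] {
  const bySum = [...pts].sort((a, b) => a.x + a.y - (b.x + b.y));
  const byDiff = [...pts].sort((a, b) => a.y - a.x - (b.y - b.x));
  return [bySum[0], byDiff[0], bySum[3], byDiff[3]];
}
```

Pass the ordered points into the Point2fVector on the source side so each detected corner maps to its intended destination corner.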
Source: https://github.com/lukaszkurantdev/react-native-fast-opencv/issues/13#issuecomment-2395165024
I hope it will be helpful for you.
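One more detail: warpedWidth and warpedHeight in the snippet above are left for you to choose. You can hard-code the screen size, but a common alternative that preserves the document's aspect ratio is to take the longer of each pair of opposing edges of the detected quad. A sketch in plain TypeScript (warpedSize is an illustrative helper, not a library function; the input is assumed to already be ordered top-left, top-right, bottom-right, bottom-left):

```typescript
const dist = (a: { x: number; y: number }, b: { x: number; y: number }) =>
  Math.hypot(a.x - b.x, a.y - b.y);

// Compute output dimensions from corners ordered [tl, tr, br, bl]:
// width spans the top/bottom edges, height spans the left/right edges.
function warpedSize([tl, tr, br, bl]: { x: number; y: number }[]) {
  const width = Math.round(Math.max(dist(tl, tr), dist(bl, br)));
  const height = Math.round(Math.max(dist(tl, bl), dist(tr, br)));
  return { width, height };
}
```

Feed the result into the destination Point2fVector and the Size passed to warpPerspective.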