Tags: swift, core-image

How to use CIFilter with CIPerspectiveCorrection on a capture session?


I want to scan documents and automatically fix any perspective distortion caused by the phone's angle, the way the Notes app can. Everything works until I apply CIFilter(name: "CIPerspectiveCorrection"); then the image comes out mangled, and I am struggling to understand where I am going wrong.

I have tried swapping the parameters, using other filters, and rotating the image, but none of that worked.

Here is a small project I set up to test all this: https://github.com/iViktor/scanner

Basically, I am running a VNDetectRectanglesRequest on the AVCaptureSession and saving the rectangle I get in private var targetRectangle = VNRectangleObservation()

I then use that observation to recalculate the points inside the captured image and to run the filter on it.
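
For context, the request is driven from the capture output roughly like this. This is only a minimal sketch of the pattern (the real wiring is in the linked repo), assuming a standard AVCaptureVideoDataOutputSampleBufferDelegate:

    import AVFoundation
    import Vision

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        let request = VNDetectRectanglesRequest { req, _ in
            if let observation = req.results?.first as? VNRectangleObservation {
                // Vision reports normalized (0...1) coordinates with the
                // origin in the bottom-left corner.
                self.targetRectangle = observation
            }
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }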

extension DocumentScannerViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else { return }
        guard let ciImage = CIImage(data: imageData, options: [.applyOrientationProperty: true]) else { return }
        let image = UIImage(ciImage: ciImage)

        let imageTopLeft = CGPoint(x: image.size.width * targetRectangle.bottomLeft.x, y: targetRectangle.bottomLeft.y * image.size.height)
        let imageTopRight = CGPoint(x: image.size.width * targetRectangle.bottomRight.x, y: targetRectangle.bottomRight.y * image.size.height)
        let imageBottomLeft = CGPoint(x: image.size.width * targetRectangle.topLeft.x, y: targetRectangle.topLeft.y * image.size.height)
        let imageBottomRight = CGPoint(x: image.size.width * targetRectangle.topRight.x, y: targetRectangle.topRight.y * image.size.height)

        let flattenedImage = image.flattenImage(topLeft: imageTopLeft, topRight: imageTopRight, bottomLeft: imageBottomLeft, bottomRight: imageBottomRight)
        let finalImage = UIImage(ciImage: flattenedImage, scale: image.scale, orientation: image.imageOrientation)

        //performSegue(withIdentifier: "showPhoto", sender: image)
        //performSegue(withIdentifier: "showPhoto", sender: UIImage(ciImage: flattenedImage))
        performSegue(withIdentifier: "showPhoto", sender: finalImage)
    }
}

This is the code that is not working and that I'm struggling with:

extension UIImage {

    func flattenImage(topLeft: CGPoint, topRight: CGPoint, bottomLeft: CGPoint, bottomRight: CGPoint) -> CIImage {
        let docImage = self.ciImage!
        let rect = CGRect(origin: .zero, size: self.size)
        let perspectiveCorrection = CIFilter(name: "CIPerspectiveCorrection")!
        perspectiveCorrection.setValue(CIVector(cgPoint: self.cartesianForPoint(point: topLeft, extent: rect)), forKey: "inputTopLeft")
        perspectiveCorrection.setValue(CIVector(cgPoint: self.cartesianForPoint(point: topRight, extent: rect)), forKey: "inputTopRight")
        perspectiveCorrection.setValue(CIVector(cgPoint: self.cartesianForPoint(point: bottomLeft, extent: rect)), forKey: "inputBottomLeft")
        perspectiveCorrection.setValue(CIVector(cgPoint: self.cartesianForPoint(point: bottomRight, extent: rect)), forKey: "inputBottomRight")
        perspectiveCorrection.setValue(docImage, forKey: kCIInputImageKey)

        return perspectiveCorrection.outputImage!
    }

    // Converts a UIKit point (origin at the top-left) into Core Image's
    // Cartesian coordinates (origin at the bottom-left).
    func cartesianForPoint(point: CGPoint, extent: CGRect) -> CGPoint {
        return CGPoint(x: point.x, y: extent.height - point.y)
    }
}
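
To make the coordinate flip concrete (a quick worked example, not part of the original code): cartesianForPoint only inverts the y axis, so in a 100 × 200 image a point 20 points from the top ends up 180 points from the bottom:

    let extent = CGRect(x: 0, y: 0, width: 100, height: 200)
    let uikitPoint = CGPoint(x: 10, y: 20)   // measured from the top-left
    let ciPoint = CGPoint(x: uikitPoint.x,
                          y: extent.height - uikitPoint.y)
    // ciPoint == (10.0, 180.0), measured from the bottom-left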

So, in the end, I want to scan a document, such as an invoice, and automatically fix any user error like perspective distortion. Right now, applying the filter produces a weird, hand-fan-like effect instead of a corrected image.


Solution

  • Based on the comments, I updated the code to use the points of the drawn path instead of targetRectangle, and changed how I map them onto the image, because Core Image uses a different coordinate system and the picture is mirrored.

    I updated

    private func startScanner() {
        ... ... ...
        let request = VNDetectRectanglesRequest { req, error in
            DispatchQueue.main.async {
                if let observation = req.results?.first as? VNRectangleObservation {
                    let points = self.targetRectLayer.drawTargetRect(observation: observation, previewLayer: self.previewLayer, animated: false)
                    // Divide by the view size to normalize the drawn-path points
                    // into the 0...1 range, so they can later be scaled to the
                    // captured image's dimensions.
                    let size = self.scannerView.frame.size
                    self.trackedTopLeftPoint = CGPoint(x: points.topLeft.x / size.width, y: points.topLeft.y / size.height)
                    self.trackedTopRightPoint = CGPoint(x: points.topRight.x / size.width, y: points.topRight.y / size.height)
                    self.trackedBottomLeftPoint = CGPoint(x: points.bottomLeft.x / size.width, y: points.bottomLeft.y / size.height)
                    self.trackedBottomRightPoint = CGPoint(x: points.bottomRight.x / size.width, y: points.bottomRight.y / size.height)
                } else {
                    _ = self.targetRectLayer.drawTargetRect(observation: nil, previewLayer: self.previewLayer, animated: false)
                }
            }
        }
    }
    

    and

    extension DocumentScannerViewController: AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            guard let imageData = photo.fileDataRepresentation() else { return }
            guard let ciImage = CIImage(data: imageData, options: [.applyOrientationProperty: true]) else { return }
            let image = UIImage(ciImage: ciImage)

            // Core Image works in Cartesian coordinates (y = 0 is the bottom-left
            // corner), and the picture is mirrored, so the tracked points map to
            // "swapped" corners here.
            let imageTopLeft = CGPoint(x: image.size.width * trackedBottomLeftPoint.x, y: trackedBottomLeftPoint.y * image.size.height)
            let imageTopRight = CGPoint(x: image.size.width * trackedTopLeftPoint.x, y: trackedTopLeftPoint.y * image.size.height)
            let imageBottomLeft = CGPoint(x: image.size.width * trackedBottomRightPoint.x, y: trackedBottomRightPoint.y * image.size.height)
            let imageBottomRight = CGPoint(x: image.size.width * trackedTopRightPoint.x, y: trackedTopRightPoint.y * image.size.height)

            let flattenedImage = image.flattenImage(topLeft: imageTopLeft, topRight: imageTopRight, bottomLeft: imageBottomLeft, bottomRight: imageBottomRight)
            let newCGImage = CIContext(options: nil).createCGImage(flattenedImage, from: flattenedImage.extent)
            let doneCroppedImage = UIImage(cgImage: newCGImage!, scale: image.scale, orientation: image.imageOrientation)
            performSegue(withIdentifier: "showPhoto", sender: doneCroppedImage)
        }
    }
    

    That fixed it.
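
    For what it's worth, the explicit CIContext render also seems to matter, not just the point remapping: UIImage(ciImage:) merely wraps an unrendered Core Image recipe (its cgImage property is nil), which is presumably why the commented-out performSegue attempts in the question misbehaved downstream. A minimal sketch of the difference:

        let wrapped = UIImage(ciImage: flattenedImage)
        // wrapped.cgImage is nil here - the filter chain has not been executed yet.

        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(flattenedImage, from: flattenedImage.extent) {
            // Backed by a real bitmap, so it is safe to hand to any UIKit code.
            let rendered = UIImage(cgImage: cgImage)
            _ = rendered
        }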