Tags: ios, swift, core-image, cifilter, imagefilter

iOS Core Image CIPersonSegmentation filter crashes at context.createCGImage with EXC_BAD_ACCESS


I have the below simple code for CIPersonSegmentation

if let image = UIImage(named: "demo9"), let edited = applyPersonFilter(to: image) {
    imageView.image = edited
}

func applyPersonFilter(to image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }
    
    let context = CIContext(options: nil)
    
    let filter = CIFilter(name: "CIPersonSegmentation", parameters: [
        kCIInputImageKey: ciImage,
        "inputQualityLevel": 1.0
    ])
    
    guard let outputImage = filter?.outputImage else {
        return nil
    }
    
    print("outputImage: \(outputImage.extent)")
    
    guard let cgImage = context.createCGImage(outputImage, from: outputImage.extent) else { return nil }
    
    return UIImage(cgImage: cgImage)
}

This simply crashes at createCGImage with:

Thread 1: EXC_BAD_ACCESS (code=EXC_I386_GPFLT)


Note that the print before context.createCGImage logs (0.0, 0.0, 512.0, 384.0).

If I replace the "CIPersonSegmentation" with some other filter name, then it works fine.

EDIT:

I think I may have figured out the problem. I was running the above code on the Simulator when it crashed. Then I came across this article while researching another image-editing task:

https://www.artemnovichkov.com/blog/remove-background-from-image-in-swiftui

It says:

"The requests may fail. For example, handler throws an error if you try to run the code on an iOS Simulator: 🚫 Domain=com.apple.Vision Code=9 "Could not create inference context" UserInfo={NSLocalizedDescription=Could not create inference context}"

Note that the above is about the Vision API, not CIFilter. However, I suspect that CIPersonSegmentation also uses the Vision API behind the scenes, hence the crash on the Simulator.
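If that suspicion is right, the same failure should be reproducible by calling Vision directly. Here is a minimal sketch (the `cgImage` input is assumed; on the Simulator the `perform` call is where Vision reports `Domain=com.apple.Vision Code=9 "Could not create inference context"`):

```swift
import Vision

func trySegmentation(on cgImage: CGImage) {
    // The Vision request that CIPersonSegmentation is suspected to wrap (iOS 15+).
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
        print("mask observations: \(request.results?.count ?? 0)")
    } catch {
        // On the Simulator this is expected to throw the
        // "Could not create inference context" error quoted above.
        print("segmentation failed: \(error)")
    }
}
```

Unlike the CIFilter path, the Vision request surfaces the failure as a catchable error instead of crashing, which makes it easier to confirm what is going on.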

It works fine on device.

Can someone confirm my suspicion?


Solution

  • Can confirm.

    The Simulator can't run ML models that rely on the Neural Engine or the GPU; if a model can't fall back to the CPU, it can't run in the Simulator at all. The CIPersonSegmentation filter most likely uses Vision's VNGeneratePersonSegmentationRequest under the hood, which in turn runs an ML model.

    A reasonable strategy is to use #if targetEnvironment(simulator) to return nil or an empty mask image, so the code fails gracefully instead of crashing.
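A minimal sketch of that guard, wrapping the function from the question (function and parameter names taken from the question; the `nil` fallback is one option, returning a blank mask is another):

```swift
import UIKit
import CoreImage

func applyPersonFilter(to image: UIImage) -> UIImage? {
    #if targetEnvironment(simulator)
    // The ML-backed segmentation can't run here; bail out instead of crashing.
    return nil
    #else
    guard let ciImage = CIImage(image: image) else { return nil }

    let context = CIContext(options: nil)
    let filter = CIFilter(name: "CIPersonSegmentation", parameters: [
        kCIInputImageKey: ciImage,
        "inputQualityLevel": 1.0
    ])

    guard let outputImage = filter?.outputImage,
          let cgImage = context.createCGImage(outputImage, from: outputImage.extent)
    else { return nil }

    return UIImage(cgImage: cgImage)
    #endif
}
```

Because `#if targetEnvironment(simulator)` is a compile-time check, the segmentation code is not even compiled into Simulator builds, so the crash cannot occur there.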