Tags: swift, uiimage, avassetwriter, ciimage, cvpixelbuffer

How can you make a CVPixelBuffer directly from a CIImage instead of a UIImage in Swift?


I am recording filtered video through an iPhone camera, and I see a huge increase in CPU usage when converting the CIImage to a UIImage in real time while recording. My buffer function for creating a CVPixelBuffer takes a UIImage, which so far forces me to make this conversion. I'd like to instead write a buffer function that takes a CIImage, if possible, so I can skip the CIImage-to-UIImage conversion. I'm expecting a big performance boost while recording video, since there would be no handoff between the GPU and the CPU.

This is what I have right now. Within my captureOutput function, I create a UIImage from the CIImage (the filtered image), build a CVPixelBuffer from it with my buffer function, and append that to the assetWriter's pixelBufferInput:

let imageUI = UIImage(ciImage: ciImage)
let filteredBuffer: CVPixelBuffer? = buffer(from: imageUI)
let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)

My buffer function that uses a UIImage:

func buffer(from image: UIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer : CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)

    guard (status == kCVReturnSuccess) else {
        return nil
    }

    CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

    let videoRecContext = CGContext(data: pixelData,
                            width: Int(image.size.width),
                            height: Int(image.size.height),
                            bitsPerComponent: 8,
                            bytesPerRow: videoRecBytesPerRow,
                            space: (MTLCaptureView?.colorSpace)!, // Getting the current color space from an MTKView
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

    videoRecContext?.translateBy(x: 0, y: image.size.height)
    videoRecContext?.scaleBy(x: 1.0, y: -1.0)

    UIGraphicsPushContext(videoRecContext!)
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    UIGraphicsPopContext()
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

    return pixelBuffer
}

Solution

  • Create a CIContext (once, and reuse it for every frame, since constructing one is expensive) and use it to render the CIImage directly into your CVPixelBuffer with CIContext.render(_:to:). That removes the UIImage and the CGContext drawing entirely, as in the sketch below.
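
A minimal sketch of what that could look like, assuming a reusable CIContext stored alongside your capture code (the names ciContext and renderBuffer(from:) are illustrative, not from the original post):

import CoreImage
import CoreVideo

// Create the context once and reuse it for every frame; building a new
// CIContext per frame is expensive.
let ciContext = CIContext()

func renderBuffer(from ciImage: CIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(ciImage.extent.width),
                                     Int(ciImage.extent.height),
                                     kCVPixelFormatType_32BGRA, // BGRA is a common choice for video buffers
                                     attrs,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
        return nil
    }

    // Render the filtered CIImage straight into the pixel buffer; no UIImage,
    // no CGContext, and no CPU-side drawing.
    ciContext.render(ciImage, to: buffer)
    return buffer
}

Inside captureOutput, the append then looks much like before, just without the UIImage step:

if let filteredBuffer = renderBuffer(from: ciImage) {
    let success = self.assetWriterPixelBufferInput?.append(filteredBuffer, withPresentationTime: self.currentSampleTime!)
}

If you are appending through an AVAssetWriterInputPixelBufferAdaptor, you could also pull buffers from its pixelBufferPool with CVPixelBufferPoolCreatePixelBuffer instead of calling CVPixelBufferCreate for every frame, which avoids a per-frame allocation.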