
Obtaining a CVPixelBuffer from a UIImage Removes the Alpha Channel


The function below takes a UIImage and returns a CVPixelBuffer created from it, but the alpha channel is lost in the process.

    class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {
        var pixelBufferOut: CVPixelBuffer?

        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
        if status != kCVReturnSuccess {
            fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
        }

        let pixelBuffer = pixelBufferOut!

        CVPixelBufferLockBaseAddress(pixelBuffer, [])

        // Wrap the pixel buffer's memory in a CGContext so the image can be drawn straight into it.
        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: data,
                                width: Int(size.width),
                                height: Int(size.height),
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                space: rgbColorSpace,
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)

        // Clear the buffer to fully transparent before drawing.
        context!.clear(CGRect(x: 0, y: 0, width: size.width, height: size.height))

        // Scale to fit the target size while preserving the image's aspect ratio.
        let horizontalRatio = size.width / image.size.width
        let verticalRatio = size.height / image.size.height
        //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
        let aspectRatio = min(horizontalRatio, verticalRatio)   // ScaleAspectFit

        let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)

        // Center the scaled image inside the buffer.
        let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
        let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0

        context!.draw(image.cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))

        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        return pixelBuffer
    }
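
For context, this is roughly how such a function gets driven when appending frames: pull a buffer from the adaptor's pool and hand it to the adaptor. This call site is hypothetical (videoSize and presentationTime are placeholder names; pixelBufferAdaptor matches the setup shown in Edit 1 below):

    // Hypothetical call site, assuming the adaptor from Edit 1 below.
    if let pool = pixelBufferAdaptor.pixelBufferPool {
        let buffer = pixelBufferFromImage(image: image, pixelBufferPool: pool, size: videoSize)
        pixelBufferAdaptor.append(buffer, withPresentationTime: presentationTime)
    }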
I know the initial image has some pixels with alpha = 0, because running po image.pixelColor(atLocation: CGPoint(x: 0, y: 0)) in the debugger prints

    Optional
      - some : UIExtendedSRGBColorSpace 0 0 0 0

but the resulting image has a black background where those transparent pixels should be.
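
One way to see what actually lands in the buffer is to read the raw bytes back after drawing. A minimal sketch, assuming a 32-bit format such as kCVPixelFormatType_32ARGB (the helper function itself is made up):

    import CoreVideo

    // Hypothetical diagnostic: dump the four bytes of the top-left pixel so the
    // channel values (and their in-memory order) can be inspected directly.
    func dumpFirstPixel(of pixelBuffer: CVPixelBuffer) {
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
        let bytes = base.assumingMemoryBound(to: UInt8.self)
        // For kCVPixelFormatType_32ARGB the in-memory order is A, R, G, B.
        print("bytes[0..3] =", bytes[0], bytes[1], bytes[2], bytes[3])
    }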

I also tried CGImageAlphaInfo.premultipliedLast.rawValue, but that results in the image being blue, so I assume the channels are getting swapped between RGBA and ARGB ordering. Ironically, though, a blue cast read as ARGB means the last byte (B) is 255, and under an RGBA-to-ARGB swap that last byte would be RGBA's alpha, which should be 0.

How can I get the UIImage's alpha channel to carry over into the CVPixelBuffer when drawing with CGContext?
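
As an aside on the channel-swap symptom: a pairing that is known to line up on iOS is kCVPixelFormatType_32BGRA on the buffer side with premultipliedFirst plus byteOrder32Little on the CGContext side. A minimal sketch, assuming the pool were created with that format:

    // Sketch: bitmapInfo that matches a kCVPixelFormatType_32BGRA buffer.
    let bitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue
        | CGBitmapInfo.byteOrder32Little.rawValue
    let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                            width: Int(size.width),
                            height: Int(size.height),
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: bitmapInfo)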

Edit 1:

Here is my code for my pixelBufferPool.

    func createPixelBufferAdaptor() {
        let pixelFormatRGBA = kCVPixelFormatType_32RGBA // Fails
        let pixelFormatARGB = kCVPixelFormatType_32ARGB // Works
        let sourcePixelBufferAttributesDictionary = [
            kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: pixelFormatARGB),
            kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.width)),
            kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.height))
        ]
        pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                  sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
    }

It only works when I use kCVPixelFormatType_32ARGB. By "works" I mean that when I attempt to pull a buffer from the pool,

    let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
    if status != kCVReturnSuccess {
        fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
    }

the call fails for the RGBA format but succeeds for the ARGB one.
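
For debugging this kind of failure, it may help to log the actual CVReturn code and the format the pool really vends instead of hitting fatalError. A minimal sketch (the wrapper function is hypothetical; the CoreVideo calls are standard):

    import CoreVideo

    // Hypothetical diagnostic wrapper: report the CVReturn code on failure and
    // the pixel format of the buffer the pool actually vends on success.
    func makeBuffer(from pool: CVPixelBufferPool) -> CVPixelBuffer? {
        var buffer: CVPixelBuffer?
        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer)
        guard status == kCVReturnSuccess, let pixelBuffer = buffer else {
            print("CVPixelBufferPoolCreatePixelBuffer failed with CVReturn \(status)")
            return nil
        }
        print("Pool vended pixel format:", CVPixelBufferGetPixelFormatType(pixelBuffer))
        return pixelBuffer
    }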


Solution

  • Well, sadly, my scope for this topic was too narrow (ironic since most questions are considered too broad).

    I was trying to create a video from an AVCaptureSession with AVAssetWriter, and some of the frames had a depth map that I used to turn certain pixels into a clear color. With some research I found out that the alpha channel is maintained for images and for on-screen display, but for videos this is impossible.

    How do you play a video with alpha channel using AVFoundation?
    https://andreygordeev.com/2017/07/01/video-with-transparent-background-ios/
    https://github.com/aframevr/aframe/issues/3205
    https://www.reddit.com/r/swift/comments/3cw4le/is_it_possible_to_play_a_video_with_an_alpha/

    Anywho, my next goal is to do a chroma key as described in some of the answers - either that, or apply a mask derived from the depth map to my features. A sketch of the chroma-key route follows below.
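
    For reference, the usual Core Image route for a chroma key is a CIColorCube filter whose lookup cube maps the keyed hue range to alpha = 0. A minimal sketch along those lines (the green hue window is an assumption to tune):

        import CoreImage
        import UIKit

        // Build a 64x64x64 color cube that maps green-ish hues to transparent,
        // then wrap it in a CIColorCube filter.
        func makeChromaKeyFilter() -> CIFilter? {
            let size = 64
            var cube = [Float]()
            cube.reserveCapacity(size * size * size * 4)

            for b in 0..<size {
                for g in 0..<size {
                    for r in 0..<size {
                        let red = CGFloat(r) / CGFloat(size - 1)
                        let green = CGFloat(g) / CGFloat(size - 1)
                        let blue = CGFloat(b) / CGFloat(size - 1)
                        var hue: CGFloat = 0
                        _ = UIColor(red: red, green: green, blue: blue, alpha: 1)
                            .getHue(&hue, saturation: nil, brightness: nil, alpha: nil)
                        // Key out green-ish hues; the 0.3...0.4 window is an assumption to tune.
                        let alpha: Float = (hue > 0.3 && hue < 0.4) ? 0 : 1
                        // CIColorCube expects premultiplied RGBA entries.
                        cube.append(Float(red) * alpha)
                        cube.append(Float(green) * alpha)
                        cube.append(Float(blue) * alpha)
                        cube.append(alpha)
                    }
                }
            }

            let data = cube.withUnsafeBufferPointer { Data(buffer: $0) }
            return CIFilter(name: "CIColorCube",
                            parameters: ["inputCubeDimension": size,
                                         "inputCubeData": data])
        }

    The filter would then be applied per frame through the usual inputImage / outputImage CIFilter flow.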