I'm trying to add a watermark/logo to a video that I'm recording using AVFoundation's AVCaptureVideoDataOutput. My class is set as the sampleBufferDelegate and receives the CMSampleBufferRefs. I already apply some effects to each CMSampleBufferRef's CVPixelBuffer and pass it back to the AVAssetWriter.
The logo in the top left corner is delivered as a transparent PNG. The problem I'm having is that the transparent parts of the UIImage come out black once written to the video. Does anyone have an idea of what I'm doing wrong or might be forgetting?
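For context, the capture output and writer are wired up roughly like this (an illustrative sketch, not my exact code; _captureSession, _pixelBufferAdaptor, the queue name and the BGRA pixel format are assumptions):

// Capture setup (sketch): deliver BGRA frames to this class on a serial queue.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("video.capture.queue", DISPATCH_QUEUE_SERIAL)];
if ([_captureSession canAddOutput:videoOutput]) {
    [_captureSession addOutput:videoOutput];
}

// Writer side (sketch), inside the delegate method once the frame is processed:
[_pixelBufferAdaptor appendPixelBuffer:pixelBuffer
                  withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];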
Code snippets below:
// Somewhere in the init of the class:
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{ kCIContextWorkingColorSpace : [NSNull null] }];
// Sample buffer delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    ....
    UIImage *logoImage   = [UIImage imageNamed:@"logo.png"];
    CIImage *renderImage = [[CIImage alloc] initWithCGImage:logoImage.CGImage];
    CGColorSpaceRef cSpace = CGColorSpaceCreateDeviceRGB();

    [_ciContext render:renderImage
       toCVPixelBuffer:pixelBuffer
                bounds:[renderImage extent]
            colorSpace:cSpace];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRelease(cSpace);
    ....
}
It looks like the CIContext does not draw the CIImage's alpha. Any ideas?
For developers who've encountered the same issue:
It appears that anything rendered on the GPU and then written to the video ends up as a black hole in the video. Instead, I removed the code above, created a CGContextRef (a bitmap context backed by the pixel buffer's memory), like you would when editing images, and drew the logo onto that context.
Code:
....
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

CGColorSpaceRef cSpace = CGColorSpaceCreateDeviceRGB();

// Wrap the pixel buffer's memory in a bitmap context so Core Graphics
// can alpha-blend straight into the video frame (BGRA layout).
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                             CVPixelBufferGetWidth(pixelBuffer),
                                             CVPixelBufferGetHeight(pixelBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(pixelBuffer),
                                             cSpace,
                                             (CGBitmapInfo)
                                                 kCGBitmapByteOrder32Little |
                                                 kCGImageAlphaPremultipliedFirst);

CGRect renderBounds = ...
// CGContextDrawImage respects the PNG's alpha, so the transparent
// parts of the logo stay transparent.
CGContextDrawImage(context, renderBounds, [overlayImage CGImage]);

CGContextRelease(context);
CGColorSpaceRelease(cSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
....
And of course, the global EAGLContext and the CIContext are no longer needed.
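One more note: since the delegate fires for every frame, it's worth hoisting the work that doesn't change per frame out of the callback. A minimal sketch, assuming _logoCGImage and _rgbColorSpace are ivars you add for this purpose:

// Done once, e.g. in init:
_logoCGImage   = CGImageRetain([UIImage imageNamed:@"logo.png"].CGImage);
_rgbColorSpace = CGColorSpaceCreateDeviceRGB();

// In the per-frame callback the cached values replace the repeated
// [UIImage imageNamed:] and CGColorSpaceCreateDeviceRGB() calls:
CGContextDrawImage(context, renderBounds, _logoCGImage);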