In my application I need to capture a video and put a watermark on it. The watermark should be text (time and notes). I saw code using the QTKit framework, but I read that framework is not available on iPhone.
Thanks in advance.
Use AVFoundation. I would suggest grabbing frames with AVCaptureVideoDataOutput, then overlaying each captured frame with the watermark image, and finally writing the processed frames to a file using AVAssetWriter.
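To make the pipeline concrete, here is a minimal capture-setup sketch. It is not from the original answer; names like the queue label are mine, and error handling is omitted. It assumes `self` adopts AVCaptureVideoDataOutputSampleBufferDelegate:

```objectivec
// Sketch: configure an AVCaptureSession that delivers BGRA frames
// to a sample buffer delegate callback.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
[session addInput:input];

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
// Ask for BGRA so the frames match the CGBitmapContext settings
// used when converting sample buffers to CGImageRefs.
output.videoSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
[session addOutput:output];
[session startRunning];
```

Each captured frame will then arrive in `captureOutput:didOutputSampleBuffer:fromConnection:` on your delegate.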
Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of the things I've mentioned. I haven't seen any that give code for exactly the effect you want, but you should be able to mix and match pretty easily.
EDIT:
Take a look at these links:
iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post might be helpful just by nature of containing relevant code.
AVCaptureVideoDataOutput will return frames as CMSampleBufferRefs. Convert them to CGImageRefs using this code:
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    // Get the pixel buffer and lock it so we can read its base address
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a bitmap context matching the BGRA pixel format of the buffer
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
        colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Do NOT call CVBufferRelease(imageBuffer) -- the sample buffer owns it.
    // The caller is responsible for CGImageRelease() on the returned image.
    return newImage;
}
From there, convert the CGImageRef to a UIImage:

UIImage *img = [UIImage imageWithCGImage:yourCGImage];

Then use

[img drawInRect:CGRectMake(x, y, width, height)];

to draw the frame into a context, draw a PNG of the watermark over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you're not filling up memory with tons of UIImages.
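Putting the drawing steps together, a per-frame compositing routine might look like this sketch. The watermark file name, positions, and font are my assumptions, not anything from the question:

```objectivec
// Sketch: draw the captured frame into an offscreen context, then a
// PNG watermark and a timestamp string on top, and return the result.
- (UIImage *)watermarkFrame:(UIImage *)frame
{
    UIGraphicsBeginImageContext(frame.size);
    [frame drawInRect:CGRectMake(0, 0, frame.size.width, frame.size.height)];

    // Hypothetical asset name -- substitute your own watermark image
    UIImage *logo = [UIImage imageNamed:@"watermark.png"];
    [logo drawInRect:CGRectMake(10, 10, logo.size.width, logo.size.height)];

    // Time and notes text for the watermark
    [[UIColor whiteColor] set];
    NSString *stamp = [[NSDate date] description];
    [stamp drawAtPoint:CGPointMake(10, frame.size.height - 30)
              withFont:[UIFont systemFontOfSize:18]];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

Doing this off the main thread (on the capture queue) keeps the UI responsive, but UIKit string drawing must be used carefully off-main on older iOS versions; Core Graphics text drawing is a safer alternative there.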
How do I export UIImage array as a movie? - this post shows how to add the UIImages you have processed to a video for a given duration.
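The writing side of that approach looks roughly like the following sketch. The output path, dimensions, and frame timing are placeholders you would fill in:

```objectivec
// Sketch: set up an AVAssetWriter with a pixel buffer adaptor so
// processed frames can be appended one at a time as they arrive.
NSError *error = nil;
AVAssetWriter *writer = [[AVAssetWriter alloc]
    initWithURL:[NSURL fileURLWithPath:outputPath]  // outputPath is assumed
       fileType:AVFileTypeQuickTimeMovie
          error:&error];

NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
AVAssetWriterInput *writerInput = [AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                sourcePixelBufferAttributes:nil];
[writer addInput:writerInput];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// For each watermarked frame: convert the UIImage back to a
// CVPixelBufferRef (see the linked post) and append it:
//   [adaptor appendPixelBuffer:buffer withPresentationTime:time];
// When done: [writerInput markAsFinished]; then finish writing the file.
```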
This should get you well on your way to watermarking your videos. Remember to practice good memory management, because leaking images that are coming in at 20-30 fps is a great way to crash the app.