ios, objective-c, avassetwriter, avassetreader

Read AVAsset into frames and compile back to video


Hi everyone! The idea in my project is to take a video, split it into frames, apply a specific effect frame by frame (not the topic of this question), and then compile everything back into a video.

Using the following code I am able to read the video into frames and put them into an array:

//initialize avassetreader
AVAsset *avAsset = [AVAsset assetWithURL:url];
NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:avAsset error:&error];

//get video track
NSArray *videoTracks = [avAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];

NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];

AVAssetReaderTrackOutput *asset_reader_output = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];

[reader addOutput:asset_reader_output];
[reader startReading];

Next, as I said above, I read the track frame by frame:

//while there is something to read
while ( [reader status] == AVAssetReaderStatusReading ) {

    CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
    if (sampleBuffer == NULL) {
        break; // reached the end of the track
    }
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    //Generate image to edit
    unsigned char* pixel = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
    CGColorSpaceRef colorSpace=CGColorSpaceCreateDeviceRGB();
    CGContextRef context=CGBitmapContextCreate(pixel, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little|kCGImageAlphaPremultipliedFirst);

    CGImageRef image = CGBitmapContextCreateImage(context);
    UIImage *myImage = [[UIImage alloc] initWithCGImage:image];
    [photos addObject:myImage];

    // Release per-frame objects so the loop doesn't leak memory on every iteration
    CGImageRelease(image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CFRelease(sampleBuffer);
}

I am able to put everything into an array, but now I need to compile the frames back into a video track, combine it with the audio from the original asset, and then save the result. However, I couldn't find any useful information on the web. Please help!

Thanks in advance!


Solution

  • Once you have an array of images, AVAssetWriter is your friend; a rough sketch of the writing side follows below.
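Here is a minimal sketch of that writing side, assuming the frames sit in the `photos` array from the question and the caller supplies an output URL, a frame size, and a frame rate. The helper name `writeImagesToMovie`, the H.264/QuickTime settings, and the 30 fps suggestion below are illustrative choices, not requirements: the idea is to set up an AVAssetWriter with a video AVAssetWriterInput, wrap it in an AVAssetWriterInputPixelBufferAdaptor, and append one pixel buffer per image with an increasing presentation time.

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Hypothetical helper: compiles the UIImages in `photos` into a QuickTime movie at outputURL.
// Frame size, frame rate and codec settings are assumptions - match them to the source track.
static void writeImagesToMovie(NSArray *photos, NSURL *outputURL, CGSize size, int32_t fps)
{
    NSError *error = nil;
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&error];

    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @(size.width),
                                     AVVideoHeightKey : @(size.height) };
    AVAssetWriterInput *writerInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:videoSettings];

    NSDictionary *bufferAttributes = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                        (id)kCVPixelBufferWidthKey           : @(size.width),
                                        (id)kCVPixelBufferHeightKey          : @(size.height) };
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                         sourcePixelBufferAttributes:bufferAttributes];

    [writer addInput:writerInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    for (NSUInteger i = 0; i < photos.count; i++) {
        // Render the UIImage back into a BGRA pixel buffer taken from the adaptor's pool
        CVPixelBufferRef buffer = NULL;
        CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
        if (buffer == NULL) {
            continue;
        }

        CVPixelBufferLockBaseAddress(buffer, 0);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                                     (size_t)size.width, (size_t)size.height, 8,
                                                     CVPixelBufferGetBytesPerRow(buffer), colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        UIImage *frame = photos[i];
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), frame.CGImage);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(buffer, 0);

        // Wait until the input can take more data, then append the frame at i / fps seconds
        while (!writerInput.readyForMoreMediaData) {
            [NSThread sleepForTimeInterval:0.05];
        }
        [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake((int64_t)i, fps)];
        CVPixelBufferRelease(buffer);
    }

    [writerInput markAsFinished];
    [writer finishWritingWithCompletionHandler:^{
        NSLog(@"AVAssetWriter finished with status %ld", (long)writer.status);
    }];
}

A call like writeImagesToMovie(photos, outputURL, videoTrack.naturalSize, 30); would produce a silent movie. To bring the original soundtrack back, the usual approach is to place the written video track and the original asset's audio track into an AVMutableComposition and export the composition with AVAssetExportSession.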