ios, objective-c, video-capture, frame-rate, cmsamplebufferref

Capture 120/240 fps using AVCaptureVideoDataOutput into a frame buffer at low resolution


Currently, using the iPhone 5s/6, I am able to capture 120 (iPhone 5s) or 240 (iPhone 6) frames per second into a CMSampleBufferRef. However, the AVCaptureDeviceFormat that is returned to me only provides these high-speed frame rates at a resolution of 1280x720.

I would like to capture at a lower resolution (640x480 or below), since I will be putting the frames into a circular buffer for storage purposes. While I am able to reduce the resolution in the didOutputSampleBuffer delegate method, I would like to know whether there is any way for the CMSampleBufferRef to be delivered at a lower resolution directly, by configuring the device or a capture setting, instead of taking the 720p image and downscaling it manually using the CVPixelBuffer.
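
For context, here is roughly how the available formats can be inspected for a lower-resolution high-frame-rate option (a minimal sketch, assuming the default back-facing video device and the AVFoundation/CoreMedia frameworks):

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceFormat *bestFormat = nil;
    AVFrameRateRange *bestRange = nil;
    
    // look for a format that reaches a high frame rate at 640x480 or below
    for (AVCaptureDeviceFormat *format in device.formats) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
            if (range.maxFrameRate >= 120 && dims.width <= 640) {
                bestFormat = format;
                bestRange = range;
            }
        }
    }
    
    if (bestFormat && [device lockForConfiguration:NULL]) {
        device.activeFormat = bestFormat;
        device.activeVideoMinFrameDuration = bestRange.minFrameDuration;
        device.activeVideoMaxFrameDuration = bestRange.minFrameDuration;
        [device unlockForConfiguration];
    }

On the 5s/6 this search comes up empty; the only formats advertising 120/240 fps are the 1280x720 ones, which is what leads to the question below.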

I need to store the images in a buffer for later processing and want to apply the minimum processing necessary, or else I will begin to drop frames. If I can avoid resizing and obtain a lower-resolution CMSampleBuffer from the didOutputSampleBuffer delegate method directly, that would be ideal.

At 240 fps I have roughly 4 ms (1/240 s) to process each image, and the resizing routine cannot keep up with downscaling at that rate. I would still like to store the frames in a circular buffer for later processing (e.g. writing them out to a movie using AVAssetWriter), but I require a lower resolution.

It seems that the only image size supported for high-frame-rate recording is 1280x720. Putting multiple images at this resolution into the frame buffer creates memory pressure, so I'm looking to capture a lower-resolution image directly from didOutputSampleBuffer, if that is at all possible, to save memory and to keep up with the frame rate.
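
For a rough sense of scale: assuming 32-bit BGRA pixels, a single 1280x720 frame is about 1280 × 720 × 4 ≈ 3.7 MB, so one second of 240 fps capture is on the order of 880 MB, whereas a 640x480 frame is only about 1.2 MB.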

Thank you for your assistance.


Solution

  • // Core Image performs all image operations (crop, transform, ...) on the GPU
    // requires the CoreImage, CoreVideo, VideoToolbox and OpenGLES frameworks
    
    // --- create once ---
    EAGLContext *glCtx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    CIContext *ciContext = [CIContext contextWithEAGLContext:glCtx options:@{kCIContextWorkingColorSpace:[NSNull null]}];
    // rendering into a plain device RGB color space (no color management) is roughly 3x faster
    CGColorSpaceRef ciContextColorSpace = CGColorSpaceCreateDeviceRGB();
    OSType cvPixelFormat = kCVPixelFormatType_32BGRA;
    
    // create the compression session (outputResolution is the desired output size, e.g. 640x480)
    VTCompressionSessionRef compressionSession;
    NSDictionary* pixelBufferOptions = @{(__bridge NSString*) kCVPixelBufferPixelFormatTypeKey:@(cvPixelFormat),
                                         (__bridge NSString*) kCVPixelBufferWidthKey:@(outputResolution.width),
                                         (__bridge NSString*) kCVPixelBufferHeightKey:@(outputResolution.height),
                                         (__bridge NSString*) kCVPixelBufferOpenGLESCompatibilityKey : @YES,
                                         (__bridge NSString*) kCVPixelBufferIOSurfacePropertiesKey : @{}};
    
    OSStatus ret = VTCompressionSessionCreate(kCFAllocatorDefault,
                                              outputResolution.width,
                                              outputResolution.height,
                                              kCMVideoCodecType_H264,
                                              NULL,
                                              (__bridge CFDictionaryRef)pixelBufferOptions,
                                              NULL,
                                              VTEncoderOutputCallback,
                                              (__bridge void*)self,
                                              &compressionSession);
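    
    // (not in the original answer) optionally hint the encoder that this is a
    // real-time capture and what frame rate to expect, then prepare the session
    // before the first frame arrives
    VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_ExpectedFrameRate, (__bridge CFTypeRef)@240);
    VTCompressionSessionPrepareToEncodeFrames(compressionSession);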
    
    CVPixelBufferRef finishPixelBuffer;
    // this uses the VTCompressionSession's pixel buffer pool; an AVAssetWriterInputPixelBufferAdaptor pool works as well
    CVReturn res = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, VTCompressionSessionGetPixelBufferPool(compressionSession), &finishPixelBuffer);
    // -------------------
    
    // ------ scale ------
    // a new sample buffer arrives in the delegate callback:
    // - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
    
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    
    CIImage *baseImg = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGFloat outHeight = 240;
    CGFloat scale = outHeight / (CGFloat)CVPixelBufferGetHeight(pixelBuffer);
    CGAffineTransform transform = CGAffineTransformMakeScale(scale, scale);
    
    // the transform is recorded lazily; the CIImage is not actually rendered yet
    CIImage *resultImg = [baseImg imageByApplyingTransform:transform];
    // resultImg = [resultImg imageByCroppingToRect:...];
    
    // CIContext applies transform to CIImage and draws to finish buffer
    [ciContext render:resultImg toCVPixelBuffer:finishPixelBuffer bounds:resultImg.extent colorSpace:ciContextColorSpace];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    
    // alternatively, wrap finishPixelBuffer via CMSampleBufferCreateForImageBuffer(...) and append it with [videoInput appendSampleBuffer:...]
    VTCompressionSessionEncodeFrame(compressionSession, finishPixelBuffer, CMSampleBufferGetPresentationTimeStamp(sampleBuffer), CMSampleBufferGetDuration(sampleBuffer), NULL, sampleBuffer, NULL);
    // -------------------
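
  • The VTEncoderOutputCallback referenced above is not shown in the original code. A minimal sketch of what it might look like, assuming a hypothetical recorder object (MyRecorder/writerInput are illustrative names) holding an AVAssetWriterInput created with nil outputSettings so the already-encoded H.264 samples can be appended as-is:
    
    // called by VideoToolbox with each encoded frame
    static void VTEncoderOutputCallback(void *outputCallbackRefCon,
                                        void *sourceFrameRefCon,
                                        OSStatus status,
                                        VTEncodeInfoFlags infoFlags,
                                        CMSampleBufferRef sampleBuffer)
    {
        if (status != noErr || sampleBuffer == NULL) {
            return; // encoding failed or the frame was dropped
        }
    
        // 'MyRecorder' and 'writerInput' are hypothetical names for illustration
        MyRecorder *recorder = (__bridge MyRecorder *)outputCallbackRefCon;
        if (recorder.writerInput.readyForMoreMediaData) {
            [recorder.writerInput appendSampleBuffer:sampleBuffer];
        }
    }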