iphone ios opengl-es opengl-es-2.0

Faster alternative to glReadPixels in iPhone OpenGL ES 2.0


Is there any faster way to access the frame buffer than using glReadPixels? I need read-only access to a small rectangular area of the frame buffer so I can process the data further on the CPU. Performance is important because I have to perform this operation repeatedly. I have searched the web and found approaches like using a Pixel Buffer Object with glMapBuffer, but it seems that OpenGL ES 2.0 does not support them.
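
For context, the read-back I'm doing today is essentially the sketch below (the framebuffer handle and the rectangle are placeholders); it is this glReadPixels() call that I'd like to replace:

    // Current approach: stall the pipeline and copy a small rectangle back to the CPU.
    // "defaultFramebuffer" and the region of interest are placeholders.
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

    GLint x = 0, y = 0, width = 64, height = 64;              // small region of interest
    GLubyte *pixels = (GLubyte *)malloc(width * height * 4);  // 4 bytes per RGBA pixel

    // GL_RGBA / GL_UNSIGNED_BYTE is the combination OpenGL ES 2.0 guarantees to support;
    // the call blocks until all rendering into the framebuffer has finished.
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // ... process the pixels on the CPU ...
    free(pixels);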


Solution

  • As of iOS 5.0, there is now a faster way to grab data from OpenGL ES. It isn't readily apparent, but it turns out that the texture cache support added in iOS 5.0 doesn't just work for fast upload of camera frames to OpenGL ES; it can also be used in reverse to get quick access to the raw pixels within an OpenGL ES texture.

    You can take advantage of this to grab the pixels for an OpenGL ES rendering by using a framebuffer object (FBO) with an attached texture, with that texture having been supplied from the texture cache. Once you render your scene into that FBO, the BGRA pixels for that scene will be contained within your CVPixelBufferRef, so there will be no need to pull them down using glReadPixels().

    This is much, much faster than using glReadPixels() in my benchmarks. I found that on my iPhone 4, glReadPixels() was the bottleneck when reading back 720p video frames for encoding to disk: it limited encoding to no more than 8-9 FPS. Replacing it with these fast texture cache reads lets me encode 720p video at 20 FPS, and the bottleneck has moved from the pixel reading to the OpenGL ES processing and actual movie encoding parts of the pipeline. On an iPhone 4S, this allows you to write 1080p video at a full 30 FPS.

    My implementation can be found within the GPUImageMovieWriter class within my open source GPUImage framework, but it was inspired by Dennis Muhlestein's article on the subject and Apple's ChromaKey sample application (which was only made available at WWDC 2011).

    I start by configuring my AVAssetWriter, adding an input, and configuring a pixel buffer input. The following code is used to set up the pixel buffer input:

    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                           [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                           [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                           nil];
    
    assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
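
    For completeness, the assetWriterVideoInput referenced above (and the AVAssetWriter it belongs to) are created beforehand, roughly along the lines of this sketch; the movie URL and codec settings here are illustrative placeholders, not the exact GPUImage code:

    NSError *error = nil;

    // Illustrative writer setup; movieURL and the output settings are placeholders.
    assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    AVVideoCodecH264, AVVideoCodecKey,
                                    [NSNumber numberWithInt:videoSize.width], AVVideoWidthKey,
                                    [NSNumber numberWithInt:videoSize.height], AVVideoHeightKey,
                                    nil];

    assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:outputSettings];
    assetWriterVideoInput.expectsMediaDataInRealTime = YES;

    [assetWriter addInput:assetWriterVideoInput];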
    

    Once I have that, I configure the FBO that I'll be rendering my video frames to, using the following code:

    if ([GPUImageOpenGLESContext supportsFastTextureUpload])
    {
        CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &coreVideoTextureCache);
        if (err) 
        {
            NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
        }
    
        CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
    
        CVOpenGLESTextureRef renderTexture;
        CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, coreVideoTextureCache, renderTarget,
                                                      NULL, // texture attributes
                                                      GL_TEXTURE_2D,
                                                      GL_RGBA, // opengl format
                                                      (int)videoSize.width,
                                                      (int)videoSize.height,
                                                      GL_BGRA, // native iOS format
                                                      GL_UNSIGNED_BYTE,
                                                      0,
                                                      &renderTexture);
    
        glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
    }
    

    This pulls a pixel buffer from the pool associated with my asset writer input, creates and associates a texture with it, and uses that texture as a target for my FBO.
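
    One thing the snippet above assumes is that the framebuffer object itself has already been generated and bound; glFramebufferTexture2D() attaches the texture to whichever FBO is currently bound. If you're wiring this up yourself, the standard OpenGL ES 2.0 setup looks something like the sketch below ("movieFramebuffer" is just a placeholder name), and a completeness check after attaching the texture is a cheap way to catch mistakes:

    GLuint movieFramebuffer;
    glGenFramebuffers(1, &movieFramebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, movieFramebuffer);

    // ... create the texture from the texture cache and attach it, as shown above ...

    // Verify that the texture-backed FBO is actually usable as a render target.
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete framebuffer: %d", status);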

    Once I've rendered a frame, I lock the base address of the pixel buffer:

    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    

    and then simply feed it into my asset writer to be encoded:

    CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);
    
    if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) 
    {
        NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
    } 
    else 
    {
    //        NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    
    if (![GPUImageOpenGLESContext supportsFastTextureUpload])
    {
        CVPixelBufferRelease(pixel_buffer);
    }
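
    One detail that's easy to miss (not shown above): the GPU needs to have finished rendering into the texture-backed FBO before the pixel buffer is locked and handed to the encoder, because there is no glReadPixels() call to act as an implicit synchronization point. A simple, if heavy-handed, way to guarantee this is a glFinish() call just before the lock shown earlier:

    // Ensure all rendering into the FBO has completed before the CPU/encoder
    // touches the pixel buffer. glFinish() blocks until the GPU is done.
    glFinish();

    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    // ... append to the asset writer as shown above ...
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);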
    

    Note that at no point here am I reading anything manually. Also, the textures are natively in BGRA format, which is what AVAssetWriters are optimized to use when encoding video, so there's no need to do any color swizzling here. The raw BGRA pixels are just fed into the encoder to make the movie.

    Aside from the use of this in an AVAssetWriter, I have some code in this answer that I've used for raw pixel extraction. It also experiences a significant speedup in practice when compared to using glReadPixels(), although less than I see with the pixel buffer pool I use with AVAssetWriter.
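
    If all you want is CPU access to the raw bytes (rather than feeding them to an asset writer), the same idea reduces to locking the texture-backed pixel buffer and reading from its base address; a minimal sketch, using the renderTarget buffer from above, looks like this:

    // The rendered BGRA pixels already live in the CVPixelBufferRef backing the
    // FBO texture, so no glReadPixels() is needed; just lock and read.
    glFinish();  // make sure the GPU has finished rendering into the FBO
    CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);

    GLubyte *rawPixels  = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
    size_t bytesPerRow  = CVPixelBufferGetBytesPerRow(renderTarget);  // rows may be padded
    size_t bufferHeight = CVPixelBufferGetHeight(renderTarget);

    // rawPixels points at bufferHeight rows of BGRA data, bytesPerRow apart;
    // process the rectangle of interest on the CPU here.

    CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);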

    It's a shame that none of this is documented anywhere, because it provides a huge boost to video capture performance.