Tags: objective-c, calayer, webcam, cgimageref, cmsamplebufferref

Converting CVImageBufferRef YUV 420 to cv::Mat RGB and displaying it in a CALayer?


Given that I can't successfully get anything other than YCbCr 420 from the camera (https://stackoverflow.com/questions/19673770/objective-c-avcapturevideodataoutput-videosettings-how-to-identify-which-pix).

So, my goal is to display this CMSampleBufferRef (YCbCr 420) in a CALayer as a color frame (a CGImageRef using the RGB color model), after processing it with OpenCV.


Solution

  • In the camera capture file I put this:

    // Clamp an int into [0, 255]; the argument is parenthesized so that
    // expressions like (x >> 8) expand safely, and there is no stray
    // trailing semicolon in the macro.
    #define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))
    
    // BT.601 video-range YCbCr -> RGB using the integer approximation
    // from Wikipedia. Both Mats are CV_8UC4; the 4th channel is never
    // written and is later skipped via kCGImageAlphaNoneSkipLast.
    cv::Mat* YUV2RGB(cv::Mat *src){
        cv::Mat *output = new cv::Mat(src->rows, src->cols, CV_8UC4);
        for(int i=0;i<output->rows;i++)
            for(int j=0;j<output->cols;j++){
                int c = src->data[i*src->cols*src->channels() + j*src->channels() + 0] - 16;   // Y
                int d = src->data[i*src->cols*src->channels() + j*src->channels() + 1] - 128;  // Cb
                int e = src->data[i*src->cols*src->channels() + j*src->channels() + 2] - 128;  // Cr

                output->data[i*src->cols*src->channels() + j*src->channels() + 0] = clamp((298*c+409*e+128)>>8);        // R
                output->data[i*src->cols*src->channels() + j*src->channels() + 1] = clamp((298*c-100*d-208*e+128)>>8);  // G
                output->data[i*src->cols*src->channels() + j*src->channels() + 2] = clamp((298*c+516*d+128)>>8);        // B
            }

        return output;
    }
    
        -(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    
    
                CVImageBufferRef imageBuffer =  CMSampleBufferGetImageBuffer(sampleBuffer);
                CVPixelBufferLockBaseAddress(imageBuffer, 0);
    
                size_t width = CVPixelBufferGetWidth(imageBuffer);
                size_t height = CVPixelBufferGetHeight(imageBuffer);
    
                uint8_t *baseAddress = (uint8_t*)CVPixelBufferGetBaseAddress(imageBuffer);
                // The plane descriptor at the start of the buffer stores its
                // fields big-endian, hence the EndianU32_BtoN calls below
                CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
    
                NSUInteger yOffset = EndianU32_BtoN(bufferInfo->componentInfoY.offset);
                NSUInteger yPitch = EndianU32_BtoN(bufferInfo->componentInfoY.rowBytes);
    
                NSUInteger cbCrOffset = EndianU32_BtoN(bufferInfo->componentInfoCbCr.offset);
                NSUInteger cbCrPitch = EndianU32_BtoN(bufferInfo->componentInfoCbCr.rowBytes);
    
                uint8_t *yBuffer = baseAddress + yOffset;
                uint8_t *cbCrBuffer = baseAddress + cbCrOffset;
    
                // 4 channels to match the output layout; only the first 3 are filled
                cv::Mat *src = new cv::Mat((int)(height), (int)(width), CV_8UC4);
    
                //YUV -> cv::Mat
    
                for(int i = 0; i < height; i++)
                {
                    uint8_t *yBufferLine = &yBuffer[i * yPitch];
                    // 4:2:0 subsampling: each chroma row covers two luma rows
                    uint8_t *cbCrBufferLine = &cbCrBuffer[(i >> 1) * cbCrPitch];

                    for(int j = 0; j < width; j++)
                    {
                        uint8_t y = yBufferLine[j];
                        // Cb and Cr are interleaved; columns 2k and 2k+1
                        // share the pair at indices (2k, 2k+1)
                        uint8_t cb = cbCrBufferLine[j & ~1];
                        uint8_t cr = cbCrBufferLine[j | 1];

                        src->data[i*width*src->channels() + j*src->channels() + 0] = y;
                        src->data[i*width*src->channels() + j*src->channels() + 1] = cb;
                        src->data[i*width*src->channels() + j*src->channels() + 2] = cr;
                    }
                }
    
    
                // Done reading the pixel buffer; balance the lock taken above
                CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

                cv::Mat *output = YUV2RGB(src);
    
                CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
                // 8 bits per component, 4 bytes per pixel; kCGImageAlphaNoneSkipLast
                // tells CoreGraphics to ignore the 4th byte of each pixel
                CGContextRef context = CGBitmapContextCreate(output->data, output->cols, output->rows, 8, output->step, rgbColorSpace, kCGImageAlphaNoneSkipLast);
                CGImageRef dstImage = CGBitmapContextCreateImage(context);

                dispatch_sync(dispatch_get_main_queue(), ^{
                    customPreviewLayer.contents = (__bridge id)dstImage;
                });

                CGImageRelease(dstImage);
                CGContextRelease(context);
                CGColorSpaceRelease(rgbColorSpace);
    
    
                // delete, not just release(): the Mat headers themselves were
                // allocated with new and would otherwise leak
                delete output;
                delete src;
    
            }
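
    The fixed-point BT.601 math inside `YUV2RGB` can be sanity-checked in plain C++, independent of OpenCV and CoreVideo. This is a minimal sketch; `ycbcrToRgb` and `clampByte` are names of my own, not part of the code above:

```cpp
#include <algorithm>
#include <cstdint>

// Same integer BT.601 approximation as YUV2RGB above, one pixel at a time.
// clampByte plays the role of the clamp() macro; out[0..2] receive R, G, B.
static inline int clampByte(int v) { return std::min(255, std::max(0, v)); }

void ycbcrToRgb(uint8_t y, uint8_t cb, uint8_t cr, uint8_t out[3]) {
    int c = y - 16;     // luma: video range starts at 16
    int d = cb - 128;   // blue-difference chroma, centered at 128
    int e = cr - 128;   // red-difference chroma, centered at 128
    out[0] = (uint8_t)clampByte((298 * c + 409 * e + 128) >> 8);            // R
    out[1] = (uint8_t)clampByte((298 * c - 100 * d - 208 * e + 128) >> 8);  // G
    out[2] = (uint8_t)clampByte((298 * c + 516 * d + 128) >> 8);            // B
}
```

    Video-range black (16, 128, 128) maps to (0, 0, 0) and white (235, 128, 128) to (255, 255, 255); without the clamp, intermediate values outside [0, 255] would wrap around when stored into a byte, which is exactly the corruption point 1 below refers to.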
    

    I ran into a few issues (I was really stuck) getting to this point, which I'll describe here; they might be someone else's problem as well:

    1. The clamp function is essential to ensure a correct YUV->RGB conversion.
    2. For some reason I couldn't keep just 3 channels in the image data; it somehow caused a problem when the image was about to be displayed. So I changed kCGImageAlphaNone to kCGImageAlphaNoneSkipLast in CGBitmapContextCreate, and I used 4 channels in the cv::Mat constructor.

    Part of this code was adapted from kCVPixelFormatType_420YpCbCr8BiPlanarFullRange frame to UIImage conversion.
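
One last detail on the sampling loop: in the bi-planar layout, the second plane has half as many rows as the luma plane and interleaves Cb with Cr, so each CbCr pair serves two adjacent luma columns. That is what the `(i >> 1)`, `j & ~1` and `j | 1` expressions encode; the index math can be checked in isolation (the helper names here are mine):

```cpp
// Index math from the sampling loop: in a bi-planar 4:2:0 buffer the
// CbCr plane has half the rows of the luma plane, and each interleaved
// CbCr pair is shared by two adjacent luma columns.
int chromaRow(int i) { return i >> 1; }  // two luma rows per chroma row
int cbIndex(int j)   { return j & ~1; }  // even slot of the shared pair
int crIndex(int j)   { return j | 1; }   // odd slot of the shared pair
```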