opengl, ffmpeg, libavcodec, avcodec

What is the best way to fill AVFrame.data?


I want to transfer OpenGL framebuffer data to AVCodec as fast as possible.

I've already converted RGB to YUV with a shader and read the result back with glReadPixels.

I still need to fill the AVFrame data manually, pixel by pixel. Is there a better way?

AVFrame *frame;   // allocated and configured elsewhere (full-resolution planar YUV)
uint8_t *data;    // packed Y,U,V triplets read back with glReadPixels

for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        int i = y * width + x;
        frame->data[0][y * frame->linesize[0] + x] = data[i * 3];     // Y
        frame->data[1][y * frame->linesize[1] + x] = data[i * 3 + 1]; // U
        frame->data[2][y * frame->linesize[2] + x] = data[i * 3 + 2]; // V
    }
}
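
The readback step itself looks roughly like this (a sketch: it assumes the shader writes Y, U and V into the R, G and B channels, and that width and height are the framebuffer dimensions):

#include <GL/gl.h>
#include <vector>

// Read the YUV-rendered framebuffer back into a tightly packed buffer.
// GL_PACK_ALIGNMENT = 1 removes row padding so the data[i*3 + c] indexing
// above works. Note that OpenGL returns rows bottom-to-top.
std::vector<uint8_t> readbackYUV(int width, int height)
{
    std::vector<uint8_t> data(static_cast<size_t>(width) * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data.data());
    return data;
}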

Solution

  • You can use sws_scale.

    In fact, you don't need shaders to convert RGB to YUV. Believe me, the performance difference won't be significant.

    // Convert RGBA (as read back with glReadPixels) to the codec's pixel format, typically AV_PIX_FMT_YUV420P
    swsContext = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_RGBA, WIDTH, HEIGHT, codecContext->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
    sws_scale(swsContext, (const uint8_t * const *)sourcePictureRGB.data, sourcePictureRGB.linesize, 0, codecContext->height, destinyPictureYUV.data, destinyPictureYUV.linesize);
    

    The data in destinyPictureYUV will be ready to go to the codec.

    In this sample, destinyPictureYUV holds the buffers for the AVFrame you want to fill. Try to set it up like this:

    AVFrame *frame = av_frame_alloc();
    AVPicture destinyPictureYUV;

    avpicture_alloc(&destinyPictureYUV, codecContext->pix_fmt,
                    codecContext->width, codecContext->height);

    // THIS is what you want probably: copy the picture's data/linesize pointers
    // into the frame (AVFrame begins with the same data/linesize fields as AVPicture)
    *reinterpret_cast<AVPicture *>(frame) = destinyPictureYUV;

    // Encoders also expect the frame geometry and format to be filled in
    frame->format = codecContext->pix_fmt;
    frame->width  = codecContext->width;
    frame->height = codecContext->height;

    With this setup you can also fill the frame directly with the data you already converted to YUV on the GPU, if you prefer; choose whichever path fits your pipeline. Both variants are sketched below.
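
    For completeness, here is a minimal end-to-end sketch of the sws_scale route, assuming the framebuffer is read back as plain RGBA and converted to AV_PIX_FMT_YUV420P (the function name, the rgba buffer and the choice of 420 are assumptions, not part of the original code):

    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>
    #include <libswscale/swscale.h>
    }

    // Convert a tightly packed RGBA readback buffer into a freshly allocated
    // YUV420P AVFrame. The caller owns the returned frame (av_frame_free).
    AVFrame *rgbaToYuvFrame(const uint8_t *rgba, int width, int height)
    {
        AVFrame *frame = av_frame_alloc();
        frame->format = AV_PIX_FMT_YUV420P;
        frame->width  = width;
        frame->height = height;
        av_frame_get_buffer(frame, 0);              // allocates data[] / linesize[]

        SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                         width, height, AV_PIX_FMT_YUV420P,
                                         SWS_BICUBIC, nullptr, nullptr, nullptr);

        // sws_scale wants per-plane pointers and strides; packed RGBA is one plane
        const uint8_t *srcData[1] = { rgba };
        const int srcLinesize[1]  = { 4 * width };
        sws_scale(sws, srcData, srcLinesize, 0, height,
                  frame->data, frame->linesize);

        sws_freeContext(sws);
        return frame;
    }

    Keep in mind that glReadPixels returns rows bottom-to-top, so either flip while rendering, or use the common trick of pointing srcData[0] at the last row and passing a negative srcLinesize.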
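
    And a sketch of the direct-fill variant: if you keep the GPU conversion and can read each plane back into its own tightly packed buffer (a hypothetical setup, e.g. separate render targets for Y, U and V), filling a full-resolution planar frame such as YUV444P is just a row-wise copy that honours linesize, which av_image_copy_plane does for you:

    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    }

    // Copy three tightly packed, full-resolution planes into a YUV444P frame.
    // av_image_copy_plane copies row by row, respecting the destination linesize
    // (which may include padding), so no per-pixel loop is needed.
    void fillFrameFromPlanes(AVFrame *frame, const uint8_t *y, const uint8_t *u,
                             const uint8_t *v, int width, int height)
    {
        av_image_copy_plane(frame->data[0], frame->linesize[0], y, width, width, height);
        av_image_copy_plane(frame->data[1], frame->linesize[1], u, width, width, height);
        av_image_copy_plane(frame->data[2], frame->linesize[2], v, width, width, height);
    }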