We're using avcodec to decode H.264, and in some circumstances, after changing the resolution, avcodec gets confused, and gives two different sizes for the decoded frame:
if (av_init_packet_dll)
    av_init_packet_dll(&avpkt);
avpkt.data = pBuffer;
avpkt.size = lBuffer;

// Make sure the output frame has NULLs for the data lines
pAVFrame->data[0] = NULL;
pAVFrame->data[1] = NULL;
pAVFrame->data[2] = NULL;
pAVFrame->data[3] = NULL;

res = avcodec_decode_video2_dll(pCodecCtx, pAVFrame, &FrameFinished, &avpkt);
DEBUG_LOG("Decoded frame: %d, %d, resulting dimensions: context: %dx%d, frame: %dx%d\n",
          res, FrameFinished, pCodecCtx->width, pCodecCtx->height,
          pAVFrame->width, pAVFrame->height);

if (pCodecCtx->width != pAVFrame->width || pCodecCtx->height != pAVFrame->height) {
    OutputDebugStringA("Size mismatch, ignoring frame!\n");
    FrameFinished = 0;
}
if (FrameFinished == 0)
    OutputDebugStringA("Unfinished frame\n");
This results in this log (with some surrounding lines):
[5392] Decoded frame: 18690, 1, resulting dimensions: context: 640x480, frame: 640x480
[5392] Set dimensions to 640x480 in DecodeFromMap
[5392] checking size 640x480 against 640x480
[5392] Drawing 640x480, 640x480, 640x480, 0x05DB0060, 0x05DFB5C0, 0x05E0E360, 0x280, to surface 0x03198100, 1280x800
[5392] Drawing 640x480, 640x480, 640x480, 0x05DB0060, 0x05DFB5C0, 0x05E0E360, 0x280, to surface 0x03198100, 1280x800
[5392] Delayed frames seen. Reenabling low delay requires a codec flush.
[5392] Reinit context to 1280x800, pix_fmt: yuvj420p
*[5392] Decoded frame: 54363, 1, resulting dimensions: context: 1280x800, frame: 640x480
[5392] Set dimensions to 1280x800 in DecodeFromMap
[5392] checking size 1280x800 against 640x480
[5392] Found adapter NVIDIA GeForce GTX 650 ({D7B71E3E-4C86-11CF-4E68-7E291CC2C435}) on monitor 00020003
[5392] Found adapter NVIDIA GeForce GTX 650 ({D7B71E3E-4C86-11CF-4E68-7E291CC2C435}) on monitor FA650589
[5392] Creating Direct3D interface on adapter 1 at 1280x800 window 0015050C
[5392] Direct3D created using hardware vertex processing on HAL.
[5392] Creating D3D surface of 1280x800
[5392] Result 0x00000000, got surface 0x03210C40
[5392] Drawing 1280x800, 1280x800, 640x480, 0x02E3B0A0, 0x02E86600, 0x02E993A0, 0x280, to surface 0x03210C40, 1280x800
The line where this breaks is marked with a *: pAVFrame contains the old frame dimensions, while pCodecCtx contains the new dimensions. When the drawing code then tries to access the data as a 1280x800 image, it hits an access violation.

When going down a size, avcodec transitions correctly: it sets FrameFinished to 0 and leaves the pAVFrame resolution at 0x0.
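A stricter guard that trusts only the returned frame (a sketch using the same variables as the snippet above, not a final fix) would be:

if (res >= 0 && FrameFinished &&
    pAVFrame->width > 0 && pAVFrame->height > 0) {
    // Safe to draw; size the destination from pAVFrame->width/height,
    // not from pCodecCtx->width/height.
}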
Does anyone know what is causing this, why avcodec reports success yet does not actually produce a new frame, and what I can do to resolve this correctly? For now, the mismatch check is protecting against it.
The avcodec in use is built from git-5cba529 by Zeranoe.
FFmpeg version: 2015-03-31 git-5cba529
libavutil 54. 21.100 / 54. 21.100
libavcodec 56. 32.100 / 56. 32.100
AVCodecContext.width/height is not guaranteed to be identical to AVFrame.width/height. For any practical purpose, use AVFrame.width/height.
AVCodecContext.width/height is the size of the current state of the decoder, which may be several frames ahead of the AVFrame being returned to the user. Example: assume a display sequence of IBPBP in any MPEG-style codec, which is coded as IPBPB, and assume the stream is scalable, so each frame has a different size. When a P-frame is consumed, it is not returned right away; an earlier frame is returned instead. Here, when P1 is decoded nothing is returned, when B1 is decoded it is returned (before P1), and when P2 is decoded P1 is returned. Since each P has a different size, while you are decoding P2 the decoder is handing P1 back to the user, so AVCodecContext.width/height and AVFrame.width/height differ (one reflects P2, the other P1). Frame-level multithreading is another case where this happens.
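Laid out as a timeline (following the example above; P2 only comes back on a later packet, or when the decoder is flushed):

Packet fed in:    P1   B1   P2   B2
Frame returned:   -    B1   P1   B2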
In all cases, rely on AVFrame.width/height, and ignore AVCodecContext.width/height.
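A minimal sketch of such a decode path against the libavcodec 56-era API used above, where every sizing decision comes from the returned frame (draw_frame is a hypothetical stand-in for the surface-drawing code; error handling trimmed):

#include <libavcodec/avcodec.h>

/* Hypothetical drawing helper standing in for the question's surface code. */
void draw_frame(uint8_t *const data[], const int linesize[], int w, int h);

void decode_and_draw(AVCodecContext *ctx, AVPacket *pkt, AVFrame *frame)
{
    int got_frame = 0;
    int res = avcodec_decode_video2(ctx, frame, &got_frame, pkt);
    if (res >= 0 && got_frame) {
        /* frame->width/height describe the picture actually being returned;
         * ctx->width/height may already reflect a later picture that is
         * still inside the decoder. */
        draw_frame(frame->data, frame->linesize, frame->width, frame->height);
    }
}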