The setup is as follows:
The app:
I am connecting ffmpeg to the UDP stream just fine, and the packets are being buffered and seemingly decoded fine. There are plenty of packet buffers, with no under- or over-flows. The problem I am facing is that playback appears choppy, as if only one out of every so many frames is rendered. I understand that I need to distinguish I/P/B frames, but at the moment, hands up, I ain't got a clue. I've even tried a hack to detect I-frames, to no avail. On top of that, I am only rendering the frames to less than a quarter of the screen, so I'm not using full-screen decoding.
The decoded frames are also stored in separate buffers to cut out page tearing. I've varied the number of buffers from 1 to 10, with no luck.
From what I've found about OpenMAX IL, it only handles MPEG2-TS Part 3 (H.264 and AAC), but you can add your own decoder component to it. Would it be worth me trying this route, or should I continue on with ffmpeg?
The frame decoder (only the renderer will convert and scale the frames when ready):

    /*
     * This function will run through the packets and keep decoding
     * until a frame is ready first, or we run out of packets
     */
    while (packetsUsed[decCurrent])
    {
    hack_for_i_frame:
        i = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packets[decCurrent]);
        packetsUsed[decCurrent] = 0; // finished with this one
        i = packets[decCurrent].flags & 0x0001; // 0x0001 == AV_PKT_FLAG_KEY
        decCurrent++;
        if (decCurrent >= MAXPACKETS) decCurrent = 0;

        if (frameFinished)
        {
            ready_pFrame = pFrame;
            frameReady = true; // notify renderer
            frameCounter++;
            if (frameCounter >= MAXFRAMES) frameCounter = 0;
            pFrame = pFrames[frameCounter];
            return 0;
        }
        else if (i)
            goto hack_for_i_frame;
    }

    return 0;
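For what it's worth, `avcodec_decode_video2` has since been deprecated in favour of FFmpeg's send/receive decode API (`avcodec_send_packet` / `avcodec_receive_frame`, available since FFmpeg 3.1), which decouples packet input from frame output and removes any need for the I-frame goto: you feed packets in and drain whatever frames are ready. A minimal sketch, where `on_frame` is a hypothetical callback standing in for handing a finished frame to the renderer:

```c
#include <libavcodec/avcodec.h>
#include <libavutil/error.h>

// Feed one packet to the decoder and drain all frames it produces.
// Returns 0 on success (including "needs more input"), a negative
// AVERROR code on a real decode error.
static int decode_packet(AVCodecContext *ctx, AVPacket *pkt, AVFrame *frame,
                         void (*on_frame)(AVFrame *))
{
    int ret = avcodec_send_packet(ctx, pkt);
    if (ret < 0)
        return ret;

    // One packet may yield zero, one, or (after flushing) several frames.
    while ((ret = avcodec_receive_frame(ctx, frame)) >= 0) {
        on_frame(frame);       // hand the finished frame to the renderer
        av_frame_unref(frame); // release the frame's buffers for reuse
    }

    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        return 0;              // decoder simply needs more input / is done
    return ret;                // genuine error
}
```

The decoder handles I/P/B reordering internally here; frames come out of `avcodec_receive_frame` in presentation order, so no frame-type detection is needed on your side.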
The packet reader (spawned as a pthread):

    void *mainPacketReader(void *voidptr)
    {
        int res;

        while (threadState == TS_RUNNING)
        {
            if (packetsUsed[prCurrent])
            {
                LOGE("Packet buffer overflow, dropping packet...");
                av_read_frame(pFormatCtx, &packet);
            }
            else if (av_read_frame(pFormatCtx, &packets[prCurrent]) >= 0)
            {
                if (packets[prCurrent].stream_index == videoStream)
                {
                    packetsUsed[prCurrent] = 1; // flag as used
                    prCurrent++;
                    if (prCurrent >= MAXPACKETS)
                    {
                        prCurrent = 0;
                    }
                }
                // here check if the packet is audio and add to audio buffer
            }
        }

        return NULL;
    }
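One likely contributor to the choppiness is that `packetsUsed[]` is shared between the reader and decoder threads with no locking or memory barriers, so flag updates can be lost or observed out of order, and a full buffer silently drops packets. A mutex/condition-variable bounded queue avoids both the races and the drop path. A minimal sketch in plain C (`pkt_t` and the `queue_*` names are hypothetical stand-ins, not from the code above; in practice the payload would be an `AVPacket`):

```c
#include <pthread.h>

#define QCAP 8

typedef struct { int id; } pkt_t;   // stand-in for AVPacket

typedef struct {
    pkt_t items[QCAP];
    int head, tail, count;          // ring-buffer indices + fill level
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} pkt_queue;

static void queue_init(pkt_queue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

// Reader thread: blocks (instead of dropping) when the queue is full.
static void queue_push(pkt_queue *q, pkt_t p)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QCAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->items[q->tail] = p;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

// Decoder thread: blocks until a packet is available.
static pkt_t queue_pop(pkt_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    pkt_t p = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return p;
}
```

The mutex also acts as the memory barrier that the bare `packetsUsed[]` flags lack, so each thread always sees a consistent view of the ring buffer.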
And the renderer simply does this:

    // texture has already been bound before calling this function

    if (frameReady == false) return;

    AVFrame *temp; // set to the frame 'not' currently being decoded
    temp = ready_pFrame;

    sws_scale(sws_ctx, (uint8_t const * const *)temp->data,
              temp->linesize, 0, pCodecCtx->height,
              pFrameRGB->data, pFrameRGB->linesize);

    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    XPOS, YPOS, WID, HGT,
                    GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    frameReady = false;
In the past, libvlc had audio syncing problems too, which is why I decided to go with ffmpeg and do all the donkey work from scratch.
If anybody has any pointers on how to stop the choppiness of the video playback (it works great in VLC player), or possibly another route to go down, it would be seriously appreciated.
EDIT: I removed the I-frame hack (completely useless) and moved the sws_scale call from the renderer to the packet decoder. I left the UDP packet reader thread alone.
In the meantime I've also changed the packet reader and packet decoder threads' priority to real-time. Since doing that, I don't get shedloads of dropped packets.
(after finally figuring out where the answer button was)

The I-frame hack was completely useless, and to lighten the load on the renderer thread the sws_scale call was moved into the decoder thread.
I've also moved on from this by completely getting rid of sws_scale, uploading the individual Y, U and V planes of each frame to the GPU, and using a fragment shader to convert to RGB.
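The per-plane upload can be sketched roughly as follows (`texY`/`texU`/`texV`, `width` and `height` are hypothetical names, not from my code; assumes YUV420p, where the U and V planes are half the Y plane's size in each dimension, and that `frame->linesize[i]` equals each plane's width):

```c
#include <GLES2/gl2.h>
#include <libavutil/frame.h>

// Upload the three planes of a decoded YUV420p frame as single-channel
// textures on units 0/1/2, matching the qt_TextureY/U/V samplers below.
// GL_LUMINANCE is the GLES2 single-channel format; use GL_RED on GLES3+.
static void uploadYUVFrame(const AVFrame *frame,
                           GLuint texY, GLuint texU, GLuint texV,
                           int width, int height)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // plane rows need not be 4-byte aligned

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texY);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, frame->data[0]);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texU);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, frame->data[1]);

    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, texV);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, frame->data[2]);
}
```

If `frame->linesize[0]` is wider than the picture (decoders often pad rows), either upload row by row or repack the plane tightly first, since GLES2 has no GL_UNPACK_ROW_LENGTH.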
For anyone interested, here is the shader for converting YUV to RGB; it is very simple:
Vertex shader:

    attribute vec4 qt_Vertex;
    attribute vec2 qt_InUVCoords;
    attribute vec4 qt_InColor;

    uniform mat4 qt_OrthoMatrix;

    varying vec2 qt_TexCoord0;
    varying vec4 qt_OutColor;

    void main(void)
    {
        gl_Position = qt_OrthoMatrix * qt_Vertex;
        qt_TexCoord0 = qt_InUVCoords;
        qt_OutColor = qt_InColor;
    }
Fragment shader:

    precision mediump float;

    uniform sampler2D qt_TextureY;
    uniform sampler2D qt_TextureU;
    uniform sampler2D qt_TextureV;

    varying vec4 qt_OutColor;
    varying vec2 qt_TexCoord0;

    // full-range BT.601 YUV -> RGB coefficients
    const float num1 = 1.403; // V coefficient for R
    const float num2 = 0.344; // U coefficient for G
    const float num3 = 0.714; // V coefficient for G
    const float num4 = 1.770; // U coefficient for B
    const float num5 = 1.0;
    const float half1 = 0.5;

    void main(void)
    {
        float y = texture2D(qt_TextureY, qt_TexCoord0).r;
        float u = texture2D(qt_TextureU, qt_TexCoord0).r - half1;
        float v = texture2D(qt_TextureV, qt_TexCoord0).r - half1;

        gl_FragColor = vec4(y + num1 * v,
                            y - num2 * u - num3 * v,
                            y + num4 * u, num5) * qt_OutColor;
    }
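As a sanity check, the same conversion can be written on the CPU. The shader's constants are the usual full-range BT.601 ones: R = Y + 1.403*V', G = Y - 0.344*U' - 0.714*V', B = Y + 1.770*U', with U' = U - 0.5 and V' = V - 0.5. A small reference function (`yuv_to_rgb` and `rgb_t` are my names, not from the shader):

```c
#include <math.h>

typedef struct { float r, g, b; } rgb_t;

// y, u, v are raw texture samples in [0,1]; the 0.5 centering that the
// shader does per-fragment is done here on the chroma inputs.
static rgb_t yuv_to_rgb(float y, float u, float v)
{
    u -= 0.5f;
    v -= 0.5f;
    rgb_t c;
    c.r = y + 1.403f * v;
    c.g = y - 0.344f * u - 0.714f * v;
    c.b = y + 1.770f * u;
    return c;
}
```

A quick check of the math: neutral chroma (u = v = 0.5) must give a pure grey, r = g = b = y, which falls straight out of the formulas since every chroma term vanishes.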