Tags: c, gstreamer, tflite, imx8

GStreamer pipeline on the iMX8 platform: `lost frames detected` issue


I have developed a custom GStreamer element, infer, that runs a neural network model. It takes approximately 100 ms to process each frame, so I expect a frame rate of around 10 fps. However, the pipeline reports lost frames and the effective frame rate drops to about 1 fps. The warning message is:

0:00:47.405558285 4623 0xaaab095005e0 WARN v4l2src gstv4l2src.c:1352:gst_v4l2src_create:<v4l2src0> lost frames detected: count = 21 - ts: 0:00:46.169338790

Here is the pipeline configuration I am using:

webrtcbin bundle-policy=max-bundle latency=0 name=sendrecv \
v4l2src ! capsfilter caps="video/x-raw, width=1920, height=1080, framerate=25/1" ! videoconvert ! infer use_npu=True type=yolov5 model=yolov5n.tflite labels=yolov5n.txt ! videoconvert ! queue max-size-buffers=1 ! x264enc bitrate=5000 speed-preset=superfast tune=zerolatency ! video/x-h264, stream-format=byte-stream ! rtph264pay config-interval=1 ! application/x-rtp,media=video,encoding-name=H264,payload=127 ! sendrecv.

I suspect that the frames are being processed too slowly, causing a backlog that results in frame drops. I want to drop frames before they reach the infer element if it is still busy processing the previous frame. How can I achieve this, or is there another solution?

P.S. I call gst_bin_recalculate_latency() when I receive a GST_MESSAGE_LATENCY message and set the pipeline latency to 250, but it doesn't help.
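
In code, that handling looks roughly like this. This is a minimal sketch assuming a bus watch on the pipeline and assuming the 250 means milliseconds; the function name and the pipeline variable are illustrative, not the actual application code:

#include <gst/gst.h>

/* Sketch: react to GST_MESSAGE_LATENCY by redistributing latency and
 * pinning the overall pipeline latency to 250 ms (assuming the 250 in
 * the question means milliseconds). "pipeline" is assumed to be the
 * top-level GstElement, e.g. the result of gst_parse_launch(). */
static gboolean
bus_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_LATENCY) {
    gst_bin_recalculate_latency (GST_BIN (pipeline));
    gst_pipeline_set_latency (GST_PIPELINE (pipeline), 250 * GST_MSECOND);
  }
  return TRUE;
}

/* Installed with something like:
 *   GstBus *bus = gst_element_get_bus (pipeline);
 *   gst_bus_add_watch (bus, bus_cb, pipeline);
 *   gst_object_unref (bus);
 */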


Solution

  • To drop frames from a queue, you can set the queue's leaky property.

    By default it is set to leaky=0 (no), which never drops frames.

    Set leaky=1 (upstream) to drop new incoming frames when the queue is full.

    Set leaky=2 (downstream) to drop the oldest frames already in the queue.

    Now, in your pipeline, the queue is placed after the infer element. If you want to drop frames before infer, add queue max-size-buffers=1 leaky=downstream in front of it:

    webrtcbin bundle-policy=max-bundle latency=0 name=sendrecv \
    v4l2src ! capsfilter caps="video/x-raw, width=1920, height=1080, framerate=25/1" ! videoconvert ! queue max-size-buffers=1 leaky=downstream ! infer use_npu=True type=yolov5 model=yolov5n.tflite labels=yolov5n.txt ! videoconvert ! queue ! x264enc bitrate=5000 speed-preset=superfast tune=zerolatency ! video/x-h264, stream-format=byte-stream ! rtph264pay config-interval=1 ! application/x-rtp,media=video,encoding-name=H264,payload=127 ! sendrecv.
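
    If the pipeline is built from C code rather than with gst-launch-1.0, the same queue settings can be applied as element properties. A minimal sketch with illustrative names (this is not part of the original application):

    #include <gst/gst.h>

    /* Sketch: create a queue that holds at most one buffer and drops the
     * oldest queued buffer when a new one arrives, instead of blocking
     * upstream. The element name "pre_infer_queue" is illustrative. */
    static GstElement *
    make_leaky_queue (void)
    {
      GstElement *q = gst_element_factory_make ("queue", "pre_infer_queue");

      /* max-size-buffers=1: the queue counts as full with a single buffer. */
      g_object_set (q, "max-size-buffers", 1, NULL);

      /* leaky=downstream (2): drop the oldest buffer when the queue is full.
       * gst_util_set_object_arg() accepts the enum nick as a string. */
      gst_util_set_object_arg (G_OBJECT (q), "leaky", "downstream");

      return q;
    }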
    

    However, I think the real problem is this: yolo runs fast thanks to the NPU, but x264enc takes longer, so the queue in front of x264enc fills up. The infer element is probably not designed to stop processing when the queue after it is full, so it keeps producing buffers for that queue, which leads to the lost-frames situation.
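
    One way to check which element is actually the bottleneck is to count the buffers coming out of its src pad with a pad probe. A rough sketch (the lookup in the trailing comment assumes you give the element a name= in the pipeline string; none of this is from the original code):

    #include <gst/gst.h>

    /* Sketch: count buffers leaving a pad and print an approximate
     * per-second rate. Attach it to one pad at a time (the counters are
     * static only for brevity). */
    static GstPadProbeReturn
    count_buffers (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      static guint count = 0;
      static gint64 start = 0;
      gint64 now = g_get_monotonic_time ();   /* microseconds */

      if (start == 0)
        start = now;
      count++;

      if (now - start >= G_USEC_PER_SEC) {
        g_print ("%s: ~%u buffers/s\n",
                 GST_OBJECT_NAME (GST_OBJECT_PARENT (pad)), count);
        count = 0;
        start = now;
      }
      return GST_PAD_PROBE_OK;
    }

    /* Attach it, e.g., to the infer element's src pad:
     *   GstElement *el = gst_bin_get_by_name (GST_BIN (pipeline), "myinfer");
     *   GstPad *srcpad = gst_element_get_static_pad (el, "src");
     *   gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
     *                      count_buffers, NULL, NULL);
     */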

    So dropping frames before infer might not help with this problem; you would actually need to increase the encoding speed (by using a hardware-accelerated encoder; see the sketch after the pipeline below) or use a leaky queue in front of the x264enc element:

    webrtcbin bundle-policy=max-bundle latency=0 name=sendrecv \
    v4l2src ! capsfilter caps="video/x-raw, width=1920, height=1080, framerate=25/1" ! videoconvert ! queue ! infer use_npu=True type=yolov5 model=yolov5n.tflite labels=yolov5n.txt ! videoconvert ! queue max-size-buffers=1 leaky=downstream ! x264enc bitrate=5000 speed-preset=superfast tune=zerolatency ! video/x-h264, stream-format=byte-stream ! rtph264pay config-interval=1 ! application/x-rtp,media=video,encoding-name=H264,payload=127 ! sendrecv.
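
    For the hardware-encoder route, the exact element depends on the BSP; NXP i.MX8 images typically ship a VPU encoder such as vpuenc_h264 (verify with gst-inspect-1.0 on the target). A rough sketch of swapping it in for x264enc via gst_parse_launch(), with everything else kept from the pipeline above (the encoder element name and its default settings are assumptions):

    #include <gst/gst.h>

    /* Sketch: same pipeline as above, but with the BSP's VPU hardware
     * encoder in place of x264enc. "vpuenc_h264" is an assumption about
     * the NXP i.MX8 BSP; check the element name and its properties with
     * gst-inspect-1.0 before relying on it. */
    static GstElement *
    build_pipeline (GError **error)
    {
      return gst_parse_launch (
          "webrtcbin bundle-policy=max-bundle latency=0 name=sendrecv "
          "v4l2src ! video/x-raw,width=1920,height=1080,framerate=25/1 ! "
          "videoconvert ! queue ! "
          "infer use_npu=True type=yolov5 model=yolov5n.tflite labels=yolov5n.txt ! "
          "videoconvert ! queue max-size-buffers=1 leaky=downstream ! "
          "vpuenc_h264 ! video/x-h264,stream-format=byte-stream ! "
          "rtph264pay config-interval=1 ! "
          "application/x-rtp,media=video,encoding-name=H264,payload=127 ! "
          "sendrecv.",
          error);
    }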