python opencv nvidia-deepstream

Save frames extracted from Deepstream pipeline to video using OpenCV


I’m trying to save frames extracted from a Deepstream pipeline to a video file with OpenCV, but all I end up with is a 9 KB file.

This is my code (executed inside a probe function):

batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_copy = np.array(frame, copy=True, order='C')            
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)

The above code is executed each time the probe function is invoked. Images are saved to a queue:

frame_buffer.put(frame_copy)
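
For context, here is a minimal sketch of that setup, assuming frame_buffer is a standard queue.Queue (the threshold value is just illustrative):

import queue

# number of frames to buffer before writing the video (value is illustrative)
FRAME_RECORDING_THRESH = 300

# shared buffer filled by the probe and drained by the code that writes the video
frame_buffer = queue.Queue()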

After the required number of frames has been pushed into the queue, I use the code below to save the buffered frames to a video file:

codec = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('out.avi', codec, fps, (output_width, output_height))
out.write(frame_copy)                
total_frames = FRAME_RECORDING_THRESH 

while total_frames > 0:                
   frame = frame_buffer.get() 
   frame = cv2.resize(frame, (output_width, output_height), interpolation = cv2.INTER_LINEAR)  
   out.write(frame)
   total_frames -= 1
                
out.release()

Unfortunately, the file produced is not a valid video file. Is there something I am doing wrong in the above process? Any help would be greatly appreciated.

P.S. Just to test that the frames have been correctly stored inside the queue, if I attempt to save the frames as images inside the while loop:

cv2.imwrite(dest_folder + '/' + f'tmp{total_frames}.png', frame)

I get properly saved, valid PNG images.

P.S. 2: Frames already have a resolution of (output_width, output_height) at the time they are buffered. Also, applying cv2.resize before they are saved doesn't change anything.


Solution

  • I can't test it, but a common mistake is that people think the code

    out = cv2.VideoWriter('out.avi', codec, fps, (output_width, output_height))
    

    will automatically resize frames to (output_width, output_height), but it does not.

    You have to resize the frames manually.

    If you don't, the writer silently skips the mismatched frames and you end up with a broken file that contains no frames.

    while total_frames > 0:                
       frame = frame_buffer.get() 
    
       frame = cv2.resize(frame, (output_width, output_height))
    
       out.write(frame)
       total_frames -= 1
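
    To see the effect outside of DeepStream, here is a self-contained sketch (codec, fps and sizes are just example values): the frames are created at a smaller resolution than the writer was opened with, so they only end up in the file because of the explicit resize.

    import cv2
    import numpy as np

    fps = 30
    output_width, output_height = 640, 480   # size declared to the writer

    out = cv2.VideoWriter('test.avi', cv2.VideoWriter_fourcc(*'XVID'), fps, (output_width, output_height))

    for _ in range(60):
        # synthetic 320x240 BGR frame - deliberately smaller than the writer expects
        frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
        # without this resize the frame would be silently dropped and the file would stay tiny
        frame = cv2.resize(frame, (output_width, output_height))
        out.write(frame)

    out.release()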
    

    EDIT:

    It seems the problem can also be caused by the alpha (transparency) channel - the frames are RGBA, but video doesn't use the A channel.

    It needs to be removed.

    You can convert with cv2.COLOR_RGBA2BGR instead of cv2.COLOR_RGBA2BGRA:

    frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGR)
    

    Or you can drop the last channel from the numpy array:

    frame_copy = frame_copy[:, :, :3]
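
    Putting both fixes together, the writing part could look roughly like this (a sketch reusing the names from the question; frame_buffer, fps, output_width, output_height and FRAME_RECORDING_THRESH are assumed to be defined as in the question):

    codec = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter('out.avi', codec, fps, (output_width, output_height))

    total_frames = FRAME_RECORDING_THRESH
    while total_frames > 0:
        frame = frame_buffer.get()
        frame = frame[:, :, :3]                                   # drop the alpha channel (BGRA -> BGR)
        frame = cv2.resize(frame, (output_width, output_height))  # match the size given to VideoWriter
        out.write(frame)
        total_frames -= 1

    out.release()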