python, opencv, asynchronous, openvino

OpenVINO async inference for a set of images


I'm trying to implement asynchronous OpenVINO inference for my service, which receives its input images from a RabbitMQ queue.

I found an official tutorial, https://docs.openvino.ai/2023.2/notebooks/115-async-api-with-output.html, but the implementation there uses a video stream, and I'm having difficulty changing the video input to image input in the async_api method. They do something like this (abridged), where VideoPlayer is a standard cv2 video stream wrapper.

...
player = utils.VideoPlayer(source, flip=flip, fps=fps, skip_first_frames=skip_first_frames)
player.start()
# submit the first frame on the "current" request
frame = player.next()
curr_request.set_tensor(input_layer_ir, ov.Tensor(frame))
curr_request.start_async()
while True:
    # grab the next frame and submit it on the "next" request
    next_frame = player.next()
    next_request.set_tensor(input_layer_ir, ov.Tensor(next_frame))
    next_request.start_async()
    # wait for the current request; the two requests are then swapped
    curr_request.wait()
...

So if I want to use a list of images or a message consumer instead, what is a better approach for getting the next frame?


Solution

  • OpenVINO™ does provide a Python sample that runs Image Classification using the Asynchronous Inference Request API with image inputs instead of a video stream.

    Sample code for the Image Classification Async Python Sample is available from our OpenVINO repository.

    The sample accepts input images via paths to one or more image files and runs inference asynchronously. Furthermore, the image file argument accepts a list of paths, so the easiest approach is to supply your list of image paths (a related sketch follows below).
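
    If you prefer to keep everything inside your own service, here is a minimal sketch (not the official sample) of running async inference over a list of image paths with OpenVINO's AsyncInferQueue, which manages the pool of infer requests for you instead of the manual curr/next swapping from the notebook. The model path "model.xml", the NCHW resize preprocessing, and the jobs=4 pool size are assumptions to adapt to your model; the same loop works if the paths come from your RabbitMQ consumer.

    import cv2
    import numpy as np
    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")            # assumed model path
    compiled = core.compile_model(model, "CPU")
    input_layer = compiled.input(0)
    n, c, h, w = input_layer.shape                  # assumes an NCHW input

    results = {}

    def on_done(request, image_path):
        # callback runs in a worker thread when one inference finishes
        results[image_path] = request.get_output_tensor(0).data.copy()

    infer_queue = ov.AsyncInferQueue(compiled, jobs=4)   # pool of 4 parallel infer requests
    infer_queue.set_callback(on_done)

    image_paths = ["img1.jpg", "img2.jpg", "img3.jpg"]   # or paths taken from your queue
    for path in image_paths:
        img = cv2.imread(path)
        blob = cv2.resize(img, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
        # start_async() blocks only when all jobs are busy, otherwise it returns immediately
        infer_queue.start_async({input_layer.any_name: blob}, userdata=path)

    infer_queue.wait_all()    # block until every submitted request has completed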