Tags: multithreading, docker, flask, yolo, onnx

Handling Multiple ONNX Runtime Sessions Sequentially in Docker


I have a Flask-based API for running computer vision models (YOLO and classifiers) with ONNX Runtime. The models were originally trained in PyTorch and converted to ONNX format. In the local environment the system performs well, loading and running inference on the different ONNX models sequentially. When deployed in Docker, however, only the first ONNX model loaded is available for inference, and additional inference sessions cannot be initiated.

The process flow involves:

  1. Loading the YOLO model in ONNX Runtime for initial inference.

  2. Cropping images based on YOLO output.

  3. Sending cropped images to various classifiers (also ONNX models) sequentially.

YOLO Model Expectations: The YOLO (You Only Look Once) model used is designed to work with input images of size 640x640 pixels.

Classifier Model Expectations: The classifier models, however, expect input images of size 224x224 pixels.
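For context, a minimal preprocessing sketch covering both sizes, assuming BGR NumPy images, NCHW float32 inputs, and simple [0, 1] scaling (the normalization must match whatever the models were trained with; `image_np` is the full frame and `cropped_np` stands for a per-box crop):

import cv2
import numpy as np

def preprocess(image, size):
    """Resize to size x size and convert HWC uint8 to NCHW float32."""
    resized = cv2.resize(image, (size, size))
    blob = resized.astype(np.float32) / 255.0  # assumed [0, 1] scaling
    blob = np.transpose(blob, (2, 0, 1))       # HWC -> CHW
    return blob[np.newaxis, ...]               # add batch dim -> (1, 3, size, size)

yolo_input = preprocess(image_np, 640)    # YOLO expects 640x640
clf_input = preprocess(cropped_np, 224)   # classifiers expect 224x224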

I suspect this might be a resource allocation or session management issue within the Docker environment. The primary question is whether implementing multi-threading within the Docker container could resolve this and, if so, how to approach it.
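Before reaching for threads, a cheap diagnostic is to print each session's declared input shape at startup; since the YOLO model and the classifiers expect different sizes, a mixed-up model path is immediately visible. A small sketch (`sessionYolo` is an assumed name; `sessionStoneType` matches the traceback below):

def describe(name, session):
    """Log the declared input shapes so a wrong model file is obvious."""
    for inp in session.get_inputs():
        print(f"{name}: {inp.name} -> {inp.shape}")

describe("yolo", sessionYolo)            # should report something like [1, 3, 640, 640]
describe("stonetype", sessionStoneType)  # should report something like [1, 3, 224, 224]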

# Flask app initialization and route definition
# ...
@app.route("/predict", methods=["POST"])
def predict():
    # ...
    # Step 1: YOLO model to detect boxes
    yolo_response = yolo_predict(image_np)
    # ...
    for box in boxes:
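        # (elided) crop `box` from image_np and resize to 224x224 -> resized_image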
        # Sequential processing of classifiers
        stonetype_result = stonetype_predict(resized_image)
        cut_result = cut_predict(resized_image)
        color_result = color_predict(resized_image)
        # ...
    return jsonify(results)
# ...

The models are loaded for inference using:

import onnxruntime as ort

def load_model(onnx_file_path):
    """Load the ONNX model into its own inference session."""
    session = ort.InferenceSession(onnx_file_path)
    return session

def infer(session, image_tensor):
    """Run model inference."""
    input_name = session.get_inputs()[0].name
    output = session.run(None, {input_name: image_tensor})
    return output
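For what it's worth, ONNX Runtime has no problem keeping several InferenceSession objects alive in one process, and a session's run() method is safe to call from multiple threads, so each model can be loaded once at startup and reused across requests. A usage sketch building on the helpers above (model paths and the `yolo_tensor` / `crop_tensor` inputs are assumptions):

# Load each model once at startup; every call to load_model()
# returns an independent session with its own graph.
yolo_session = load_model("/usr/src/app/services/yolo_service/yolo.onnx")
stonetype_session = load_model("/usr/src/app/services/stonetype_service/stonetype.onnx")

# Each session enforces its own input shape:
# 640x640 for YOLO, 224x224 for the classifiers.
boxes = infer(yolo_session, yolo_tensor)
stone = infer(stonetype_session, crop_tensor)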

Expected Behavior:

Each model (YOLO and the subsequent classifiers) should be loaded and run independently in its own ONNX Runtime session within the Docker environment, just as in the local setup.

Observed Behavior:

Only the first model (YOLO) loaded in ONNX Runtime is available for inference. Subsequent attempts to load additional models for inference within the same Docker session are unsuccessful.

Dockerfile

# Use an official Python runtime as a parent image
FROM python:3.10-slim

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the current directory contents into the container at /usr/src/app
COPY . /usr/src/app

# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV MODEL_PATH=/usr/src/app/services/yolo_service/yolo.onnx

# Run server.py when the container launches
CMD ["python", "server.py"]

Error / output:

[2024-01-29 13:03:20,899] ERROR in app: Exception on /predict [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 1463, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 872, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 870, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 855, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/usr/src/app/server.py", line 38, in predict
    stonetype_result = stonetype_predict(resized_image)
  File "/usr/src/app/services/stonetype_service/app/server.py", line 35, in predict
    output = sessionStoneType.run(None, {input_name: image_tensor})
  File "/usr/local/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: images for the following indices
 index: 2 Got: 224 Expected: 640
 index: 3 Got: 224 Expected: 640
Please fix either the inputs or the model.

Solution

  • Fix: To resolve this issue:

    Environment Variables in Dockerfile: I introduced a separate environment variable for each model directly within the Dockerfile, rather than the single MODEL_PATH shared by every service. The traceback shows the stone-type session rejecting 224x224 input because the graph it loaded expects 640x640, which indicates that the classifier service had loaded the YOLO model rather than its own.
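A sketch of the resulting Dockerfile lines (the classifier model filenames and service paths are illustrative; adjust to the actual layout):

# One variable per model instead of a single shared MODEL_PATH
ENV YOLO_MODEL_PATH=/usr/src/app/services/yolo_service/yolo.onnx
ENV STONETYPE_MODEL_PATH=/usr/src/app/services/stonetype_service/stonetype.onnx
ENV CUT_MODEL_PATH=/usr/src/app/services/cut_service/cut.onnx
ENV COLOR_MODEL_PATH=/usr/src/app/services/color_service/color.onnx

Each service then resolves its own variable at startup, e.g.:

import os
import onnxruntime as ort

# Each service reads its own variable, so no two sessions
# can end up pointing at the same model file.
sessionStoneType = ort.InferenceSession(os.environ["STONETYPE_MODEL_PATH"])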