docker, dockerfile, cuda, nvidia, nvidia-docker

Unable to use GPU in custom Docker container built on top of nvidia/cuda image despite --gpus all flag


I am trying to run a Docker container that needs access to my host's NVIDIA GPU, so I start it with the --gpus all flag. When I run the container with the nvidia-smi command, I can see an active GPU, which indicates that the container has access to it. However, when I try to use TensorFlow, PyTorch, or ONNX Runtime inside the container, none of these libraries can detect or use the GPU.
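
For example, these are the kinds of checks that come back negative inside the container (a minimal sketch; the TensorFlow call is commented out because the Dockerfile below does not install TensorFlow explicitly):

import torch

# Prints False inside the container, even though --gpus all is passed
print(torch.cuda.is_available())

# TensorFlow equivalent, if TensorFlow is installed:
# import tensorflow as tf
# print(tf.config.list_physical_devices("GPU"))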

Specifically, when I run the container with the following command, ONNX Runtime reports only the CPUExecutionProvider, not the CUDAExecutionProvider:

sudo docker run --gpus all mycontainer:latest
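
The provider list comes from a check like this (a minimal sketch; the actual check in main.py may differ):

import onnxruntime as rt

# Prints ['CPUExecutionProvider'] only; no CUDAExecutionProvider entry
print(rt.get_available_providers())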

However, when I run the same container with the nvidia-smi command, the GPU shows up as active:

sudo docker run --gpus all mycontainer:latest nvidia-smi

This is the nvidia-smi output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.29.05    Driver Version: 495.29.05    CUDA Version: 11.5     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| N/A   44C    P0    27W /  N/A |     10MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

And this is the Dockerfile I built mycontainer with:

FROM nvidia/cuda:11.5.0-base-ubuntu20.04

WORKDIR /home

COPY requirements.txt /home/requirements.txt

# Add the deadsnakes PPA for Python 3.10
RUN apt-get update && \
    apt-get install -y software-properties-common libgl1-mesa-glx cmake protobuf-compiler && \
    add-apt-repository ppa:deadsnakes/ppa && \
    apt-get update

# Install Python 3.10 and dev packages
RUN apt-get update && \
    apt-get install -y python3.10 python3.10-dev python3-pip  && \
    rm -rf /var/lib/apt/lists/*

# Install virtualenv
RUN pip3 install virtualenv

# Create a virtual environment with Python 3.10
RUN virtualenv -p python3.10 venv

# Activate the virtual environment
ENV PATH="/home/venv/bin:$PATH"

# Install Python dependencies
RUN pip3 install --upgrade pip \
    && pip3 install --default-timeout=10000000 torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116 \
    && pip3 install --default-timeout=10000000 -r requirements.txt

# Copy files
COPY /src /home/src

# Set the PYTHONPATH and LD_LIBRARY_PATH environment variables to include the CUDA libraries
ENV PYTHONPATH=/usr/local/cuda-11.5/lib64
ENV LD_LIBRARY_PATH=/usr/local/cuda-11.5/lib64

# Set the CUDA_PATH and CUDA_HOME environment variables to point to the CUDA installation directory
ENV CUDA_PATH=/usr/local/cuda-11.5
ENV CUDA_HOME=/usr/local/cuda-11.5

# Set the default command
CMD ["sh", "-c", ". /home/venv/bin/activate && python main.py $@"]

I have checked that the versions of TensorFlow, PyTorch, and ONNX Runtime I am using are compatible with the CUDA version installed on my system. I have also set the LD_LIBRARY_PATH environment variable to include the path to the CUDA libraries, passed the --gpus all flag when starting the container, and configured the NVIDIA Docker runtime and device plugin. Despite these steps, I still cannot access the GPU from TensorFlow, PyTorch, or ONNX Runtime inside the container. What could be causing this issue, and how can I resolve it? Please let me know if you need further information.


Solution

  • You should install onnxruntime-gpu to get the CUDAExecutionProvider; the plain onnxruntime package ships only the CPU provider. For example:

    docker run --gpus all -it nvcr.io/nvidia/pytorch:22.12-py3 bash
    pip install onnxruntime-gpu
    python3 -c "import onnxruntime as rt; print(rt.get_device())"
    GPU
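
    Once onnxruntime-gpu is installed, you can also request the CUDA provider explicitly when creating a session (a minimal sketch; "model.onnx" is a placeholder for your model file):

    import onnxruntime as rt

    # CUDAExecutionProvider should now appear among the available providers
    print(rt.get_available_providers())

    # Request the GPU explicitly, with CPU as a fallback
    sess = rt.InferenceSession(
        "model.onnx",  # placeholder path
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    print(sess.get_providers())  # CUDAExecutionProvider should come first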