Tags: docker, google-cloud-platform, deep-learning, google-dl-platform

How to get NVIDIA driver using a deeplearning-platform-release VM image?


I am running into an issue where I need to have the NVIDIA driver installed.

I initially created a Compute Engine VM with the following command:

export IMAGE_FAMILY="pytorch-latest-cu100"
export ZONE="us-west1-b"
export INSTANCE_NAME="my-instance"

gcloud compute instances create $INSTANCE_NAME \
  --zone=$ZONE \
  --image-family=$IMAGE_FAMILY \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --accelerator="type=nvidia-tesla-v100,count=1" \
  --metadata="install-nvidia-driver=True"
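
As a sanity check (not part of the original post), the install-nvidia-driver metadata flag should have installed the driver on the VM itself; one way to verify is to run nvidia-smi over SSH, reusing the instance name and zone from the command above:

```shell
# Verify the driver on the VM itself (names taken from the command above)
gcloud compute ssh my-instance --zone=us-west1-b --command="nvidia-smi"
# If the driver is installed, this prints the driver version and the attached Tesla V100
```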

My code that's deployed on this VM works fine. Now I need to put a REST API layer over it, so according to this, I need to containerize the application using Docker.

I first tried building my Docker image from gcr.io/deeplearning-platform-release/pytorch-latest-cu100 (matching the image family in the command above), but it seems this image doesn't exist.

I then tried building another image from gcr.io/deeplearning-platform-release/pytorch-gpu.1-1,

but now when I run my code, I get the following error:

Traceback (most recent call last):
  File "model.py", line 297, in run
    data = main(filepath)
  File "model.py", line 52, in main
    model = model.cuda()
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 260, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 193, in _apply
    param.data = fn(param.data)
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 260, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/root/miniconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 161, in _lazy_init
    _check_driver()
  File "/root/miniconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 82, in _check_driver
    http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx

My Dockerfile:

   FROM gcr.io/deeplearning-platform-release/pytorch-gpu.1-1
   WORKDIR /app
   COPY requirements.txt /app
   RUN pip install --no-cache-dir -r requirements.txt
   EXPOSE 8080
   COPY . /app/
   CMD [ "python","main.py" ]

My main.py:

from flask import Flask, request

import model

app = Flask(__name__)

@app.route('/getduration', methods=['POST'])
def get_duration():
    try:
        data = request.args.get('param')
    except Exception:
        data = None
    try:
        duration = model.run(data)
        return duration, 200
    except Exception as e:
        error = f"There was an error: {e}"
        return error, 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)  # bind to all interfaces so the exposed container port is reachable
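
Independent of the container fix, a defensive sketch (not from the original code; the helper name is mine) that lets the service start even when no driver is visible, instead of raising at the first request:

```python
import torch

def pick_device() -> torch.device:
    # Use CUDA only when a driver and GPU are actually visible;
    # otherwise fall back to CPU so the API can still respond.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

# model = model.to(pick_device())  # instead of an unconditional model.cuda()
```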

How can I update my Dockerfile so that my container can use the NVIDIA driver?


Solution

  • Are you using NVIDIA Docker? If not, that is likely your problem. Use nvidia-docker exactly as you would docker, and it will make the host's NVIDIA driver available inside your container.

    https://github.com/NVIDIA/nvidia-docker
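
    A minimal sketch of the build-and-run flow, assuming the Dockerfile above is in the current directory (the image tag my-pytorch-api is an arbitrary name, not from the question):

```shell
# Build the image as usual
docker build -t my-pytorch-api .

# Run with the NVIDIA runtime so the host driver is mounted into the container
nvidia-docker run -d -p 8080:8080 my-pytorch-api

# On Docker 19.03+ with the NVIDIA Container Toolkit installed,
# the equivalent without the nvidia-docker wrapper is:
docker run --gpus all -d -p 8080:8080 my-pytorch-api
```

    Note that the NVIDIA runtime only passes the driver through from the host, so the VM itself still needs the driver installed (which the install-nvidia-driver=True metadata flag handles).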