I'm building an image that requires testing GPU usability along the way. GPU containers run well:
$ docker run --rm --runtime=nvidia nvidia/cuda:9.2-devel-ubuntu18.04 nvidia-smi
Wed Aug  7 07:53:25 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.54                 Driver Version: 396.54                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN X (Pascal)    Off  | 00000000:04:00.0 Off |                  N/A |
| 24%   43C    P8    17W / 250W |   2607MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
but it fails when building an image that needs the GPU:
$ cat Dockerfile
FROM nvidia/cuda:9.2-devel-ubuntu18.04
RUN nvidia-smi
# RUN build something
# RUN tests require GPU
$ docker build .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM nvidia/cuda:9.2-devel-ubuntu18.04
---> cdf6d16df818
Step 2/2 : RUN nvidia-smi
---> Running in 88f12f9dd7a5
/bin/sh: 1: nvidia-smi: not found
The command '/bin/sh -c nvidia-smi' returned a non-zero code: 127
I'm new to Docker, but I think sanity checks are needed when building an image. So how can I build a Docker image with the CUDA runtime available?
Configuring the Docker daemon with --default-runtime=nvidia solved the problem.
Please refer to this wiki for more info.
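For reference, here is a minimal sketch of that configuration, assuming the nvidia-docker2 / nvidia-container-runtime package is already installed and the host uses systemd: register the nvidia runtime and make it the default in /etc/docker/daemon.json, then restart the daemon so docker build picks it up.

$ cat /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
$ sudo systemctl restart docker   # assumes systemd; restart however your distro manages the docker service
$ docker build .                  # RUN nvidia-smi should now succeed inside the build container

Note that this makes the nvidia runtime the default for every container and every build on that host, which is usually fine on a dedicated GPU build machine but is worth keeping in mind on shared hosts.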