I've been following this tutorial from Google Coral on retraining an object detection model in Docker, and it explicitly states that it is for CPU-only training, which is very slow.
Is there an easy way to port this Docker container to utilize the GPU (an NVIDIA GTX 1080)? I have installed nvidia-docker2 and have successfully passed my GPU into other containers, and as far as I know into this one as well, using the --gpus all flag. The nvidia-smi command works from within the container, so I am almost certain that the GPU has been passed through successfully; however, it is not used when training the model.
CUDA version is 11.4 according to nvidia-smi, both inside and outside of the container, and I am using Ubuntu 20.04.
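For reference, here is a minimal check (not part of the Coral tutorial, just a sketch assuming python and TensorFlow are on the container's PATH) to see whether TensorFlow itself can see the GPU, rather than only nvidia-smi:

    # Run inside the container; prints True if TensorFlow 1.x can use a GPU
    python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"

If this prints False even though nvidia-smi works, the image's TensorFlow build is the likely culprit rather than the GPU pass-through.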
Answering myself to close the question, since I see no way to do it in a comment. The solution was this comment from sebastian-sz:
"tensorflow/tensorflow:1.15.5 is cpu only image, you should use tensorflow/tensorflow:1.15.5-gpu to use CUDA. – sebastian-sz Jan 21 at 14:36"
Thank you for your help.