python deep-learning pytorch gpu resnet

resnet50.to() function on a non-NVIDIA GPU


I am trying to move the pretrained ResNet50 model onto the GPU using the PyTorch method resnet50.to().

The problem is that I am using an Intel Iris Plus Graphics 655 (1536 MB) GPU on a Mac, and I don't know what argument to pass to the method, as the only one I have found is for NVIDIA GPUs (resnet50.to('cuda:0')).
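
For context, the call I am referring to looks roughly like this (assuming the pretrained model comes from torchvision.models):

```python
import torch
from torchvision import models

# Load the pretrained ResNet50 from torchvision
resnet50 = models.resnet50(pretrained=True)

# This is the NVIDIA-specific call: it targets the first CUDA device
# and fails on a machine without a CUDA-capable GPU.
resnet50 = resnet50.to('cuda:0')
```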


Solution

  • PyTorch uses Nvidia's CUDA API for all GPU interactions. Other GPUs that don't utilise the CUDA API (such as AMD or Intel GPUs) are therefore not supported.

    If you don't have an Nvidia GPU, you cannot run PyTorch on the GPU; the model has to stay on the CPU (see the sketch below).
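
A common pattern is to check for CUDA availability and fall back to the CPU, so the same code runs on machines with and without an Nvidia GPU. A minimal sketch, assuming the torchvision pretrained ResNet50 from the question:

```python
import torch
from torchvision import models

# Use the first CUDA device if an Nvidia GPU is present, otherwise the CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

resnet50 = models.resnet50(pretrained=True)
resnet50 = resnet50.to(device)   # on this Mac, device resolves to 'cpu'
resnet50.eval()

# Inference works the same on either device; the inputs just need to live
# on the same device as the model.
dummy_input = torch.randn(1, 3, 224, 224, device=device)
with torch.no_grad():
    output = resnet50(dummy_input)
```

On a machine without CUDA this simply keeps everything on the CPU, which is slower but works with the same code.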