cuda, pytorch

How to make CUDA unavailable in PyTorch


I'm running some code that uses CUDA, and I need to test the same code on CPU to compare running times. To decide between a regular PyTorch tensor and a CUDA float tensor, the library I use calls torch.cuda.is_available(). Is there an easy way to make this function return False? I tried changing the CUDA visible devices with

os.environ["CUDA_VISIBLE_DEVICES"]=""

but torch.cuda.is_available() still returns True. I looked into the PyTorch source code, and in my case torch.cuda.is_available returns

torch._C._cuda_getDeviceCount() > 0

I assume I should be able to "hide" my GPU at the start of my notebook so that the device count is zero, but I haven't had any success so far. Any help is appreciated :)


Solution


    Instead of trying to trick it, why not rewrite your code? For example,

    # .get() avoids a KeyError when USE_CPU is unset
    use_gpu = torch.cuda.is_available() and not os.environ.get('USE_CPU')
    

    Then you can start your program as python runme.py to run on GPU if available, and as USE_CPU=1 python3 runme.py to force CPU execution (or make it semi-permanent with export USE_CPU=1).

    "I tried changing the CUDA visible devices with"

    You can also run your code as CUDA_VISIBLE_DEVICES="" python3 runme.py. If you set the environment variable inside your code instead, it may take effect only after PyTorch has already initialised CUDA, in which case it has no effect.
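To see why setting the variable from the shell is reliable, note that a child process inherits its environment before it executes any code. The sketch below just echoes the variable from a child interpreter instead of importing torch, to keep it runnable without PyTorch installed:

```python
import os
import subprocess
import sys

# The child inherits CUDA_VISIBLE_DEVICES="" before running anything,
# so an `import torch` inside it would see no GPUs at all.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="")
out = subprocess.run(
    [sys.executable, "-c",
     "import os; print(repr(os.environ['CUDA_VISIBLE_DEVICES']))"],
    env=env, capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # ''
```

Setting os.environ inside your script only works if it happens before the first import torch; doing it from the shell removes that ordering concern entirely.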
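The USE_CPU gate from the first snippet can also be checked in isolation. A minimal sketch of that logic, with a cuda_available argument standing in for torch.cuda.is_available() so it runs without a GPU or PyTorch:

```python
import os

def want_gpu(cuda_available):
    # Any non-empty USE_CPU value forces CPU; .get() returns None when unset
    return cuda_available and not os.environ.get("USE_CPU")

os.environ.pop("USE_CPU", None)
print(want_gpu(True))   # True: CUDA available, no override

os.environ["USE_CPU"] = "1"
print(want_gpu(True))   # False: override forces CPU
```

Note that the value is a string, so USE_CPU=0 would still force CPU; unset the variable to re-enable the GPU.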