tensorflow, pytorch

How to set up TF and Torch in one virtual environment with the same CUDA


I want to set up TensorFlow and PyTorch in one virtual environment using the same CUDA version. However, I cannot find a CUDA version that supports both: for TensorFlow 2.10 I selected CUDA 11.2, but CUDA 11.2 does not appear in the compatibility list for PyTorch; I could only find CUDA 11.1 there. Detailed information is listed below.

  1. To find the CUDA version for TensorFlow: https://www.tensorflow.org/install/source_windows#tested_build_configurations

  2. To find the CUDA version for PyTorch: https://elenacliu-pytorch-cuda-driver.streamlit.app/
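Once both packages are installed in the environment, this is the small check I would run to see which CUDA/cuDNN build each framework actually reports (a rough sketch; it assumes both tensorflow and torch are installed, and the keys returned by TensorFlow's build info can vary between builds):

    # Report the CUDA/cuDNN versions each framework was built against.
    # Assumes tensorflow and torch are both installed in the active environment.
    import tensorflow as tf
    import torch

    build = tf.sysconfig.get_build_info()
    print("TF CUDA:", build.get("cuda_version"), "cuDNN:", build.get("cudnn_version"))
    print("TF sees GPU:", tf.config.list_physical_devices("GPU"))

    print("Torch CUDA:", torch.version.cuda)            # None on CPU-only builds
    print("Torch cuDNN:", torch.backends.cudnn.version())
    print("Torch sees GPU:", torch.cuda.is_available())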

Will there be any problems if I install two different CUDA versions and want to run code on the GPU? For example, after I create a virtual environment with "conda create --name myenv python=3.10", I want to run TensorFlow code for project 1 and PyTorch code for project 2.

Do I need to modify the CUDA_PATH system variable every time before I run the code, i.e., point CUDA_PATH at CUDA 11.1 when I need PyTorch and at CUDA 11.2 when I need TensorFlow?

I see there is also the option of installing CUDA 11.0, which is compatible with TF 2.4 and PyTorch 1.7, but that combination does not support CUDA compute capability sm_86. Would that be a problem in terms of losing access to newer features of my GPU?
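For reference, this is how I checked my card's compute capability and which architectures the installed PyTorch binary was compiled for (a sketch; it assumes a CUDA build of PyTorch is installed, and get_arch_list() may be missing on older releases):

    # Inspect the GPU's compute capability and the SM targets in the torch wheel.
    import torch

    print("Device:", torch.cuda.get_device_name(0))
    print("Compute capability:", torch.cuda.get_device_capability(0))  # (8, 6) means sm_86
    print("Wheel compiled for:", torch.cuda.get_arch_list())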


Solution

  • There are indeed no pre-built binaries of PyTorch with CUDA 11.2. If you necessarily want to go with this version of CUDA, you have two choices, I think; I'm basically repeating what is said in this PyTorch thread, so you can read it for the details.

    (Opinion here) I would not try to keep multiple versions of CUDA installed and manually "hot-swap" them by tinkering with the CUDA paths. From experience, that can work, but it is also very error-prone and will eventually lead to problems.
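    Whichever route you take, a quick sanity check inside that single environment is to run one small operation on the GPU from each framework (a minimal sketch; it assumes both packages were installed with GPU support and a CUDA-capable card is visible):

        # Smoke test: execute a tiny matmul on the GPU with both frameworks.
        import tensorflow as tf
        import torch

        with tf.device("/GPU:0"):
            x = tf.random.normal([1024, 1024])
            print("TF on GPU ok, checksum:", float(tf.reduce_sum(tf.matmul(x, x))))

        y = torch.rand(1024, 1024, device="cuda")
        print("Torch on GPU ok, checksum:", float((y @ y).sum()))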