machine-learning pytorch cuda gpu

How can I enable CUDA in PyTorch for Nvidia GeForce RTX 3050 Ti?


I want to run the PyTorch library, which I am using in a virtual environment in PyCharm, on my graphics card, an Nvidia GeForce RTX 3050 Ti. However, everything runs on the CPU, and whenever I import torch and run print("cuda is available:", torch.cuda.is_available()), it always returns False.
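
A minimal version of that check, with a couple of extra prints that show which build is actually installed, looks like this:

    import torch

    print("torch version:", torch.__version__)
    print("built against CUDA:", torch.version.cuda)    # None for CPU-only builds
    print("cuda is available:", torch.cuda.is_available())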

I have CUDA version 12.6 installed. I also installed PyTorch for CUDA version 12.4 because it was the latest version available on the PyTorch website. What should I install considering my graphics card type?


Solution

  • You need to install the build of PyTorch that is compatible with your CUDA version. For that, head over to the website: https://pytorch.org/

    Use the install selector on that page to choose your OS, package manager, and Python version.

    Pick the latest CUDA build (12.4) and run the command the selector generates in your command terminal (an example is shown below).
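
    For pip on Windows with CUDA 12.4 selected, the generated command should look roughly like this (copy the exact command from the site, since it changes as new builds are released):

        pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124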

    Then install the same CUDA version (12.4) from Nvidia: https://developer.nvidia.com/cuda-12-4-0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local

    Make sure the CUDA toolkit version you download matches the one your PyTorch build was made for.

    Then check the installation with these commands:

        nvidia-smi
        nvcc --version

    Ensure the versions for PyTorch and CUDA are the same. Hope this helps!
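
    After reinstalling, a quick sanity check like the following (a minimal sketch, assuming a single-GPU setup) should report the RTX 3050 Ti and run a small tensor operation on it:

        import torch

        print("cuda is available:", torch.cuda.is_available())
        if torch.cuda.is_available():
            print("device name:", torch.cuda.get_device_name(0))  # should show the GeForce RTX 3050 Ti
            x = torch.rand(3, 3, device="cuda")                   # allocate a tensor directly on the GPU
            print("tensor device:", x.device)                     # expected: cuda:0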