Tags: pytorch, ubuntu-18.04, mamba, torchaudio

Installing Torchaudio for PyTorch 1.10.0 with CUDA 11.0


On my Ubuntu 18.04 machine I have a virtual environment that contains pytorch=1.10.0=cuda110py38hf84197b_0. My CUDA version is 11.0, which I've checked by running nvidia-smi. I would like to install torchaudio.
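
For completeness, here is how I check this: nvidia-smi reports the CUDA version the driver supports, and the cudatoolkit package inside the environment is what pytorch actually links against. A minimal sketch, assuming mamba and the environment's python are on the PATH:

# driver-side CUDA version reported by the NVIDIA driver
nvidia-smi
# toolkit version installed in this conda environment
mamba list cudatoolkit
# pytorch version and the CUDA version it was built against
python -c "import torch; print(torch.__version__, torch.version.cuda)"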

I've attempted this using mamba install torchaudio=0.10.0 -c pytorch. However, this tries to upgrade the CUDA build of my pytorch from 11.0 to 11.2. Similarly, if I try installing torchaudio=0.9.1, mamba wants to downgrade my pytorch version from 1.10.0 to 1.9.1.
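
A useful intermediate step is to ask the solver what it would do without actually changing anything. This is only a preview sketch using the --dry-run flag, which both conda and mamba support:

# show the transaction mamba proposes, without applying it
mamba install --dry-run torchaudio=0.10.0 -c pytorch
mamba install --dry-run torchaudio=0.9.1 -c pytorch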

This is an old machine with several-year-old code that relies on these old packages. Ideally, I would like to install torchaudio while modifying as few packages/CUDA drivers as possible. Is there a way to install torchaudio for the pytorch 1.10.0 CUDA 11.0 build? I've checked the official pytorch releases page and it seems this option isn't even listed (so maybe I've already answered my own question, but I'm asking here as well).
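
One way to double-check that conclusion from the command line is to query the package index and look at each build's dependencies. Shown here with conda search, which mamba installations normally also provide; --info prints the pytorch/cudatoolkit pins of every build:

# list all torchaudio 0.10.0 builds on the pytorch channel with their dependencies
conda search "torchaudio=0.10.0" -c pytorch --info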

For reference, here is a basic environment containing my pytorch-related packages. Thank you.

name: torchbase
channels:
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - _libgcc_mutex=0.1=conda_forge
  - _openmp_mutex=4.5=1_llvm
  - bzip2=1.0.8=h7f98852_4
  - ca-certificates=2023.11.17=hbcca054_0
  - cffi=1.15.0=py38h3931269_0
  - cudatoolkit=11.0.3=h15472ef_9
  - cudnn=8.2.1.32=h86fa8c9_0
  - future=0.18.2=py38h578d9bd_4
  - icu=68.2=h9c3ff4c_0
  - ld_impl_linux-64=2.36.1=hea4e1c9_2
  - libblas=3.9.0=12_linux64_mkl
  - libcblas=3.9.0=12_linux64_mkl
  - libffi=3.4.2=h7f98852_5
  - libgcc-ng=11.2.0=h1d223b6_11
  - libiconv=1.16=h516909a_0
  - liblapack=3.9.0=12_linux64_mkl
  - libnsl=2.0.0=h7f98852_0
  - libprotobuf=3.18.1=h780b84a_0
  - libstdcxx-ng=11.2.0=he4da1e4_11
  - libuuid=2.32.1=h7f98852_1000
  - libxml2=2.9.12=h72842e0_0
  - libzlib=1.2.11=h36c2ea0_1013
  - llvm-openmp=12.0.1=h4bd325d_1
  - magma=2.5.4=h4a2bb80_2
  - mkl=2021.4.0=h8d4b97c_729
  - nccl=2.11.4.1=h96e36e3_0
  - ncurses=6.2=h58526e2_4
  - ninja=1.10.2=h4bd325d_1
  - numpy=1.22.3=py38h99721a1_2
  - openssl=1.1.1o=h166bdaf_0
  - pip=23.3.2=pyhd8ed1ab_0
  - pycparser=2.21=pyhd8ed1ab_0
  - python=3.8.12=hb7a2778_2_cpython
  - python_abi=3.8=2_cp38
  - pytorch=1.10.0=cuda110py38hf84197b_0
  - pytorch-gpu=1.10.0=cuda110py38h5b0ac8e_0
  - readline=8.1=h46c0cb4_0
  - setuptools=60.1.1=py38h578d9bd_0
  - sleef=3.5.1=h9b69904_2
  - tbb=2021.5.0=h4bd325d_0
  - tk=8.6.11=h27826a3_1
  - typing_extensions=4.0.1=pyha770c72_0
  - wheel=0.37.1=pyhd8ed1ab_0
  - xz=5.2.5=h516909a_1
  - zlib=1.2.11=h36c2ea0_1013
  - zstd=1.5.1=ha95c52a_0

Solution

  • Yes, you are right; unfortunately, there is no such option.

    I think upgrading the pytorch CUDA build from 11.0 to 11.2 is less critical than downgrading the pytorch version: the former is a minor version change (11.0 -> 11.2), while the latter is an (almost) major one (1.10.0 -> 1.9.1).
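
    In practice that means pinning what you want to keep and letting the solver move only the CUDA toolkit. A minimal sketch under that assumption (preview with --dry-run first and abort if the plan touches more than cudatoolkit and the pytorch build string):

    # preview: pin pytorch 1.10.0 so only its CUDA build (and cudatoolkit) may change
    mamba install --dry-run "pytorch=1.10.0" "torchaudio=0.10.0" -c pytorch -c conda-forge
    # if the plan only bumps cudatoolkit 11.0 -> 11.2 plus the matching pytorch build, apply it
    mamba install "pytorch=1.10.0" "torchaudio=0.10.0" -c pytorch -c conda-forge
    # sanity check afterwards
    python -c "import torch, torchaudio; print(torch.__version__, torch.version.cuda, torchaudio.__version__)"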