Tags: multithreading, ubuntu, pytorch, htop

Far too many resources are used by PyTorch


I am using PyTorch to train a DQN model. On Ubuntu, if I run htop, I get

[htop screenshot: all CPU cores fully utilized]

As you can see, all resources are used, and I am a bit worried about that. Here is my code.

Is there a way to use fewer resources? Do I have to specify this requirement to PyTorch?

Be aware that there are no GPUs on my machine, just CPUs.


Solution

  • Yes, there is. You can use torch.set_num_threads(...) to specify the number of threads PyTorch uses. Depending on the PyTorch version you use, this function may not work correctly; see why in this issue. There you'll see that, if needed, you can also limit OpenMP or MKL thread usage via the environment variables OMP_NUM_THREADS=? and MKL_NUM_THREADS=? respectively, where ? is the number of threads.

    Keep in mind that these workloads are expected to run on GPUs with thousands of cores, so on a CPU-only machine I would limit thread usage only when extremely necessary.
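    A minimal sketch of both approaches (the thread count of 4 is an arbitrary example; pick what fits your machine). Note that OMP_NUM_THREADS and MKL_NUM_THREADS are read when the libraries are loaded, so they must be set before torch is imported:

    ```python
    import os

    # Cap OpenMP/MKL thread pools. This must happen before `import torch`,
    # because the pools read these variables once at library load time.
    os.environ["OMP_NUM_THREADS"] = "4"
    os.environ["MKL_NUM_THREADS"] = "4"

    import torch

    # Also cap PyTorch's own intra-op parallelism explicitly.
    torch.set_num_threads(4)
    print(torch.get_num_threads())
    ```

    Alternatively, set the variables in the shell when launching your script, e.g. `OMP_NUM_THREADS=4 MKL_NUM_THREADS=4 python your_script.py` (script name hypothetical), which avoids touching the code at all.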