I am currently using the Darts library to do some time series forecasting with an LSTM.
In my model arguments I set it to use my GPU, and the output shown on the dashboard during training indicates that my GPU is indeed being used.
The documentation also confirms that the output I am seeing means the GPU is being used for training: https://unit8co.github.io/darts/userguide/gpu_and_tpu_usage.html.
However, Task Manager reports my GPU usage as 0%, which does not make sense to me. Does anyone know what goes on behind the scenes and could explain why the GPU usage would be at 0% during training?
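For context, the GPU is enabled roughly as in the linked guide; this is a minimal sketch of my setup (the series and hyperparameters here are placeholders, not my actual ones):

```python
import numpy as np
from darts import TimeSeries
from darts.models import RNNModel

# Placeholder data -- my real series comes from elsewhere
series = TimeSeries.from_values(np.sin(np.arange(300) / 10).astype(np.float32))

# LSTM model configured to train on the GPU, following the linked guide
model = RNNModel(
    model="LSTM",
    input_chunk_length=24,
    training_length=36,
    n_epochs=10,
    pl_trainer_kwargs={"accelerator": "gpu", "devices": [0]},
)
model.fit(series)
```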
It might be that your GPU is under-used, e.g. because your CPU is not feeding it data fast enough. Try playing with the num_loader_workers parameter, increasing the batch size, and reading the recommendations provided here.
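For example, something along these lines (the values are only starting points to experiment with, and the placeholder series stands in for your own data):

```python
import numpy as np
from darts import TimeSeries
from darts.models import RNNModel

# Placeholder series; substitute your own training data
series = TimeSeries.from_values(np.sin(np.arange(1000) / 10).astype(np.float32))

# A larger batch size gives the GPU more work per step; batch_size is a
# constructor argument of Darts' torch-based models (the default is 32,
# so the value here is just something to tune from).
model = RNNModel(
    model="LSTM",
    input_chunk_length=24,
    training_length=36,
    batch_size=256,
    n_epochs=10,
    pl_trainer_kwargs={"accelerator": "gpu", "devices": [0]},
)

# num_loader_workers is passed to fit() and is forwarded to the PyTorch
# DataLoader; it controls how many CPU worker processes prepare batches
# in parallel, which helps keep the GPU fed.
model.fit(series, num_loader_workers=4)
```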