Tags: python, memory-leaks, pytorch

PyTorch GPU memory keeps increasing with every batch


I'm training a CNN model on images. Initially, I was training on image patches of size (256, 256) and everything was fine. Then I changed my dataloader to load full HD images (1080, 1920) and crop them after some processing. Since that change, GPU memory keeps increasing with every batch. Why is this happening?

PS: While tracking losses, I'm calling loss.detach().item() so that the loss tensor and its computation graph are not retained.
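For context, here is a tiny, self-contained illustration of that PS (the variable names are hypothetical, not from the original code): appending `loss.detach().item()` stores a plain Python float, whereas appending the loss tensor itself would keep each iteration's computation graph alive.

```python
import torch

# Hypothetical example: tracking losses as floats vs. as tensors.
w = torch.randn(10, requires_grad=True)
losses = []
for _ in range(3):
    loss = (w ** 2).sum()
    losses.append(loss.detach().item())   # stores a float; graph can be freed
    # losses.append(loss)                 # this would retain the whole graph
    loss.backward()
print(losses)
```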


Solution

  • As suggested here, deleting the input, output, and loss tensors at the end of each iteration helped.

    Additionally, my batch data was stored in a dictionary. Just deleting the dictionary isn't sufficient; I had to iterate over the dict's entries and delete each of them as well. A sketch of the resulting training loop is shown below.
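The following is a minimal sketch of that cleanup, assuming the dataloader yields dicts of tensors. The model, loss, dict keys ("image", "target"), and the fake loader are placeholders standing in for the asker's actual setup, not the original code.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device)   # placeholder model
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def fake_loader(num_batches=4):
    # Stand-in for the real dataloader: yields dicts of full-HD tensors.
    for _ in range(num_batches):
        yield {"image": torch.rand(1, 3, 1080, 1920),
               "target": torch.rand(1, 3, 1080, 1920)}

running_loss = 0.0
for batch in fake_loader():
    inputs = batch["image"].to(device)
    targets = batch["target"].to(device)

    outputs = model(inputs)
    loss = criterion(outputs, targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    running_loss += loss.detach().item()  # keep only a Python float

    # Explicitly drop every reference that holds GPU memory before the next
    # iteration: each dict entry, the dict itself, and the local tensors.
    for key in list(batch.keys()):
        del batch[key]
    del batch, inputs, targets, outputs, loss
```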