import torch
from torch.autograd import Variable
x = Variable(torch.FloatTensor([11.2]), requires_grad=True)
y = 2 * x
print(x)
print(y)
print(x.data)
print(y.data)
print(x.grad_fn)
print(y.grad_fn)
y.backward() # Calculates the gradients
print(x.grad)
print(y.grad)
Error:
C:\Users\donhu\AppData\Local\Temp\ipykernel_9572\106071707.py:2: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten\src\ATen/core/TensorBody.h:485.)
print(y.grad)
Source code: https://github.com/donhuvy/Deep-learning-with-PyTorch-video/blob/master/1.5.variables.ipynb
How do I fix this?
Call y.retain_grad() before calling y.backward().
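For example, a minimal sketch of the fix (note: torch.tensor(..., requires_grad=True) replaces the Variable wrapper, which has been deprecated since PyTorch 0.4, but the same retain_grad() call works with your original code too):

import torch

x = torch.tensor([11.2], requires_grad=True)  # leaf tensor
y = 2 * x                                     # non-leaf tensor
y.retain_grad()  # ask autograd to keep y's gradient after backward()
y.backward()
print(x.grad)  # tensor([2.])
print(y.grad)  # tensor([1.]) -- now populated, no warning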
The reason is that, by default, PyTorch only populates .grad for leaf tensors (tensors that aren't the result of an operation), which in your example is x. To ensure .grad is also populated for non-leaf tensors like y, you need to call their .retain_grad() method.
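You can check which tensors are leaves with the is_leaf attribute:

print(x.is_leaf)  # True  -- created directly by the user
print(y.is_leaf)  # False -- produced by an operation (see y.grad_fn)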
Also worth noting that it's a warning rather than an error: x.grad is still computed correctly; only y.grad remains None until you call retain_grad().