Tags: python, pytorch

PyTorch: optimizer doesn't work and "RuntimeError: element 0 of tensors..."


I get the error RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.

If I set something like loss_fin.requires_grad = True, the code runs, but the optimizer doesn't update the parameters.

import torch

def loss(x1):
    # mean absolute deviation of the outputs from the target value 1.0
    abs_dev = torch.abs(x1 - 1.0)
    return abs_dev.mean()

x = torch.rand(4)  # network input (not shown in the original question; shape inferred from w)
w = torch.rand(3, 4)
BH = torch.rand(3)
WH = torch.rand(3)
BO = torch.rand(1)

optimizer = torch.optim.SGD([w, BH, WH, BO], lr=0.1)
for n in range(100):
    optimizer.zero_grad()
    input_for_hidden = torch.matmul(w, x)
    inactivated_at_hidden_layer = input_for_hidden + BH
    output_hidden_layer = torch.sigmoid(inactivated_at_hidden_layer)
    input_for_out_layer = torch.matmul(output_hidden_layer, WH)
    inactivated_output = input_for_out_layer + BO
    output_for_out_layer = torch.sigmoid(inactivated_output)

    loss_fin = loss(output_for_out_layer)
    loss_fin.backward()
    optimizer.step()

I'm a newbie, so I'd be grateful if someone could explain in simple terms why this doesn't work.


Solution

  • All tensors passed to the optimizer need to be created with requires_grad=True. Tensors built with torch.rand default to requires_grad=False, so autograd records no computation graph, loss_fin ends up without a grad_fn, and loss_fin.backward() raises exactly the RuntimeError you saw. Setting loss_fin.requires_grad = True afterwards only silences the error: it marks the loss itself as a leaf tensor, but the graph back to w, BH, WH, and BO was never recorded, so their .grad stays None and optimizer.step() has nothing to apply. Create the parameters like this instead:

    w = torch.rand(3, 4, requires_grad=True)
    BH = torch.rand(3, requires_grad=True)
    WH = torch.rand(3, requires_grad=True)
    BO = torch.rand(1, requires_grad=True)
    
    optimizer = torch.optim.SGD([w, BH, WH, BO], lr=0.1)
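
  • To sanity-check that the fix actually trains, watch the loss shrink and confirm gradients reach the parameters. A minimal sketch; the input x = torch.rand(4) is an assumption here, since the question doesn't show how x is created:

    import torch

    def loss(x1):
        return torch.abs(x1 - 1.0).mean()

    x = torch.rand(4)  # assumed input; shape inferred from w
    w = torch.rand(3, 4, requires_grad=True)
    BH = torch.rand(3, requires_grad=True)
    WH = torch.rand(3, requires_grad=True)
    BO = torch.rand(1, requires_grad=True)

    optimizer = torch.optim.SGD([w, BH, WH, BO], lr=0.1)
    for n in range(100):
        optimizer.zero_grad()
        hidden = torch.sigmoid(torch.matmul(w, x) + BH)
        out = torch.sigmoid(torch.matmul(hidden, WH) + BO)
        loss_fin = loss(out)
        loss_fin.backward()  # now builds and traverses a real graph
        optimizer.step()
        if n % 20 == 0:
            print(n, loss_fin.item())  # printed loss should decrease toward 0

    assert w.grad is not None  # gradients flowed back to the parameters

    Equivalently, for tensors that already exist you can flip the flag in place with w.requires_grad_(); inside an nn.Module, wrapping tensors in torch.nn.Parameter achieves the same thing and registers them with the module automatically.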