Tags: python, pytorch, derivative, autodiff

Getting the gradient of a vectorized function in PyTorch


I am brand new to PyTorch and want to do what I assume is a very simple thing but am having a lot of difficulty.

I have the function sin(x) * cos(x) + x^2 and I want to get the derivative of that function at any point.

If I do this with a single point, it works perfectly:

x = torch.autograd.Variable(torch.Tensor([4]), requires_grad=True)
y = torch.sin(x)*torch.cos(x)+torch.pow(x,2)
y.backward()
print(x.grad) # outputs tensor([7.8545])

However, I want to be able to pass in a vector as x and for it to evaluate the derivative element-wise. For example:

Input: [4., 4., 4.]
Output: tensor([7.8545, 7.8545, 7.8545])

But I can't seem to get this working.

I tried simply doing

x = torch.tensor([4., 4., 4., 4.], requires_grad=True)
out = torch.sin(x)*torch.cos(x)+x.pow(2)
out.backward()
print(x.grad)

But I get the error "RuntimeError: grad can be implicitly created only for scalar outputs"

How do I adjust this code for vectors?

Thanks in advance,


Solution

  • Here you can find a relevant discussion of this error.

    In essence, when you call backward() without arguments it is implicitly converted to backward(torch.Tensor([1])). The tensor you pass is the seed gradient, i.e. the gradient of the final scalar quantity with respect to the output, and this implicit conversion only works when the output is a single scalar (or a one-element tensor).
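
    As a minimal sketch of that equivalence (using the same function as in the question), explicitly passing a seed gradient of ones gives the same result as the no-argument call:

    import torch
    
    x = torch.tensor([4.0], requires_grad=True)
    y = torch.sin(x) * torch.cos(x) + x.pow(2)
    # Passing a seed gradient of ones is equivalent to calling y.backward()
    # with no arguments, since y has a single element
    y.backward(torch.ones_like(y))
    print(x.grad)  # tensor([7.8545])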

    If you pass 4 (or more) values as input, the output is no longer a scalar, so each output element needs its own seed gradient. Because each output element here depends only on the corresponding input element, passing a tensor of ones (torch.ones_like) to backward gives you the element-wise derivative:

    import torch
    
    x = torch.tensor([4.0, 2.0, 1.5, 0.5], requires_grad=True)
    out = torch.sin(x) * torch.cos(x) + x.pow(2)
    # Pass a tensor of ones as the seed gradient, one for each element of out
    out.backward(torch.ones_like(x))
    print(x.grad)
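
    Since the derivative of sin(x)*cos(x) + x^2 is cos(2x) + 2x, the printed gradient should be approximately tensor([7.8545, 3.3464, 2.0100, 1.5403]).

    As a further sketch (an equivalent pattern, not the only way to do this): you can also reduce the output to a scalar yourself, e.g. by summing it. Because each output element depends only on the matching input element, the summed backward produces the same element-wise gradients:

    import torch
    
    x = torch.tensor([4.0, 2.0, 1.5, 0.5], requires_grad=True)
    out = torch.sin(x) * torch.cos(x) + x.pow(2)
    # Summing gives a scalar, so backward() needs no explicit gradient;
    # the result matches the ones_like approach above
    out.sum().backward()
    print(x.grad)  # tensor([7.8545, 3.3464, 2.0100, 1.5403])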