What is the difference between torch.tensor and torch.Tensor? What was the reasoning for providing these two very similar and confusing alternatives?
In PyTorch, torch.Tensor is the main tensor class, so all tensors are just instances of torch.Tensor.
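A quick check makes the class relationship concrete (a minimal sketch; the variable name is just for illustration):

import torch

t = torch.tensor([1.0, 2.0])        # created via the factory function
print(isinstance(t, torch.Tensor))  # True: still an instance of the class
print(type(t))                      # <class 'torch.Tensor'>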
When you call torch.Tensor() with no arguments, you get an empty tensor without any data.
In contrast, torch.tensor is a function which returns a tensor. The documentation says:

torch.tensor(data, dtype=None, device=None, requires_grad=False) → Tensor

Constructs a tensor with data.
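For example (a minimal sketch; the values and variable names are just for illustration):

import torch

a = torch.tensor([[1, 2], [3, 4]])                 # dtype inferred as torch.int64
b = torch.tensor([1.0, 2.0], dtype=torch.float64)  # explicit dtype
c = torch.tensor([1.0, 2.0], requires_grad=True)   # tensor that tracks gradients
print(a.dtype)          # torch.int64
print(b.dtype)          # torch.float64
print(c.requires_grad)  # True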
tensor_without_data = torch.Tensor()
But on the other hand:

tensor_without_data = torch.tensor()

will lead to an error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-ebc3ceaa76d2> in <module>()
----> 1 torch.tensor()
TypeError: tensor() missing 1 required positional arguments: "data"
The behaviour of torch.Tensor(), creating a tensor without any data, can be reproduced with torch.tensor by passing an empty tuple:

torch.tensor(())
Output:
tensor([])
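To double-check that the two empty tensors are equivalent (a quick sketch, assuming the default dtype has not been changed from torch.float32):

import torch

a = torch.Tensor()    # empty tensor, default dtype
b = torch.tensor(())  # empty tensor built from an empty tuple
print(a, a.dtype)          # tensor([]) torch.float32
print(b, b.dtype)          # tensor([]) torch.float32
print(a.shape == b.shape)  # True (both torch.Size([0]))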