Tags: python, pytorch

What is the difference between torch.tensor and torch.Tensor?


What is the difference between torch.tensor and torch.Tensor? What was the reasoning for providing these two very similar and confusing alternatives?


Solution

  • In PyTorch, `torch.Tensor` is the main tensor class, so all tensors are instances of `torch.Tensor`.

    When you call `torch.Tensor()`, you will get an empty tensor without any data.
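
    For illustration, a minimal sketch of this behaviour (exact printed output can vary between PyTorch versions):

    import torch

    # every tensor, however it was created, is an instance of the torch.Tensor class
    t = torch.ones(3)
    print(isinstance(t, torch.Tensor))   # True

    # calling the class with no arguments gives an empty, data-less tensor
    empty = torch.Tensor()
    print(empty)         # tensor([])
    print(empty.shape)   # torch.Size([0])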

    In contrast, `torch.tensor` is a function that returns a tensor. The documentation says:

    torch.tensor(data, dtype=None, device=None, requires_grad=False) → Tensor
    

    Constructs a tensor with data.
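
    For illustration, the dtype inference and the keyword arguments from that signature work roughly like this (a small sketch; defaults may differ slightly between versions):

    import torch

    # the dtype is inferred from the data that is passed in
    a = torch.tensor([1, 2, 3])         # dtype: torch.int64
    b = torch.tensor([1.0, 2.0, 3.0])   # dtype: torch.float32

    # the documented keyword arguments override these defaults
    c = torch.tensor([1, 2, 3], dtype=torch.float64,
                     device='cpu', requires_grad=True)
    print(a.dtype, b.dtype, c.dtype)    # torch.int64 torch.float32 torch.float64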


    This also explains why it is no problem to create an empty instance of `torch.Tensor` without `data` by calling:
    tensor_without_data = torch.Tensor()
    

    But on the other hand, calling:

    tensor_without_data = torch.tensor()
    

    will lead to an error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-12-ebc3ceaa76d2> in <module>()
    ----> 1 torch.tensor()
    
    TypeError: tensor() missing 1 required positional arguments: "data"
    

    But in general there is no reason to choose `torch.Tensor` over `torch.tensor`. Also `torch.Tensor` lacks a docstring.
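
    One concrete pitfall worth knowing (a quick sketch; the uninitialized values are arbitrary): the two calls interpret a plain integer argument very differently.

    import torch

    # torch.Tensor(3) treats 3 as a size: an uninitialized float tensor of shape (3,)
    x = torch.Tensor(3)
    print(x.shape, x.dtype)   # torch.Size([3]) torch.float32  (contents are arbitrary)

    # torch.tensor(3) treats 3 as data: a 0-dimensional tensor holding the value 3
    y = torch.tensor(3)
    print(y.shape, y.dtype)   # torch.Size([]) torch.int64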

    Behaviour similar to `torch.Tensor()` (creating a tensor without data) can be achieved with:

    torch.tensor(())
    

    Output:

    tensor([])
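
    As a quick check that the two empty tensors really are interchangeable (a small sketch; both use the default float32 dtype):

    import torch

    a = torch.Tensor()     # empty tensor from the class constructor
    b = torch.tensor(())   # empty tensor from the factory function
    print(a.shape, b.shape)    # torch.Size([0]) torch.Size([0])
    print(torch.equal(a, b))   # True: same shape and (no) elements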