Tags: numpy, pytorch, casting

Does PyTorch tensor type cast preserve information?


Consider the following simple sequence of operations:

>>> t
tensor([ 1.8750, -0.6875, -1.1250, -1.3750,  1.3750, -1.1250,  0.4688, -0.4062,
         0.8750, -1.7500], dtype=torch.float8_e4m3fn)
>>> t.view(torch.uint8)
tensor([ 63, 179, 185, 187,  59, 185,  47, 173,  54, 190], dtype=torch.uint8)
>>> t.view(torch.uint8).shape
torch.Size([10])
>>> t.view(torch.uint8).numpy()
array([ 63, 179, 185, 187,  59, 185,  47, 173,  54, 190], dtype=uint8)
>>> torch.as_tensor(t.view(torch.uint8).numpy())
tensor([ 63, 179, 185, 187,  59, 185,  47, 173,  54, 190], dtype=torch.uint8)
>>> torch.as_tensor(t.view(torch.uint8).numpy()).view(torch.float8_e4m3fn)
tensor([ 1.8750, -0.6875, -1.1250, -1.3750,  1.3750, -1.1250,  0.4688, -0.4062,
         0.8750, -1.7500], dtype=torch.float8_e4m3fn)

I am confused about how information is preserved across typecast conversions. The original tensor is of type float8, which is then converted to uint8 (0-255). The uint8 numpy array is then used to initialize a float8 tensor. Shouldn't this order of conversions result in loss of information?


Solution

  • All the operations you are using, namely view, numpy and as_tensor, affect only the "metadata" of the tensor, i.e. how the data stored inside should be interpreted. None of them actually changes a single bit in the underlying array of numbers (which can be interpreted however you want).

    You can check in the documentation for these three operations (numpy, view, as_tensor) that they all mention sharing the storage/data (with some exceptions that don't apply to your code, such as GPU tensors).

    So when you go full circle, not a single bit has changed, and thus you can perfectly recover the initial tensor (see the sketch below).
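    A minimal sketch of that round trip, assuming a PyTorch build with float8 support (torch.float8_e4m3fn, the 1-byte dtype that matches the bit patterns above); the data_ptr checks show that the bytes are shared throughout, never copied or converted:

    import torch

    # A float8 tensor: 1 byte per element, same element size as uint8.
    t = torch.tensor([1.875, -0.6875, 0.46875]).to(torch.float8_e4m3fn)

    # view() only changes how the stored bytes are interpreted; storage is shared.
    u = t.view(torch.uint8)
    print(t.data_ptr() == u.data_ptr())      # True: same memory, no copy

    # numpy() and as_tensor() also share that memory (CPU tensor, no autograd).
    a = u.numpy()
    back = torch.as_tensor(a).view(torch.float8_e4m3fn)
    print(back.data_ptr() == t.data_ptr())   # True: still the very same bytes

    # Bit-for-bit identical, so the original float8 values are recovered exactly.
    print(torch.equal(back.view(torch.uint8), t.view(torch.uint8)))  # True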