python, pytorch, neural-network, torch, cross-entropy

PyTorch - RuntimeError: Expected floating point type for target with class probabilities, got Long


I use this code:

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.00001)

loss = loss_fn(preds, labels)  # Error

Error:

in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
   2844     if size_average is not None or reduce is not None:
   2845         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2846     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
   2847 
   2848 

RuntimeError: Expected floating point type for target with class probabilities, got Long

Solution

  • RuntimeError: Expected floating point type for target with class probabilities, got Long

    The error message is fairly explicit: you need to convert the dtype of your target tensor to float. nn.CrossEntropyLoss accepts targets either as integer class indices or as class probabilities; here your targets are being treated as class probabilities, and probabilities are naturally floating point numbers, so the target tensor must be float rather than Long. For example, a target tensor a = [1, 0, 0, 1] needs to become [1.0, 0.0, 0.0, 1.0].

    You can use this table below to inspect all the types.

    ╔══════════════════════════╦═══════════════════════════════╦════════════════════╦═════════════════════════╗
    ║        Data type         ║             dtype             ║     CPU tensor     ║       GPU tensor        ║
    ╠══════════════════════════╬═══════════════════════════════╬════════════════════╬═════════════════════════╣
    ║ 32-bit floating point    ║ torch.float32 or torch.float  ║ torch.FloatTensor  ║ torch.cuda.FloatTensor  ║
    ║ 64-bit floating point    ║ torch.float64 or torch.double ║ torch.DoubleTensor ║ torch.cuda.DoubleTensor ║
    ║ 16-bit floating point    ║ torch.float16 or torch.half   ║ torch.HalfTensor   ║ torch.cuda.HalfTensor   ║
    ║ 8-bit integer (unsigned) ║ torch.uint8                   ║ torch.ByteTensor   ║ torch.cuda.ByteTensor   ║
    ║ 8-bit integer (signed)   ║ torch.int8                    ║ torch.CharTensor   ║ torch.cuda.CharTensor   ║
    ║ 16-bit integer (signed)  ║ torch.int16 or torch.short    ║ torch.ShortTensor  ║ torch.cuda.ShortTensor  ║
    ║ 32-bit integer (signed)  ║ torch.int32 or torch.int      ║ torch.IntTensor    ║ torch.cuda.IntTensor    ║
    ║ 64-bit integer (signed)  ║ torch.int64 or torch.long     ║ torch.LongTensor   ║ torch.cuda.LongTensor   ║
    ║ Boolean                  ║ torch.bool                    ║ torch.BoolTensor   ║ torch.cuda.BoolTensor   ║
    ╚══════════════════════════╩═══════════════════════════════╩════════════════════╩═════════════════════════╝
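
    If you are not sure which row of the table your tensors fall into, you can print their .dtype attribute. A minimal check, using a stand-in labels tensor similar to the one in the example above:

    import torch

    labels = torch.tensor([1, 0, 0, 1])
    print(labels.dtype)           # torch.int64, i.e. torch.long / LongTensor
    print(labels.float().dtype)   # torch.float32 after casting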
    

    And to cast a tensor to another dtype you can use something like

    sample_tensor = sample_tensor.type(torch.FloatTensor)
    

    or

    sample_tensor = sample_tensor.to(torch.float)
    

    (Reassigning is necessary here: both .type() and .to() return a new tensor rather than modifying the original in place.)
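
    Putting it together, here is a minimal sketch of the fix. It assumes preds are raw logits of shape (batch, num_classes) and labels are one-hot / probability targets of the same shape, which is the situation that triggers the "class probabilities" path of CrossEntropyLoss:

    import torch
    import torch.nn as nn

    loss_fn = nn.CrossEntropyLoss()

    # Assumed shapes: a batch of 2 samples, 3 classes.
    preds = torch.randn(2, 3)                      # raw logits, float
    labels = torch.tensor([[1, 0, 0], [0, 0, 1]])  # one-hot targets, dtype Long

    # loss_fn(preds, labels)   # raises: Expected floating point type for target ...

    labels = labels.float()          # cast the targets to float
    loss = loss_fn(preds, labels)    # works now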