python, pytorch, tensor, softmax

How can I reduce the dimension of a tensor after applying Softmax?


I have a tensor of scores (let's call it logits_tensor) with shape (1910, 164, 33).

Taking a look at one slice, logits_tensor[0][0]:

tensor([-2.5916, -1.5290, -0.8218, -0.8882, -2.0961, -2.1064, -0.7842, -1.5200,
        -2.1324, -1.5561, -2.4731, -2.1933, -2.8489, -1.8257, -1.8033, -1.8771,
        -2.8365,  0.6690, -0.6895, -1.7054, -2.4862, -0.8104, -1.5395, -1.1351,
        -2.7154, -1.7646, -2.6595, -2.0591, -2.7554, -1.8661, -2.7512, -2.0655,
         5.7374])

Now, by applying a softmax

probs_tensor = torch.nn.functional.softmax(logits_tensor, dim=-1)

I obtain another tensor of the same shape that contains probabilities; here is probs_tensor[0][0]:

tensor([2.3554e-04, 6.8166e-04, 1.3825e-03, 1.2937e-03, 3.8660e-04, 3.8263e-04,
        1.4356e-03, 6.8778e-04, 3.7283e-04, 6.6341e-04, 2.6517e-04, 3.5078e-04,
        1.8211e-04, 5.0665e-04, 5.1810e-04, 4.8127e-04, 1.8438e-04, 6.1396e-03,
        1.5782e-03, 5.7138e-04, 2.6173e-04, 1.3984e-03, 6.7454e-04, 1.0107e-03,
        2.0812e-04, 5.3857e-04, 2.2009e-04, 4.0118e-04, 1.9996e-04, 4.8660e-04,
        2.0079e-04, 3.9860e-04, 9.7570e-01])

What I'd like to obtain is a tensor of shape (1910, 164) that contains, for each of the 164 elements, the index of the max probability shown above, like this:

predictions[0]
> tensor([32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32,  1, 17, 17, 17,
       17, 17, 17, 17, 17, 17, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32,
       32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32,  0,  0,  0,
        0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
        0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
        0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
        0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
        0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
        0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
        0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0])

Note that "32" is the index of the higher probability element in probs_tensor[0][0]. The same task can be achieved by using torch.argmax but I need the softmax step.


Solution

  • Indeed, you can apply torch.argmax to the tensor:

    >>> logits_tensor = torch.rand(1910, 164, 33)
    >>> probs_tensor = logits_tensor.softmax(-1)
    
    >>> probs_tensor.argmax(-1).shape
    torch.Size([1910, 164])
    

    Do note that applying argmax to probs_tensor is identical to applying it to logits_tensor: softmax is monotonically increasing, so the logit with the highest value also ends up with the highest probability mass.
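
    As a quick sanity check (continuing the random example above, so not your actual data), the two argmax calls agree element-wise:

    >>> torch.equal(logits_tensor.argmax(-1), probs_tensor.argmax(-1))
    True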