I found a post here that tries to find an equivalent of tf.nn.softmax_cross_entropy_with_logits in PyTorch, but the answer is still confusing to me.
Here is the TensorFlow 2 code:
import tensorflow as tf
import numpy as np
# here we assume a batch size of 2 with 5 classes
preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])
tf_preds = tf.convert_to_tensor(preds, dtype=tf.float32)
tf_labels = tf.convert_to_tensor(labels, dtype=tf.float32)
loss = tf.nn.softmax_cross_entropy_with_logits(logits=tf_preds, labels=tf_labels)
It gives me the loss as:
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.2427604, 1.0636061], dtype=float32)>
Here is the PyTorch code:
import torch
import numpy as np
preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])
torch_preds = torch.tensor(preds).float()
torch_labels = torch.tensor(labels).float()
loss = torch.nn.functional.cross_entropy(torch_preds, torch_labels)
However, it raises:
RuntimeError: 1D target tensor expected, multi-target not supported
It seems that the problem is still unsolved. How can I implement tf.nn.softmax_cross_entropy_with_logits in PyTorch? And what about tf.nn.sigmoid_cross_entropy_with_logits?
tf.nn.softmax_cross_entropy_with_logits
Edit: this is actually not equivalent to F.cross_entropy. The latter can only handle the single-class classification setting, not the more general case of multi-class classification, where a label can comprise multiple classes. Indeed, F.cross_entropy takes a unique class id as target (per instance), not a probability distribution over classes, which is what tf.nn.softmax_cross_entropy_with_logits expects to receive.
>>> logits = torch.tensor([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
>>> labels = torch.tensor([[1.0, 0.0, 0.0], [0.0, 0.8, 0.2]])
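To make the interface difference concrete, here is the call F.cross_entropy does accept, with one integer class id per instance (the ids 0 and 1 below are arbitrary example targets):

>>> F.cross_entropy(logits, torch.tensor([0, 1]), reduction='none')
tensor([0.1698, 0.0247])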
To get the desired soft-label result, apply a log-softmax to your logits and then take the negative log-likelihood:
>>> -torch.sum(F.log_softmax(logits, dim=1) * labels, dim=1)
tensor([0.1698, 0.8247])
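Applied to the tensors from the question, this one-liner reproduces the TensorFlow output up to print precision. If you need it in several places, you could wrap it in a small helper; the function name below is my own choice, mirroring the TF op:

>>> def softmax_cross_entropy_with_logits(labels, logits, dim=-1):
...     # soft-label cross entropy: one loss value per instance
...     return -torch.sum(F.log_softmax(logits, dim=dim) * labels, dim=dim)
...
>>> preds = torch.tensor([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
>>> target = torch.tensor([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])
>>> softmax_cross_entropy_with_logits(target, preds)
tensor([1.2428, 1.0636])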
tf.nn.sigmoid_cross_entropy_with_logits
For this one you can apply F.binary_cross_entropy_with_logits.
>>> F.binary_cross_entropy_with_logits(logits, labels, reduction='none')
tensor([[0.0181, 2.1269, 1.3133],
        [0.6931, 1.0067, 1.1133]])
It is equivalent to applying a sigmoid and then the negative log-likelihood, treating each class as a binary classification task:
>>> labels*-torch.log(torch.sigmoid(logits)) + (1-labels)*-torch.log(1-torch.sigmoid(logits))
tensor([[0.0181, 2.1269, 1.3133],
        [0.6931, 1.0067, 1.1133]])
having imported torch.nn.functional as F.
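As a side note, F.binary_cross_entropy_with_logits is also the numerically safer option: per the PyTorch docs it combines the sigmoid and the log into one op using the log-sum-exp trick, whereas the manual formulation above overflows to infinity for large-magnitude logits. A quick sketch (the 100.0 logit is just an arbitrary large value for illustration):

>>> big_logits = torch.tensor([100.0])
>>> big_labels = torch.tensor([0.0])
>>> big_labels*-torch.log(torch.sigmoid(big_logits)) + (1-big_labels)*-torch.log(1-torch.sigmoid(big_logits))
tensor([inf])
>>> F.binary_cross_entropy_with_logits(big_logits, big_labels, reduction='none')
tensor([100.])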