python, tensorflow, machine-learning, probability-theory

Why explain logits as 'unscaled log probabilities' in softmax_cross_entropy_with_logits?


In the TensorFlow documentation (softmax_cross_entropy_with_logits), it says "logits: unscaled log probabilities". What is a 'log probability'? So far, I have understood 'logits' as the output before normalization, i.e. a score for each class.

import tensorflow as tf

logits = tf.matmul(X, W) + b          # unnormalized scores ("logits")
hypothesis = tf.nn.softmax(logits)    # normalized class probabilities

If I get [1.5, 2.4, 0.7] from tf.matmul(X, W) + b, then [1.5, 2.4, 0.7] is the logits (scores), and these values are unscaled. I can follow it up to this point. But I can't understand why [1.5, 2.4, 0.7] is a 'log probability'.
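A minimal sketch of the relationship (using NumPy instead of TensorFlow, with the example scores from above): softmax exponentiates the logits and renormalizes them, so each logit equals the log of the resulting probability plus a constant log(Z), where Z is the normalizing sum. In that sense the logits are log probabilities that have not yet been "scaled" (normalized) by Z.

import numpy as np

logits = np.array([1.5, 2.4, 0.7])                  # scores from tf.matmul(X, W) + b
probs = np.exp(logits) / np.exp(logits).sum()       # what tf.nn.softmax(logits) computes
log_Z = np.log(np.exp(logits).sum())                # log of the normalizing constant Z
print(np.allclose(np.log(probs), logits - log_Z))   # True: logits = log(probs) + log_Z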


Solution

  • Thanks, everyone!

    I found this post, which provided a close answer to my question.