tensorflow · machine-learning · deep-learning · neural-network · loss-function

What is a loss function in simple words?


Can anyone please explain in simple words, and possibly with some examples, what a loss function is in the field of machine learning/neural networks?

This came out while I was following a Tensorflow tutorial: https://www.tensorflow.org/get_started/get_started


Solution

  • The loss function is how you're penalizing your output.

    The following example is for a supervised setting, i.e. one where you know what the correct result should be. That said, loss functions can also be applied in unsupervised settings.

    Suppose you have a model that always predicts 1. Just the scalar value 1.

    You can apply many different loss functions to this model. The L2 loss is the squared Euclidean distance between the prediction and the target.

    If I pass in some value, say 2, and I want my model to learn the x**2 function, then the result should be 4 (because 2*2 = 4). If we apply the L2 loss, it's computed as ||4 - 1||^2 = 9.

    We can also make up our own loss function. For example, we can say the loss is always 10: no matter what our model outputs, the loss stays constant.

    Why do we care about loss functions? They measure how poorly the model did, and, in the context of backpropagation and neural networks, they determine the gradients that are propagated back from the final layer so the model can learn. (Notice that the constant loss above has zero gradient everywhere, so a model trained with it could never learn anything.)

    As other comments have suggested, I think you should start with basic material. Here's a good link to start off with: http://neuralnetworksanddeeplearning.com/
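The worked example above (the constant-1 model, the L2 loss of 9, and the role of the gradient in learning) can be sketched in a few lines of plain Python. The learning rate and number of steps here are made-up values for illustration, not anything from the tutorial:

```python
# Toy illustration: a one-parameter model pred(x) = w, which we want
# to output x**2 for the single input x = 2 (target = 4).

def l2_loss(pred, target):
    # Squared error: ||target - pred||^2
    return (target - pred) ** 2

target = 2 ** 2   # the model should learn x**2, so the target is 4
w = 1.0           # the model that "always predicts 1"

print(l2_loss(w, target))  # ||4 - 1||^2 = 9.0

# The gradient of the L2 loss w.r.t. the prediction is 2 * (pred - target).
# Gradient descent uses it to nudge the model toward the target:
lr = 0.1  # made-up learning rate
for _ in range(50):
    grad = 2 * (w - target)
    w -= lr * grad

print(round(w, 3))  # close to 4.0 after 50 steps

# With the made-up constant loss (always 10), grad would be 0 everywhere,
# so the update w -= lr * grad would never change w: no learning.
```

Running this shows the loss starting at 9 and the parameter converging toward 4, which is exactly the mechanism the answer describes: the loss defines the penalty, and its gradient tells the model which way to move.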