python, tensorflow, keras, reinforcement-learning

Using Tensorflow Huber loss in Keras


I am trying to use Huber loss in a Keras model (writing a DQN), but I am getting bad results and I think I am doing something wrong. My code is below.

model = Sequential()
model.add(Dense(output_dim=64, activation='relu', input_dim=state_dim))
model.add(Dense(output_dim=number_of_actions, activation='linear'))
loss = tf.losses.huber_loss(delta=1.0)
model.compile(loss=loss, opt='sgd')
return model

Solution

  • I came here with the exact same question. The accepted answer uses logcosh, which may have similar properties, but it isn't exactly Huber loss. Here's how I implemented Huber loss for Keras (note that I'm using Keras from TensorFlow 1.5).

    import numpy as np
    import tensorflow as tf
    
    '''
     ' Huber loss.
     ' https://jaromiru.com/2017/05/27/on-using-huber-loss-in-deep-q-learning/
     ' https://en.wikipedia.org/wiki/Huber_loss
    '''
    def huber_loss(y_true, y_pred, clip_delta=1.0):
      error = y_true - y_pred
      cond  = tf.keras.backend.abs(error) < clip_delta
    
      squared_loss = 0.5 * tf.keras.backend.square(error)
      linear_loss  = clip_delta * (tf.keras.backend.abs(error) - 0.5 * clip_delta)
    
      return tf.where(cond, squared_loss, linear_loss)
    
    '''
     ' Same as above but returns the mean loss.
    '''
    def huber_loss_mean(y_true, y_pred, clip_delta=1.0):
      return tf.keras.backend.mean(huber_loss(y_true, y_pred, clip_delta))
    

    Depending on whether you want the element-wise loss or the mean loss, use the corresponding function above.
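
    As a rough sketch of how this plugs back into the model from the question (assuming the huber_loss_mean defined above is in scope, and using placeholder values for state_dim and number_of_actions), compiling could look like this:

    import tensorflow as tf

    # Placeholder sizes; substitute your environment's actual dimensions.
    state_dim = 4
    number_of_actions = 2

    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.Dense(64, activation='relu', input_dim=state_dim))
    model.add(tf.keras.layers.Dense(number_of_actions, activation='linear'))

    # Pass the loss function itself (Keras calls it with y_true and y_pred);
    # also note the compile keyword is `optimizer`, not `opt`.
    model.compile(loss=huber_loss_mean, optimizer='sgd')

    Keras then evaluates the loss on each batch. If you want a different clip_delta, you can wrap the function, e.g. loss=lambda y_true, y_pred: huber_loss_mean(y_true, y_pred, clip_delta=2.0).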