python, neural-network, keras, deep-learning, dropout

Adding Dropout to testing/inference phase


I've trained the following model on some time-series data in Keras:

    from keras.layers import Input, Dense, Dropout
    from keras.models import Model

    input_layer = Input(batch_shape=(56, 3864))
    first_layer = Dense(24, input_dim=28, activation='relu',
                        activity_regularizer=None,
                        kernel_regularizer=None)(input_layer)
    first_layer = Dropout(0.3)(first_layer)
    second_layer = Dense(12, activation='relu')(first_layer)
    second_layer = Dropout(0.3)(second_layer)
    out = Dense(56)(second_layer)
    model_1 = Model(input_layer, out)

Then I defined a new model with the trained layers of model_1 and added dropout layers with a different rate, drp, to it:

    input_2 = Input(batch_shape=(56, 3864))
    first_dense_layer = model_1.layers[1](input_2)
    first_dropout_layer = model_1.layers[2](first_dense_layer)
    new_dropout = Dropout(drp)(first_dropout_layer)
    snd_dense_layer = model_1.layers[3](new_dropout)
    snd_dropout_layer = model_1.layers[4](snd_dense_layer)
    new_dropout_2 = Dropout(drp)(snd_dropout_layer)
    output = model_1.layers[5](new_dropout_2)
    model_2 = Model(input_2, output)

Then I'm getting the prediction results of these two models as follows:

    result_1 = model_1.predict(test_data, batch_size=56)
    result_2 = model_2.predict(test_data, batch_size=56)

I was expecting to get completely different results because the second model has new dropout layers and these two models are different (IMO), but that's not the case: both generate the same results. Why is that happening?


Solution

  • As I mentioned in the comments, the Dropout layer is turned off in the inference phase (i.e. test mode), so when you use model.predict() the Dropout layers are not active. However, if you would like to have a model that uses Dropout in both the training and inference phases, you can pass the training argument when calling it, as suggested by François Chollet:

    # ...
    new_dropout = Dropout(drp)(first_dropout_layer, training=True)
    # ...
    
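    For completeness, here is a minimal sketch of how model_2 from the question could be rebuilt this way, so that its new Dropout layers stay active in predict() (it assumes model_1, drp and test_data are defined as in the question):

    # reuse the trained layers of model_1, but pass training=True so the
    # new Dropout layers stay active at inference time
    input_2 = Input(batch_shape=(56, 3864))
    first_dense_layer = model_1.layers[1](input_2)
    first_dropout_layer = model_1.layers[2](first_dense_layer)
    new_dropout = Dropout(drp)(first_dropout_layer, training=True)
    snd_dense_layer = model_1.layers[3](new_dropout)
    snd_dropout_layer = model_1.layers[4](snd_dense_layer)
    new_dropout_2 = Dropout(drp)(snd_dropout_layer, training=True)
    output = model_1.layers[5](new_dropout_2)
    model_2 = Model(input_2, output)

    # repeated predictions on the same data should now differ,
    # because dropout is applied stochastically on every forward pass
    result_2a = model_2.predict(test_data, batch_size=56)
    result_2b = model_2.predict(test_data, batch_size=56)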

    Alternatively, if you have already trained your model and now want to run it in inference mode while keeping the Dropout layers (and possibly other layers that behave differently in training and inference, such as BatchNormalization) active, you can define a backend function that takes the model's inputs as well as the Keras learning phase:

    from keras import backend as K
    
    func = K.function(model.inputs + [K.learning_phase()], model.outputs)
    
    # to use it, pass 1 to set the learning phase to training mode
    outputs = func([input_arrays] + [1.])
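
    As a hypothetical usage example with the question's model_1 (assuming test_data is the array used for prediction above), calling such a function twice with the learning phase set to 1 should now produce two different, stochastic outputs:

    # build the backend function for model_1 specifically
    predict_with_dropout = K.function(model_1.inputs + [K.learning_phase()],
                                      model_1.outputs)

    # learning phase 1 -> Dropout stays active, so repeated calls differ
    out_a = predict_with_dropout([test_data, 1.])[0]
    out_b = predict_with_dropout([test_data, 1.])[0]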