python · deep-learning · neural-network · pytorch · dropout

Pytorch: nn.Dropout vs. F.dropout


There are two ways to perform dropout:

  • torch.nn.Dropout
  • torch.nn.functional.dropout

I ask:

  • Is there a difference between them?
  • When should I use one over the other?

I don't see any performance difference when I switch them around.


Solution

  • The technical differences have already been shown in the other answer. However, the main difference is that nn.Dropout is a torch Module itself, which bears some convenience:

    A short example for illustration of some differences:

    import torch
    import torch.nn as nn
    
    class Model1(nn.Module):
        # Model 1 using functional dropout
        def __init__(self, p=0.0):
            super().__init__()
            self.p = p
    
        def forward(self, inputs):
            # training=True is hard-coded here, so dropout stays active
            # no matter whether the model is in train or eval mode
            return nn.functional.dropout(inputs, p=self.p, training=True)
    
    class Model2(nn.Module):
        # Model 2 using dropout module
        def __init__(self, p=0.0):
            super().__init__()
            self.drop_layer = nn.Dropout(p=p)
    
        def forward(self, inputs):
            # nn.Dropout checks self.training automatically
            return self.drop_layer(inputs)
    
    model1 = Model1(p=0.5)  # functional dropout
    model2 = Model2(p=0.5)  # dropout module
    
    # creating inputs
    inputs = torch.rand(10)
    # forwarding inputs in train mode
    print('Normal (train) model:')
    print('Model 1', model1(inputs))
    print('Model 2', model2(inputs))
    print()
    
    # switching to eval mode
    model1.eval()
    model2.eval()
    
    # forwarding inputs in evaluation mode
    print('Evaluation mode:')
    print('Model 1', model1(inputs))
    print('Model 2', model2(inputs))
    # show model summary
    print('Print summary:')
    print(model1)
    print(model2)
    

    Output:

    Normal (train) model:
    Model 1 tensor([ 1.5040,  0.0000,  0.0000,  0.8563,  0.0000,  0.0000,  1.5951,
             0.0000,  0.0000,  0.0946])
    Model 2 tensor([ 0.0000,  0.3713,  1.9303,  0.0000,  0.0000,  0.3574,  0.0000,
             1.1273,  1.5818,  0.0946])
    
    Evaluation mode:
    Model 1 tensor([ 0.0000,  0.3713,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,
             0.0000,  0.0000,  0.0000])
    Model 2 tensor([ 0.7520,  0.1857,  0.9651,  0.4281,  0.7883,  0.1787,  0.7975,
             0.5636,  0.7909,  0.0473])
    Print summary:
    Model1()
    Model2(
      (drop_layer): Dropout(p=0.5)
    )
    

    So which should I use?

    Both are completely equivalent in terms of applying dropout, and even though the differences in usage are not that big, there are some reasons to favour nn.Dropout over nn.functional.dropout:

    Dropout is designed to be applied only during training, so when doing predictions or evaluating the model, you want dropout to be turned off.

    The dropout module nn.Dropout conveniently handles this and shuts dropout off as soon as your model enters evaluation mode, while functional dropout does not care about the evaluation / prediction mode.

    Even though you can set functional dropout to training=False to turn it off, it is still not as convenient a solution as nn.Dropout, as the sketch below shows.
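    For illustration, a minimal sketch of that workaround (a hypothetical Model3, not part of the original example): the functional call has to be wired to the module's own self.training flag by hand.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Model3(nn.Module):
        # Hypothetical variant: functional dropout that respects eval mode
        def __init__(self, p=0.0):
            super().__init__()
            self.p = p

        def forward(self, inputs):
            # self.training is flipped by model.train() / model.eval(),
            # so dropout is correctly disabled during evaluation
            return F.dropout(inputs, p=self.p, training=self.training)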

    Also, the drop rate is stored in the module, so you don't have to save it in an extra variable. In larger networks you might want to create different dropout layers with different drop rates; here nn.Dropout can increase readability and also bears some convenience when using the layers multiple times, as in the sketch below.
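    As an illustration (a hypothetical network; layer sizes and drop rates are made up for the sketch), the drop rates live inside the modules instead of loose variables:

    import torch
    import torch.nn as nn

    class BigModel(nn.Module):
        # Hypothetical network with a different drop rate per stage
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(128, 64)
            self.drop1 = nn.Dropout(p=0.2)  # lighter dropout early on
            self.fc2 = nn.Linear(64, 32)
            self.drop2 = nn.Dropout(p=0.5)  # heavier dropout deeper in
            self.out = nn.Linear(32, 10)

        def forward(self, x):
            x = self.drop1(torch.relu(self.fc1(x)))
            x = self.drop2(torch.relu(self.fc2(x)))
            return self.out(x)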

    Finally, all modules which are assigned to your model are registered in your model. So your model class keeps track of them, which is why you can turn off the dropout module just by calling eval(). When using functional dropout your model is not aware of it, and thus it won't appear in any summary.
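    A quick check of that registration, reusing Model2 from the example above:

    model2 = Model2(p=0.5)
    print(model2.drop_layer.training)  # True: submodules start in train mode
    model2.eval()                      # recursively switches registered submodules
    print(model2.drop_layer.training)  # False: dropout is now a no-op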