python machine-learning pytorch dropout

Why does nn.Dropout change the element values of a tensor?


I have a problem using the dropout layer. In my understanding, the input to nn.Dropout can be a tensor, and nn.Dropout randomly sets some of its elements to zero with a given probability. But see my code:

import torch
import torch.nn as nn

dropout = nn.Dropout(0.1)      # each element should be zeroed with probability p=0.1
y = torch.tensor([5.0, 7.0, 9.0])
y = dropout(y)
print(y)

The output is tensor([ 5.5556,  7.7778, 10.0000]).

I tried it many times, and sometimes an element is zero. But the other elements always change to the same fixed values (5.0 -> 5.5556, 7.0 -> 7.7778, 9.0 -> 10.0000).

Why does this happen?


Solution

  • You can find this in the documentation:

    Furthermore, the outputs are scaled by a factor of 1/(1-p) during training. This means that during evaluation the module simply computes an identity function.

    So in your example, the elements that are not dropped are scaled by 1/(1 - 0.1) ≈ 1.1111.

    For confirmation, the code below reproduces the output you observed:

    import torch
    
    y = torch.tensor([5.0,7.0,9.0])
    print(y*(1/(1-0.1))) # You used p=0.1
    
    >>> tensor([ 5.5556,  7.7778, 10.0000])
    

    The dropout layer is only active during the training stage. If it did not scale the surviving inputs this way, the activations the network sees at evaluation time (when dropout is an identity) would be 'exaggerated' compared with the attenuated activations it was trained on.
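    You can check both points directly. Below is a minimal sketch (not from the original post) showing that nn.Dropout is the identity in eval mode, and that in training mode the 1/(1-p) scaling keeps the expected value of each element close to the input:

    import torch
    import torch.nn as nn
    
    torch.manual_seed(0)
    dropout = nn.Dropout(0.1)
    y = torch.tensor([5.0, 7.0, 9.0])
    
    # Evaluation mode: nothing is zeroed and no scaling is applied.
    dropout.eval()
    print(dropout(y))            # tensor([5., 7., 9.])
    
    # Training mode: average many forward passes; the mean stays close to
    # the input because surviving elements are scaled by 1/(1-p).
    dropout.train()
    samples = torch.stack([dropout(y) for _ in range(10000)])
    print(samples.mean(dim=0))   # roughly tensor([5., 7., 9.])

    In a full model you would normally toggle these modes with model.train() and model.eval() on the whole module rather than on the dropout layer directly.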

    I think you can also refer to posts like this.