conv-neural-network, normalization, bias-neuron

Remove bias from the convolution if the convolution is followed by a normalization layer



def __init__(self):
    super().__init__()

    self.conv = nn.Sequential(
        nn.Conv2d(32, 64, kernel_size=5, stride=2),
        nn.BatchNorm2d(64),
        nn.ReLU(),

        nn.Conv2d(64, 64, kernel_size=3, stride=2),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        
        nn.Conv2d(64, 64, kernel_size=3, stride=2),
        nn.BatchNorm2d(64),
        nn.ReLU(),

        nn.Conv2d(64, 64, kernel_size=3, stride=2),
        nn.BatchNorm2d(64),
        nn.ReLU(),

        nn.Conv2d(64, 64, kernel_size=3, stride=2),
        nn.BatchNorm2d(64),

        nn.AvgPool2d(kernel_size=2)  # kernel_size is required; 2 collapses the 2x2 feature map here
    )

    # helper (defined elsewhere) that returns the flattened size of self.conv's output
    conv_out_size = self._get_conv_out((32, 110, 110))

    self.fc = nn.Sequential(
        nn.Linear(conv_out_size, 1),
        nn.Sigmoid(),
    )

I have this model, and to my eyes everything looks fine. However, I am told that I have to remove the bias from a convolution if the convolution is followed by a normalization layer, because the normalization layer already contains a bias parameter. Can you explain why this is, and how I can do it?


Solution

  • Batch normalization computes gamma * normalize(x) + beta, and normalize(x) subtracts the per-channel mean. Any constant bias added by the preceding convolution is removed by that mean subtraction, so it has no effect on the output, and the batch-norm beta already plays the role of a bias.
    You can simply pass bias=False to your convolution layers to drop the redundant parameter (the default in PyTorch is bias=True); a short numerical check is sketched below.
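
Here is a minimal sketch (not from the original post) that demonstrates the cancellation numerically: two convolutions share the same weights, one with a bias and one without, and after BatchNorm2d in training mode (so the per-batch mean is subtracted) their outputs match. The layer sizes, the names conv_with_bias / conv_no_bias, and the input shape are arbitrary choices for the illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Two convolutions with identical weights; only one has a bias term.
conv_with_bias = nn.Conv2d(3, 8, kernel_size=3, bias=True)
conv_no_bias = nn.Conv2d(3, 8, kernel_size=3, bias=False)
with torch.no_grad():
    conv_no_bias.weight.copy_(conv_with_bias.weight)

bn = nn.BatchNorm2d(8)
bn.train()  # training mode: normalization uses the per-batch mean and variance

x = torch.randn(4, 3, 32, 32)
out_with_bias = bn(conv_with_bias(x))
out_no_bias = bn(conv_no_bias(x))

# The per-channel mean subtraction removes the constant bias, so the two
# outputs agree up to floating-point error.
print(torch.allclose(out_with_bias, out_no_bias, atol=1e-5))  # True

Applied to the model above, the change is simply to construct each convolution with bias=False, e.g. nn.Conv2d(32, 64, kernel_size=5, stride=2, bias=False); the BatchNorm2d(64) that follows supplies the learnable per-channel shift (beta) instead.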