I am trying to figure out what my CNN learns after every activation layer. Therefore, I have written some code to visualize a few activation layers in my model. I used LeakyReLU as my activation layer. This is the figure (LeakyReLU after Conv2d + BatchNorm):
As can be seen from the figure, there are quite a few purple frames that show nothing. So my question is: what does this mean? Is my model learning anything?
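For context, the visualization was produced roughly like this (a minimal sketch, assuming PyTorch; the model, layer index, and input here are placeholders for my actual network):

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Toy stand-in for the real network: Conv2d + BatchNorm + LeakyReLU
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.LeakyReLU(0.01),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Capture the output of the LeakyReLU layer (index 2 in this toy model)
model[2].register_forward_hook(save_activation("leaky_relu"))

x = torch.randn(1, 3, 32, 32)   # dummy input; replace with a real image
model(x)

fmaps = activations["leaky_relu"][0]        # shape: (channels, H, W)
fig, axes = plt.subplots(2, 4, figsize=(8, 4))
for ax, fmap in zip(axes.flat, fmaps):
    ax.imshow(fmap, cmap="viridis")         # near-constant maps look purple in viridis
    ax.axis("off")
plt.show()
```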
Generally speaking, activation layers (AL) don't learn. The purpose of an AL is to add non-linearity to the model, so it usually applies a fixed function regardless of the data, without adapting to it. As an example:
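LeakyReLU maps x to x for x >= 0 and to αx for x < 0, where α is a small, fixed slope (0.01 by default in PyTorch). Nothing in that mapping is trained. A minimal check, assuming PyTorch's `nn.LeakyReLU`:

```python
import torch
import torch.nn as nn

act = nn.LeakyReLU(negative_slope=0.01)  # fixed slope, not learned
print(list(act.parameters()))            # [] -> no trainable parameters
                                         # (contrast with nn.PReLU, whose slope is learned)

x = torch.tensor([-2.0, -0.5, 0.0, 1.0, 3.0])
print(act(x))  # tensor([-0.0200, -0.0050, 0.0000, 1.0000, 3.0000])
```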
I tried to simplify the math, so pardon my inaccuracies. In closing, your purple frames are probably filters that haven't learned anything yet. Train the model to convergence, and unless your model is highly bloated (too big for your data), you will see 'structures' in your filters.
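If you want a quick sanity check while training, you could count how many of your activation maps are nearly constant and watch that number drop; a rough sketch (the tensor name and threshold are placeholders):

```python
import torch

# `fmaps` stands for one sample's activation maps, shape (channels, H, W);
# replace it with the tensor you are plotting.
fmaps = torch.randn(8, 32, 32)

per_map_std = fmaps.flatten(1).std(dim=1)      # spread of each feature map
blank = (per_map_std < 1e-3).sum().item()      # nearly-constant ("blank") maps
print(f"{blank}/{fmaps.shape[0]} feature maps are nearly constant")
```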