tensorflow, machine-learning, deep-learning, neural-network, transfer-learning

What is freezing/unfreezing a layer in neural networks?


I have been playing around with neural networks for quite a while now. While reading about transfer learning, I recently came across the terms freezing and unfreezing layers before training a neural network, and I am struggling to understand their usage.


Solution

  • I would just add to the other answer that this is most commonly used with CNNs, and the number of layers you want to freeze (i.e., not train) depends on how similar the task you are solving is to the original one (the task the pretrained network was trained on).

    If the tasks are very similar, say you are using a CNN pretrained on ImageNet and you just want it to recognize some additional "general" objects, then you might get away with training just the dense top of the network (see the first sketch after this answer).

    The more dissimilar the tasks are, the more layers of the original network you will need to unfreeze during training (see the second sketch below).
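
Here is a minimal Keras sketch of the "freeze the base, train only a new dense top" approach. The choice of ResNet50, the input shape, and names like `num_classes` and `train_ds` are assumptions for illustration, not anything prescribed by the answer above:

```python
import tensorflow as tf

num_classes = 10  # assumed number of classes in the new task

# Pretrained ImageNet base without its original classification head
base = tf.keras.applications.ResNet50(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False  # "freeze": the base's weights are not updated

# New "dense top" for the new task; only these layers will be trained
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, epochs=5)  # train_ds is a hypothetical tf.data.Dataset
```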
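
And a sketch of partial unfreezing for a more dissimilar task, continuing from the model above. The cutoff of 30 layers is an arbitrary illustrative choice; in practice it depends on task similarity, as the answer explains:

```python
# Unfreeze the last layers of the base for fine-tuning, keeping the
# earlier (more generic, low-level) layers frozen
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False

# Recompile after changing trainable flags; a lower learning rate
# nudges the pretrained weights instead of overwriting them
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, epochs=5)  # dense top + unfrozen base layers train
```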