Transfer learning, or inductive transfer, is a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. For example, knowledge gained while learning to recognize tigers could apply when trying to recognize lions.
There are two major transfer learning scenarios with neural networks:
- Finetuning the convnet: Instead of random initialization, we initialize the network with a pretrained network and then train all of its weights as usual on the new task (see the first sketch after this list).
- ConvNet as fixed feature extractor: Here, we freeze the weights of the entire network except those of the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained (see the second sketch after this list).
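As a rough illustration of the first scenario, here is a minimal PyTorch sketch of finetuning a pretrained convnet. It assumes a ResNet-18 backbone, a two-class target task, and an existing `train_loader` DataLoader; those choices are placeholders, not part of this repo's code.

```python
# Minimal finetuning sketch (assumes 2 target classes and a `train_loader`).
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load a network pretrained on ImageNet instead of random initialization.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the final fully connected layer to match the new task.
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, 2)

criterion = nn.CrossEntropyLoss()
# All parameters are passed to the optimizer, so the whole network is finetuned.
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

for inputs, labels in train_loader:  # hypothetical DataLoader
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
```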
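And a similar sketch for the second scenario, using the same assumed ResNet-18 backbone and two-class task: the backbone is frozen and only the new final layer receives gradient updates.

```python
# Fixed feature extractor sketch (same assumptions as above).
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every backbone parameter so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# The newly created fc layer has requires_grad=True by default,
# so only this layer is trained.
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
# Only the final layer's parameters are given to the optimizer.
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
```

The training loop is the same as in the finetuning sketch; the only differences are the frozen backbone and the smaller set of parameters handed to the optimizer.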
Using: PyTorch
Using: TensorFlow
Using: Git