Abstract—In this article, we propose a transfer learning method for deep neural networks (DNNs). Deep learning has been widely used in many applications such as image classification and object detection. However, deep learning methods are hard to apply when a large amount of training data is unavailable. To tackle this problem, we propose a new method that re-uses all parameters of a DNN trained on source images. Our proposed method first trains the DNN to solve the source task. Second, we evaluate the relation between the source labels and the target labels. To evaluate this relation, we use the output values produced when the target images are fed into the DNN trained on the source images. From these output values, we compute the probability of each target label. After computing the probabilities, we select the output variable at the peak of each probability as the most related source label. We then tune all parameters so that the selected variables respond as the output variables of the target labels. Experimental results using the MNIST (source) and X-ray CT image (target) datasets show that our proposed method can improve classification performance.
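The label-matching step described in the abstract (feeding target images into the source-trained DNN, averaging its outputs per target label, and picking the peak source output as the most related source label) can be sketched as follows. This is a minimal illustration, not the authors' code: the random logits stand in for the outputs of a real source-trained DNN, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the DNN outputs: 100 target images fed
# through a network trained on 10 source labels.
n_images, n_source = 100, 10
logits = rng.normal(size=(n_images, n_source))
# Ground-truth target labels for the same images (3 target classes).
target_labels = rng.integers(0, 3, size=n_images)

def softmax(x):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(logits)

# For each target label, average the source-output probabilities over
# its images and take the peak as the most related source output unit.
mapping = {}
for t in np.unique(target_labels):
    mean_prob = probs[target_labels == t].mean(axis=0)
    mapping[int(t)] = int(mean_prob.argmax())

print(mapping)
```

After this mapping is fixed, all network parameters would be fine-tuned so that the selected source output units respond as the output units for the corresponding target labels.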
Index Terms—Deep learning, deep neural network, deep Boltzmann machine, stacked autoencoders, transfer learning, computer aided diagnosis.
The authors are with Panasonic Corporation, Japan (e-mail: firstname.lastname@example.org, email@example.com).
Cite: Yoshihide Sawada and Kazuki Kozuka, "Whole Layers Transfer Learning of Deep Neural Networks for a Small Scale Dataset," International Journal of Machine Learning and Computing, vol. 6, no. 1, pp. 27-31, 2016.