Controlled Dropout: a Different Dropout for Improving Training Speed on Deep Neural Network

Cited 13 times in Web of Science; cited 0 times in Scopus
Abstract
Dropout is a technique widely used to prevent overfitting while training deep neural networks. However, applying dropout to a neural network typically increases the training time. This paper proposes a different dropout approach, called controlled dropout, that improves training speed by dropping units in a column-wise or row-wise manner on the matrices. In controlled dropout, the network is trained using compressed matrices of smaller size, which yields a notable improvement in training speed. In experiments on feed-forward neural networks for the MNIST data set and convolutional neural networks for the CIFAR-10 and SVHN data sets, our proposed method achieves faster training than conventional dropout on both CPU and GPU, while exhibiting the same regularization performance. Moreover, the improvement in training speed grows as the number of fully-connected layers increases. Because training a neural network is an iterative process of forward propagation and backpropagation, the speed improvement from controlled dropout translates into a significantly reduced overall training time.
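To make the idea concrete, the following minimal NumPy sketch (an illustration only, not the authors' implementation; the function names, inverted-dropout scaling, and layer shapes are assumptions) contrasts conventional element-wise masking with a column-wise step that carries only compressed sub-matrices through the layer:

    import numpy as np

    def conventional_dropout(h, p):
        # Standard (inverted) dropout: multiply by a random element-wise mask.
        # The matrices keep their full size, so the layer's matrix multiply
        # still runs over the dropped units.
        mask = (np.random.rand(*h.shape) > p) / (1.0 - p)
        return h * mask

    def controlled_dropout_forward(h, W, b, p):
        # Column-wise dropout in the spirit of the abstract: select the
        # surviving units once per mini-batch and keep only the matching
        # columns of the activations and rows of the weights, so the
        # matrix multiply itself shrinks.
        keep = np.random.rand(h.shape[1]) > p      # surviving input units
        h_small = h[:, keep] / (1.0 - p)           # compressed activations
        W_small = W[keep, :]                       # matching rows of the weights
        return h_small @ W_small + b               # smaller GEMM -> faster step

    # Example: a 128-example mini-batch through a 1024 -> 512 layer with p = 0.5
    h = np.random.randn(128, 1024)
    W = np.random.randn(1024, 512)
    b = np.zeros(512)
    out = controlled_dropout_forward(h, W, b, 0.5)  # roughly half-size multiply

Because units are dropped per column (or row) of the matrices rather than per element, the dropped rows and columns can be physically removed before the multiplication, which is the source of the speed-up the abstract reports on both CPU and GPU.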
Publisher
IEEE
Issue Date
2017-10-06
Language
English
Citation

IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 972-977

ISSN
1062-922X
DOI
10.1109/SMC.2017.8122736
URI
http://hdl.handle.net/10203/237933
Appears in Collection
CS-Conference Papers (Conference Papers)