Controlled Dropout for Improvement of Speed and Memory Efficiency in Deep Neural Network

Deep neural networks (DNNs), which show outstanding performance in various areas, consume considerable amounts of memory and time during training. Our research led us to propose a controlled dropout technique with the potential of reducing the memory space and training time of DNNs. Dropout is a popular algorithm that addresses the overfitting problem of DNNs by randomly dropping units during training. Unlike conventional dropout, the proposed controlled dropout intentionally chooses which units to drop, thereby possibly facilitating a reduction in training time and memory usage. In this paper, we focus on validating whether controlled dropout can replace the traditional dropout technique, enabling further research aimed at improving training speed and memory efficiency. A performance comparison between controlled dropout and traditional dropout is carried out through an image classification experiment on handwritten digits from the MNIST (Modified National Institute of Standards and Technology) dataset. The experimental results show that the proposed controlled dropout is as effective as traditional dropout. Furthermore, the results imply that controlled dropout is more efficient when an appropriate dropout rate and number of hidden layers are used.
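To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of how choosing the dropped units deterministically could shrink the actual matrix computation, in contrast to conventional dropout, which only zeroes out randomly selected units. The function names and the choice to keep a contiguous prefix of units are assumptions for illustration only.

```python
# Hypothetical sketch: standard dropout vs. a "controlled" variant that
# drops a predetermined block of units so the weight matrix can be sliced
# into a smaller dense matrix (saving compute and memory).
import numpy as np

rng = np.random.default_rng(0)

def standard_dropout(a, rate):
    """Conventional dropout: independent random mask, same-sized tensors."""
    mask = rng.random(a.shape) >= rate
    return a * mask / (1.0 - rate)          # inverted-dropout scaling

def controlled_forward(x, W, b, rate):
    """Illustrative controlled variant: keep the first k units and slice
    W and b accordingly, so the matrix multiply itself becomes smaller."""
    k = int(round(W.shape[1] * (1.0 - rate)))  # number of units kept
    W_kept, b_kept = W[:, :k], b[:k]           # smaller dense parameters
    return x @ W_kept + b_kept

# Toy layer: 8 inputs -> 16 hidden units, dropout rate 0.5
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16))
b = np.zeros(16)

full = standard_dropout(x @ W + b, rate=0.5)   # shape (4, 16), half zeroed
small = controlled_forward(x, W, b, rate=0.5)  # shape (4, 8), dense
print(full.shape, small.shape)
```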
Publisher
Korean Institute of Information Scientists and Engineers
Issue Date
2017-02-15
Language
English
Citation

The 4th IEEE International Conference on Big Data and Smart Computing (BigComp2017)

URI
http://hdl.handle.net/10203/222509
Appears in Collection
CS-Conference Papers(학술회의논문)