Dropout neural networks using empirical Bayes

Regularization is important when training deep neural networks, because their high flexibility and huge number of parameters let them overfit large-scale data across many domains. Dropout is one of the most popular regularization techniques in this field. Bayesian neural networks offer theoretically grounded regularization by marginalizing over the distribution of parameters instead of averaging models, but intractable integrals and sampling costs are obstacles to the Bayesian approach; recent studies have proposed efficient probabilistic variational inference methods to address this. In this thesis, we propose a dropout neural network that uses a Bayesian approach to learn dropout rates adaptively from the data. We also propose an alternating algorithm based on a recursive empirical Bayes scheme, which learns the hyper-parameters from validation data; this avoids over-fitting to the training data and further improves performance. We report comparisons with other dropout methods.
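The abstract's two ingredients, dropout rates learned from data and an alternating scheme that fits those rates as hyper-parameters on validation data, can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes PyTorch and a concrete (Gumbel-style) relaxation of the dropout mask, and the names LearnableDropout and alternating_step are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableDropout(nn.Module):
    """Dropout whose rate p is a trainable parameter (assumed concrete relaxation)."""
    def __init__(self, init_p=0.5, temperature=0.1):
        super().__init__()
        # store p through a logit so it stays in (0, 1)
        self.p_logit = nn.Parameter(torch.logit(torch.tensor(init_p)))
        self.temperature = temperature

    @property
    def p(self):
        return torch.sigmoid(self.p_logit)

    def forward(self, x):
        if not self.training:
            return x  # inverted-dropout scaling below makes test time an identity
        eps = 1e-7
        u = torch.rand_like(x)
        # relaxed Bernoulli "drop" mask in (0, 1), differentiable w.r.t. p
        drop = torch.sigmoid(
            (torch.log(self.p + eps) - torch.log(1 - self.p + eps)
             + torch.log(u + eps) - torch.log(1 - u + eps)) / self.temperature)
        return x * (1.0 - drop) / (1.0 - self.p)

net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                    LearnableDropout(), nn.Linear(256, 10))

# separate the dropout-rate hyper-parameters from the network weights
drop_params = [m.p_logit for m in net.modules() if isinstance(m, LearnableDropout)]
weight_params = [p for p in net.parameters() if all(p is not q for q in drop_params)]
opt_w = torch.optim.Adam(weight_params, lr=1e-3)
opt_p = torch.optim.Adam(drop_params, lr=1e-2)

def alternating_step(train_batch, val_batch):
    # (1) fit the network weights on training data, dropout rates held fixed
    x, y = train_batch
    opt_w.zero_grad()
    F.cross_entropy(net(x), y).backward()
    opt_w.step()
    # (2) empirical-Bayes-flavored step: adapt the dropout rates
    #     (hyper-parameters) to held-out validation data
    xv, yv = val_batch
    opt_p.zero_grad()
    F.cross_entropy(net(xv), yv).backward()
    opt_p.step()

Updating the dropout rates on a held-out batch, rather than on the training loss, mirrors the abstract's point that tuning hyper-parameters against validation data discourages over-fitting to the training set.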
Advisors
Kim, Kee Eung
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2017
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2017.8, [iii, 25 p.]

Keywords

Dropout neural network; variational inference; empirical Bayes

URI
http://hdl.handle.net/10203/243454
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=718734&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
