Bayesian Weight Decay on Bounded Approximation for Deep Convolutional Neural Networks

Cited 9 times in Web of Science; cited 9 times in Scopus
DC Field | Value | Language
dc.contributor.author | Park, Jung Guk | ko
dc.contributor.author | Jo, Sung-Ho | ko
dc.date.accessioned | 2019-09-17T02:20:07Z | -
dc.date.available | 2019-09-17T02:20:07Z | -
dc.date.created | 2018-12-11 | -
dc.date.issued | 2019-09 | -
dc.identifier.citation | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, v.30, no.9, pp.2866 - 2875 | -
dc.identifier.issn | 2162-237X | -
dc.identifier.uri | http://hdl.handle.net/10203/267482 | -
dc.description.abstract | This paper determines the weight decay parameter value of a deep convolutional neural network (CNN) that yields a good generalization. To obtain such a CNN in practice, numerical trials with different weight decay values are needed. However, the larger the CNN architecture is, the higher the computational cost of the trials. To address this problem, this paper formulates an analytical solution for the decay parameter through a proposed objective function in conjunction with Bayesian probability distributions. For computational efficiency, a novel method to approximate this solution is suggested. This method uses a small amount of information in the Hessian matrix. Theoretically, the approximate solution is guaranteed by a provable bound and is obtained by a proposed algorithm whose time complexity is linear in both the depth and width of the CNN. The bound provides a consistent result for the proposed learning scheme. By reducing the computational cost of determining the decay value, the approximation allows for the fast investigation of a deep CNN (DCNN) that yields a small generalization error. Experimental results show that our assumption, verified with different DCNNs, is suitable for real-world image data sets. In addition, the proposed method significantly reduces the time cost of setting the weight decay parameter during learning while achieving good classification performance. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Bayesian Weight Decay on Bounded Approximation for Deep Convolutional Neural Networks | -
dc.type | Article | -
dc.identifier.wosid | 000482589400024 | -
dc.identifier.scopusid | 2-s2.0-85071483226 | -
dc.type.rims | ART | -
dc.citation.volume | 30 | -
dc.citation.issue | 9 | -
dc.citation.beginningpage | 2866 | -
dc.citation.endingpage | 2875 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS | -
dc.identifier.doi | 10.1109/TNNLS.2018.2886995 | -
dc.contributor.localauthor | Jo, Sung-Ho | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Bayesian method | -
dc.subject.keywordAuthor | convolutional neural networks (CNNs) | -
dc.subject.keywordAuthor | inverse Hessian matrix | -
dc.subject.keywordAuthor | regularization | -
dc.subject.keywordAuthor | weight decay | -
dc.subject.keywordPlus | MACHINE | -
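
The abstract above describes setting the weight decay parameter from Bayesian quantities using only a small amount of Hessian information, with a cost that is linear in the network size. The paper's own objective, bound, and algorithm are not reproduced here; the sketch below is only a rough illustration of the general idea, using a classic MacKay-style Bayesian evidence update with a diagonal Hessian approximation. The names estimate_weight_decay and hess_diag are hypothetical.

import numpy as np

def estimate_weight_decay(weights, hess_diag, alpha=1e-2, n_iters=20):
    """Illustrative sketch (not the paper's algorithm): re-estimate the weight
    decay parameter alpha with a MacKay-style evidence update.

    weights   : flattened network weights, shape (P,)
    hess_diag : diagonal of the data-term (e.g. Gauss-Newton) Hessian, shape (P,)
    """
    w2 = np.sum(weights ** 2)
    for _ in range(n_iters):
        # Effective number of well-determined parameters, computed from the
        # diagonal curvature only -- cost is linear in the number of weights.
        gamma = np.sum(hess_diag / (hess_diag + alpha))
        # Evidence-style update of the decay value.
        alpha = gamma / (w2 + 1e-12)
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=10_000)            # stand-in for flattened CNN weights
    h = rng.gamma(shape=2.0, scale=5.0, size=w.size)  # stand-in diagonal curvature values
    print("estimated weight decay:", estimate_weight_decay(w, h))

Because the update touches each weight only once per iteration, its cost grows linearly with the number of parameters, which is the kind of scaling the abstract claims for the proposed bounded approximation.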
Appears in Collection
CS-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.