DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Jung Guk | ko |
dc.contributor.author | Jo, Sung-Ho | ko |
dc.date.accessioned | 2019-09-17T02:20:07Z | - |
dc.date.available | 2019-09-17T02:20:07Z | - |
dc.date.created | 2018-12-11 | - |
dc.date.issued | 2019-09 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, v.30, no.9, pp.2866 - 2875 | - |
dc.identifier.issn | 2162-237X | - |
dc.identifier.uri | http://hdl.handle.net/10203/267482 | - |
dc.description.abstract | This paper determines the weight decay parameter value of a deep convolutional neural network (CNN) that yields good generalization. To obtain such a CNN in practice, numerical trials with different weight decay values are needed. However, the larger the CNN architecture, the higher the computational cost of the trials. To address this problem, this paper formulates an analytical solution for the decay parameter through a proposed objective function in conjunction with Bayesian probability distributions. For computational efficiency, a novel method to approximate this solution is suggested. This method uses a small amount of information in the Hessian matrix. Theoretically, the approximate solution is guaranteed by a provable bound and is obtained by a proposed algorithm, whose time complexity is linear in both the depth and width of the CNN. The bound provides a consistent result for the proposed learning scheme. By reducing the computational cost of determining the decay value, the approximation allows for the fast investigation of a deep CNN (DCNN) that yields a small generalization error. Experimental results show that our assumption, verified with different DCNNs, is suitable for real-world image data sets. In addition, the proposed method significantly reduces the time cost of learning with setting the weight decay parameter while achieving good classification performance. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Bayesian Weight Decay on Bounded Approximation for Deep Convolutional Neural Networks | - |
dc.type | Article | - |
dc.identifier.wosid | 000482589400024 | - |
dc.identifier.scopusid | 2-s2.0-85071483226 | - |
dc.type.rims | ART | - |
dc.citation.volume | 30 | - |
dc.citation.issue | 9 | - |
dc.citation.beginningpage | 2866 | - |
dc.citation.endingpage | 2875 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS | - |
dc.identifier.doi | 10.1109/TNNLS.2018.2886995 | - |
dc.contributor.localauthor | Jo, Sung-Ho | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Bayesian method | - |
dc.subject.keywordAuthor | convolutional neural networks (CNNs) | - |
dc.subject.keywordAuthor | inverse Hessian matrix | - |
dc.subject.keywordAuthor | regularization | - |
dc.subject.keywordAuthor | weight decay | - |
dc.subject.keywordPlus | MACHINE | - |