Performance improvement of densely connected convolutional neural network by using exponentially increasing feature dimension

DC Field: Value
dc.contributor.advisor: Ro, Yong Man
dc.contributor.advisor: 노용만
dc.contributor.author: Ham, Seong-Wook
dc.date.accessioned: 2019-09-04T02:41:14Z
dc.date.available: 2019-09-04T02:41:14Z
dc.date.issued: 2018
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734069&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/266761
dc.description: Thesis (Master's), 한국과학기술원 (KAIST), School of Electrical Engineering, 2018.2, [iii, 26 p.]
dc.description.abstract: Object recognition and classification is a basic and essential part of computer vision. With the emergence of deep convolutional neural networks, object recognition and classification tasks have seen great success. However, classification on datasets with many classes (over 100), such as ImageNet-1K and Cifar100, remains challenging. Many approaches have been proposed to improve performance on such datasets: regularization techniques [11,12,13,14], structural approaches [1,2,3,4,5,6,7], data augmentation [15,16], and new activation functions [17,18,19]. In this paper I take a structural approach. In recent years many architectures have been proposed: LeNet [6], AlexNet [7], GoogLeNet [5], Residual Networks (ResNet) [3], Pyramidal Residual Networks (Pyramid ResNet) [1], and Densely Connected Convolutional Networks (DenseNet) [2]. Among these, Pyramid ResNet and DenseNet are the current state of the art. In particular, DenseNet is efficient in both number of parameters and computational cost. Nevertheless, it has recently become known that DenseNet still contains inefficiencies. In this paper I focus on these inefficiencies and investigate the problems caused by the linearly increasing input feature dimension in DenseNet. To address this, I propose a structural modification that increases the input feature dimension of units exponentially as the unit index increases (see the sketch below). Experiments on Cifar100 show that the proposed structure achieves the same or higher recognition accuracy than Pyramid ResNet and DenseNet with an identical number of parameters (0.8M and 1.7M) at 1/2 to 1/3 of the computational cost.
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: deep convolutional neural network; object recognition; object classification; CNN; Residual Network; Densely Connected Convolutional Neural Network; Pyramidal Residual Network; Cifar100
dc.subject: 딥 컨벌루션 신경망; 물체인식; 물체분류; 시엔엔; 레지듀얼 네트워크; 덴스넷; 피라미달 레지듀얼 네트워크; 사이파100
dc.title: Performance improvement of densely connected convolutional neural network by using exponentially increasing feature dimension
dc.title.alternative: 지수적인 차원의 증가를 통한 Densely connected convolutional neural network에서의 성능 향상
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST), School of Electrical Engineering
dc.contributor.alternativeauthor: 함성욱
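
The abstract above contrasts DenseNet's linearly growing input feature dimension (a constant number of new feature maps per unit) with the proposed exponential growth. The following is a minimal sketch of that idea, not code from the thesis: it assumes a PyTorch-style implementation, and the module names (`DenseUnit`, `ExpDenseBlock`), the growth factor `r`, and all layer details are hypothetical illustrations of how per-unit growth could be chosen so that the concatenated feature dimension follows a geometric progression.

```python
# Sketch only: hypothetical DenseNet-style block whose concatenated input
# feature dimension grows exponentially with the unit index instead of
# linearly (constant growth rate). Not taken from the thesis.
import torch
import torch.nn as nn


class DenseUnit(nn.Module):
    """BN-ReLU-Conv unit that appends `growth` new feature maps."""

    def __init__(self, in_channels: int, growth: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth, kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(torch.relu(self.bn(x)))
        # Dense connectivity: concatenate new features with all previous ones.
        return torch.cat([x, out], dim=1)


class ExpDenseBlock(nn.Module):
    """Dense block whose input feature dimension follows roughly c0 * r**i at unit i."""

    def __init__(self, in_channels: int, num_units: int, r: float = 1.3):
        super().__init__()
        units = []
        channels = in_channels
        for i in range(num_units):
            # Target input dimension of the next unit is about in_channels * r**(i+1),
            # so this unit adds the difference as new feature maps (at least 1).
            target = max(channels + 1, int(round(in_channels * r ** (i + 1))))
            growth = target - channels
            units.append(DenseUnit(channels, growth))
            channels = target
        self.units = nn.Sequential(*units)
        self.out_channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.units(x)


if __name__ == "__main__":
    block = ExpDenseBlock(in_channels=24, num_units=6, r=1.3)
    y = block(torch.randn(2, 24, 32, 32))
    print(y.shape, block.out_channels)  # channel count grows roughly geometrically
```

With `r = 1.3` and 24 input channels, the concatenated dimension grows roughly as 24, 31, 41, 53, ... per unit, whereas a standard DenseNet with constant growth rate k would give 24, 24+k, 24+2k, ...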
Appears in Collection: EE-Theses_Master (석사논문)
Files in This Item: There are no files associated with this item.
