Gradual Net: Unconstrained control of feature map size using non-integer strided sampling

DC Field: Value (Language)
dc.contributor.advisor: Kim, Junmo
dc.contributor.advisor: 김준모 (Kim, Junmo)
dc.contributor.author: Joo, Donggyu
dc.date.accessioned: 2019-09-04T02:41:01Z
dc.date.available: 2019-09-04T02:41:01Z
dc.date.issued: 2018
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734057&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/266748
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2018.2, [iii, 22 p.]
dc.description.abstract: A convolutional neural network (CNN) is composed mainly of convolution, pooling, and non-linear activation layers. Nowadays, almost all networks down-sample using only $2 \times 2$ max pooling or convolution layers with a stride of 2. This technique is known to extract good features, but it also imposes the constraint that the feature map size is always reduced dramatically, by half. In this work, we propose a simple new sampling technique, which we call non-integer strided sampling (NSS), that enables free control of the feature map size, so that it is not always halved. Using this NSS layer, we design a new type of network architecture, GradualNet, in which the feature map size changes more smoothly than in existing networks. Our results show that NSS can improve the performance of networks without adding parameters. In particular, it yields a 1.82% accuracy improvement on CIFAR-100 without data augmentation compared to the baseline ResNet. Moreover, we propose other interesting possibilities for CNN architectures based on the NSS layer. These results reveal that previous networks have been stuck in a stereotype, and this could be an important discovery for CNN architecture, with the potential to resolve that stereotype.
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Deep learning; machine learning; convolutional neural network; neural network; sampling; feature map size; image classification
dc.subject: 딥 러닝; 머신 러닝; 컨볼루셔널 뉴럴 네트워크; 뉴럴 네트워크; 샘플링; 특징 맵 크기; 이미지 분류
dc.title: Gradual Net
dc.title.alternative: 비정수 간격 샘플링을 이용한 딥 뉴럴 네트워크의 제한 없는 특징 맵 크기 조절에 관한 연구 (A study on unconstrained feature map size control in deep neural networks using non-integer strided sampling)
dc.type: Thesis (Master's)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
dc.contributor.alternativeauthor: 주동규 (Joo, Donggyu)
dc.title.subtitle: Unconstrained control of feature map size using non-integer strided sampling
Appears in Collection: EE-Theses_Master (Master's theses)
Files in This Item: There are no files associated with this item.
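The abstract describes non-integer strided sampling (NSS) only at a high level, and no thesis file is attached to this record. As a rough illustration of how a non-integer stride can produce feature map sizes other than exactly half, here is a minimal NumPy sketch; the function name `nss_downsample` and the nearest-index sampling rule are assumptions for illustration, not the thesis's actual operator.

```python
import numpy as np

def nss_downsample(x, stride):
    """Sketch of non-integer strided sampling (NSS) on a 2-D feature map.

    Sample positions are spaced `stride` apart (stride may be non-integer,
    e.g. 1.5) and snapped to the nearest lower integer index, so the output
    size is floor(input_size / stride) rather than always input_size / 2.
    This is one plausible reading of "non-integer strided sampling".
    """
    h, w = x.shape
    out_h = int(np.floor(h / stride))
    out_w = int(np.floor(w / stride))
    rows = np.floor(np.arange(out_h) * stride).astype(int)
    cols = np.floor(np.arange(out_w) * stride).astype(int)
    return x[np.ix_(rows, cols)]

# A stride of 1.5 shrinks a 32x32 map to 21x21 instead of halving it to 16x16.
fmap = np.arange(32 * 32, dtype=float).reshape(32, 32)
out = nss_downsample(fmap, 1.5)
```

With `stride=2.0` this degenerates to the usual halving that standard pooling enforces; any value between 1 and 2 yields an intermediate size, which is the kind of gradual feature map shrinkage the abstract attributes to GradualNet.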
