DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Junmo | - |
dc.contributor.advisor | 김준모 | - |
dc.contributor.author | Joo, Donggyu | - |
dc.date.accessioned | 2019-09-04T02:41:01Z | - |
dc.date.available | 2019-09-04T02:41:01Z | - |
dc.date.issued | 2018 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734057&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/266748 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering, 2018.2, [iii, 22 p.] | - |
dc.description.abstract | A convolutional neural network (CNN) is mainly composed of convolution, pooling, and non-linear activation layers. Nowadays, almost all networks use only $2 \times 2$ max pooling or convolution layers with a stride of 2 for down-sampling. This technique is known to extract good features, but it also imposes the constraint that the feature map size is always halved. In this work, we propose a simple new sampling technique, which we call non-integer strided sampling (NSS), that allows the feature map size to change freely rather than always being reduced to half. Using this NSS layer, we design a new type of network architecture, GradualNet, in which the feature map size changes more gradually than in existing networks. Our results show that NSS can improve the performance of networks without adding parameters. In particular, it achieves a 1.82% accuracy improvement on CIFAR-100 without data augmentation compared to the baseline ResNet. Moreover, we propose other interesting possibilities for CNN architectures based on the NSS layer. The results reveal that previous networks have been stuck in a stereotype, and this could be an important discovery in CNN architecture with the potential to resolve that stereotype. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Deep learning▼amachine learning▼aconvolutional neural network▼aneural network▼asampling▼afeature map size▼aimage classification | - |
dc.subject | 딥 러닝▼a머신 러닝▼a컨볼루셔널 뉴럴 네트워크▼a뉴럴 네트워크▼a샘플링▼a특징 맵 크기▼a이미지 분류 | - |
dc.title | Gradual Net | - |
dc.title.alternative | 비정수 간격 샘플링을 이용한 딥 뉴럴 네트워크의 제한 없는 특징 맵 크기 조절에 관한 연구 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 주동규 | - |
dc.title.subtitle | Unconstrained control of feature map size using non-integer strided sampling | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
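The abstract above describes non-integer strided sampling (NSS), which decouples the down-sampling ratio from the usual factor of 2. The thesis record does not spell out the implementation, so the following is only a minimal sketch under one plausible interpretation: sample positions are spaced by a non-integer stride and rounded down to the nearest index, so that an input of size $H$ yields an output of size $\lceil H / s \rceil$ for any stride $s \ge 1$ (e.g. $s = 1.5$) instead of being forced to $H / 2$. The function name `nss_downsample` is hypothetical and not from the thesis.

```python
import numpy as np

def nss_downsample(x, stride):
    """Illustrative non-integer strided sampling on a 2-D feature map.

    x      : 2-D array of shape (H, W).
    stride : float >= 1; with stride=2.0 this reduces to ordinary
             strided sampling (halving), while e.g. stride=1.5 gives a
             gentler size reduction of ceil(H / 1.5).
    """
    h, w = x.shape
    out_h = int(np.ceil(h / stride))
    out_w = int(np.ceil(w / stride))
    # Sample at non-integer positions i * stride, rounded down to
    # the nearest valid integer index (nearest-index interpretation).
    rows = np.floor(np.arange(out_h) * stride).astype(int)
    cols = np.floor(np.arange(out_w) * stride).astype(int)
    return x[np.ix_(rows, cols)]

# A 6x6 map shrinks to 4x4 with stride 1.5, versus 3x3 with stride 2.
fmap = np.arange(36).reshape(6, 6)
print(nss_downsample(fmap, 1.5).shape)  # (4, 4)
print(nss_downsample(fmap, 2.0).shape)  # (3, 3)
```

A learned NSS layer would presumably combine such fractional-stride sampling with convolution or interpolation weights; this sketch only shows how a non-integer stride yields intermediate feature map sizes between "unchanged" and "halved".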