(A) kernel decomposition architecture for binary-weight convolutional neural networks

DC Field: Value
dc.contributor.advisor: Kim, Lee-Sup
dc.contributor.advisor: 김이섭
dc.contributor.author: Kim, Hyeonuk
dc.date.accessioned: 2018-06-20T06:21:32Z
dc.date.available: 2018-06-20T06:21:32Z
dc.date.issued: 2017
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=675372&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/243266
dc.description: Master's thesis - Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering, 2017.2, [v, 54 p.]
dc.description.abstract: The excellence of the Convolutional Neural Network (CNN) has been demonstrated by state-of-the-art performance in various vision applications, including object recognition and classification. Accordingly, many attempts are being made to embed CNNs into mobile devices. The binary-weight CNN is one of the most efficient solutions for mobile CNNs because of its drastically reduced parameter size. However, a large number of convolutions are still required to process each image, and these massive operations increase the devices' energy consumption, shortening battery lifetime. To address this problem, we propose a novel kernel decomposition architecture based on the observation that a large number of operations in a binary-weight CNN are redundant. In this architecture, all kernels are decomposed into two sub-kernels so that they share a common part; as a result, the number of operations per image is reduced to 52.3% by skipping the redundant operations. Furthermore, a low-cost bit quantization technique that exploits the relative scales of CNN feature data is implemented to increase energy efficiency. Experimental results show a 22% reduction in computing energy and a 72% reduction in memory-access energy with negligible accuracy loss.
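The abstract describes the decomposition only at a high level. The sketch below is a minimal, illustrative example of one way a shared sub-kernel can eliminate redundant work in binary-weight convolution; it is not the thesis's exact architecture. It uses the standard identity for weights in {-1, +1}: writing K = 2M - 1 with M in {0, 1}, conv(x, K) = 2·conv(x, M) - window_sum(x), where the window-sum term is identical for every kernel and can be computed once and reused (all function and variable names here are hypothetical).

```python
import numpy as np

def conv_valid(x, k):
    """Plain 1-D valid correlation, for illustration only."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

# Two binary-weight kernels with entries in {-1, +1}
k1 = np.array([ 1, -1,  1])
k2 = np.array([-1, -1,  1])

x = np.array([2.0, 0.5, -1.0, 3.0, 1.5])

# The window sums play the role of a sub-kernel common to ALL kernels:
# computed once, then reused for every kernel in the layer.
window_sum = conv_valid(x, np.ones(3))

for k in (k1, k2):
    m = (k + 1) // 2                       # {0, 1} mask of k (fewer nonzeros)
    decomposed = 2 * conv_valid(x, m) - window_sum
    # The decomposed form matches the direct binary convolution exactly.
    assert np.allclose(decomposed, conv_valid(x, k))
```

Under this formulation, the per-kernel work shrinks to a convolution with a sparser {0, 1} mask plus one shared term, which is the flavor of redundancy elimination the abstract's 52.3% figure refers to.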
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: Neural network
dc.subject: CNN
dc.subject: Binary-weight CNN
dc.subject: Kernel decomposition
dc.subject: Bit quantization
dc.subject: 뉴럴 네트워크 (Neural network)
dc.subject: 바이너리 웨이트 CNN (Binary-weight CNN)
dc.subject: 커널 분리 (Kernel decomposition)
dc.subject: 비트 감소 (Bit reduction)
dc.title: (A) kernel decomposition architecture for binary-weight convolutional neural networks
dc.title.alternative: 바이너리 웨이트 CNN을 위한 커널 분리 구조 (A kernel decomposition architecture for binary-weight CNNs)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 : 전기및전자공학부 (KAIST : School of Electrical Engineering)
dc.contributor.alternativeauthor: 김현욱 (Kim, Hyeonuk)
Appears in Collection: EE-Theses_Master (석사논문, Master's theses)
Files in This Item: There are no files associated with this item.
