DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Lee-Sup | - |
dc.contributor.advisor | 김이섭 | - |
dc.contributor.author | Kim, Hyeonuk | - |
dc.date.accessioned | 2018-06-20T06:21:32Z | - |
dc.date.available | 2018-06-20T06:21:32Z | - |
dc.date.issued | 2017 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=675372&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/243266 | - |
dc.description | Master's thesis - KAIST (Korea Advanced Institute of Science and Technology): School of Electrical Engineering, 2017.2, [v, 54 p.] | - |
dc.description.abstract | The excellence of Convolutional Neural Networks (CNNs) has been proven by state-of-the-art performance in various vision applications, including object recognition and classification. Accordingly, many attempts are being made to embed CNNs into mobile devices. The binary-weight CNN is one of the most efficient solutions for mobile CNNs due to its drastically reduced parameter size. However, a large number of convolutions is still required to process each image. These massive computations increase the devices' energy consumption, shortening battery lifetime. To address this problem, we propose a novel kernel decomposition architecture, based on the observation that many operations in a binary-weight CNN are redundant. In this architecture, all kernels are decomposed into two sub-kernels so that they share a common part. As a result, the number of operations per image is reduced to 52.3% by skipping the redundant operations. Furthermore, a low-cost bit quantization technique that exploits the relative scales of CNN feature data is implemented to increase energy efficiency. Experimental results show a 22% reduction in computing energy and a 72% reduction in memory-access energy with negligible accuracy loss. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Neural network | - |
dc.subject | CNN | - |
dc.subject | Binary-weight CNN | - |
dc.subject | Kernel decomposition | - |
dc.subject | Bit quantization | - |
dc.subject | 뉴럴 네트워크 | - |
dc.subject | 바이너리 웨이트 CNN | - |
dc.subject | 커널 분리 | - |
dc.subject | 비트 감소 | - |
dc.title | (A) kernel decomposition architecture for binary-weight convolutional neural networks | - |
dc.title.alternative | 바이너리 웨이트 CNN을 위한 커널 분리 구조 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | KAIST: School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 김현욱 | - |
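The abstract describes the core idea: each binary-weight kernel is split into two sub-kernels such that all kernels share a common part, so the convolution with the shared part is computed once and only the per-kernel remainder is evaluated, skipping redundant operations. Below is a minimal NumPy sketch of that idea under simplifying assumptions (a single input patch, one shared sub-kernel obtained by majority vote); the function names `decompose` and `conv_point_decomposed` are hypothetical illustrations, not the thesis implementation.

```python
import numpy as np

def decompose(kernels):
    """Split binary {-1,+1} kernels into a shared part C and per-kernel diffs D_i,
    so that kernels[i] == C + D_i, with D_i nonzero only where kernel i disagrees."""
    common = np.sign(kernels.sum(axis=0))  # majority vote per position
    common[common == 0] = 1.0              # break ties toward +1
    diffs = kernels - common               # entries are 0 or +/-2
    return common, diffs

def conv_point(x, kernels):
    """Naive dot products: one full multiply-accumulate per kernel."""
    return kernels @ x

def conv_point_decomposed(x, common, diffs):
    """Shared dot product x.C computed once; sparse corrections x.D_i per kernel."""
    shared = common @ x
    return shared + diffs @ x

rng = np.random.default_rng(0)
kernels = rng.choice([-1.0, 1.0], size=(8, 9))  # eight binary 3x3 kernels, flattened
x = rng.standard_normal(9)                      # one input patch, flattened

common, diffs = decompose(kernels)
assert np.allclose(conv_point(x, kernels), conv_point_decomposed(x, common, diffs))
```

In this toy form the work saved is the fraction of positions where each kernel agrees with the common sub-kernel: those products are computed once in `shared` instead of once per kernel, which mirrors the redundancy-skipping argument of the abstract.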