High energy efficient DNN inference and training processor with sparsity optimization and neuromorphic computing

DC Field | Value | Language
dc.contributor.advisor | 유회준 | -
dc.contributor.author | 김상엽 | -
dc.contributor.author | Kim, Sangyeob | -
dc.date.accessioned | 2024-07-26T19:30:52Z | -
dc.date.available | 2024-07-26T19:30:52Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1047241&flag=dissertation | en_US
dc.identifier.uri | http://hdl.handle.net/10203/320946 | -
dc.description | Thesis (Ph.D.) - 한국과학기술원 : 전기및전자공학부, 2023.8, [viii, 171 p.] | -
dc.description.abstract | This thesis focuses on exploiting sparsity in deep neural networks (DNNs) for energy-efficient processing on mobile devices, together with optimization through neuromorphic computing. While previous accelerators have exploited sparsity in DNNs to achieve energy efficiency, they have been limited to the input sparsity produced by activation functions, and their sparse-processing methods are effective only for DNN inference. This thesis proposes four DNN accelerators (TSUNAMI, SNPU, C-DNN, and Neuro-CIM) that successively validate new sparsity-utilization techniques. TSUNAMI generates weight sparsity through iterative pruning in addition to input and output sparsity, and achieves consistently high computational speedup during DNN training by skipping computations for the two of input/weight/output with the highest sparsity. SNPU increases input sparsity by spike-encoding the DNN inputs and integrates newly proposed spike encoders that reduce spike frequency, enabling low-power always-on applications. C-DNN allocates inputs with low spike frequency to neuromorphic computing and inputs with high spike frequency, which would otherwise increase power consumption, to conventional deep neural network computing, achieving high energy efficiency. Neuro-CIM implements the accumulation and firing logic of neuromorphic computing as in-memory operations, and achieves low power consumption by designing without the analog-to-digital converters that consume significant power in previous in-memory computing structures. | -
dc.language | eng | -
dc.publisher | 한국과학기술원 | -
dc.subject | 심층 신경망 가속기; 희소성; 뉴로모픽 컴퓨팅; 인메모리 연산 | -
dc.subject | Deep neural network accelerator; sparsity; neuromorphic computing; in-memory processing | -
dc.title | High energy efficient DNN inference and training processor with sparsity optimization and neuromorphic computing | -
dc.title.alternative | 희소성 최적화 및 뉴로모픽 컴퓨팅을 활용한 고에너지 효율 DNN 추론 및 학습 프로세서 | -
dc.type | Thesis (Ph.D.) | -
dc.identifier.CNRN | 325007 | -
dc.description.department | 한국과학기술원 : 전기및전자공학부 | -
dc.contributor.alternativeauthor | Yoo, Hoi-Jun | -
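The abstract describes skipping computations whose operands are zero (input sparsity from activation functions, weight sparsity from iterative pruning). The sketch below is a minimal software illustration of that zero-skipping idea; the names `sparse_mac`, `inputs`, and `weights` are hypothetical and the actual accelerators implement this skip logic in hardware.

```python
def sparse_mac(inputs, weights):
    """Multiply-accumulate that skips any product with a zero operand.

    Illustrative sketch only: real sparsity-aware accelerators detect
    zeros with dedicated logic and gate the multipliers in hardware.
    """
    acc = 0.0
    skipped = 0
    for x, w in zip(inputs, weights):
        if x == 0.0 or w == 0.0:   # zero input or pruned weight: skip
            skipped += 1
            continue
        acc += x * w
    return acc, skipped

# ReLU-sparse inputs and pruned weights: only one product survives.
x = [0.0, 1.5, 0.0, 2.0]
w = [0.3, 0.0, 0.7, 1.0]
result, skipped = sparse_mac(x, w)   # result = 2.0, 3 of 4 multiplies skipped
```

The fraction of skipped multiplies grows with the combined input/weight sparsity, which is why skipping the two sparsest operand streams yields a consistent speedup during training.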
Appears in Collection
EE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
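The accumulation and firing logic that the abstract says Neuro-CIM realizes as in-memory operations can be sketched in software as a simple integrate-and-fire neuron. This is a hypothetical illustration of the concept, not the thesis design; the function name, threshold value, and subtractive reset are assumptions.

```python
def integrate_and_fire(weighted_spikes, threshold=1.0):
    """Accumulate weighted input spikes into a membrane potential and
    emit an output spike (with subtractive reset) on each threshold
    crossing. Sketch of the accumulate-and-fire concept only.
    """
    v = 0.0
    out = []
    for s in weighted_spikes:
        v += s                    # accumulation step
        if v >= threshold:        # firing step
            out.append(1)
            v -= threshold        # reset by subtraction
        else:
            out.append(0)
    return out

# Sparse spike trains mean most steps only accumulate and never fire.
spikes = integrate_and_fire([0.4, 0.4, 0.4, 0.0, 0.9])
```

Because the output is binary and fires only on threshold crossings, downstream layers see few nonzero events, which is the sparsity that spike encoding exploits for low-power always-on operation.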
