(A) NAND flash-based deep neural network accelerator exploiting bit-level sparsity

As data communication accounts for most of the energy consumption and latency when running DNN applications, the processing-in-memory (PIM) approach, which merges the roles of memory and processor, has emerged. However, previous work on the conventional memories DRAM and SRAM is limited by low energy and area efficiency. Meanwhile, NAND flash has not been considered a suitable platform for PIM because of its slow and less energy-efficient memory operations. Nevertheless, its high density and non-volatility give it sufficient potential to solve the data communication problem effectively. This thesis proposes a NAND flash-based DNN accelerator, called S-FLASH, that exploits these properties of NAND flash, targeting both high energy efficiency and high area efficiency among the various platforms for executing DNN applications. To achieve energy-efficient computation, first, current-sum-based computation is implemented by utilizing the string structure of NAND flash. Second, the bit width of partial multiplications is optimized with respect to the analog-to-digital converter (ADC) resources, which limit the overall energy efficiency and throughput of the system. Lastly, the massive number of zero partial-multiplication results arising from bit-level sparsity is exploited to enhance both energy efficiency and throughput. The evaluation results show that exploiting bit-level sparsity achieves up to an 8.23× performance gain. Moreover, S-FLASH delivers 19.01× and 4.45× higher energy efficiency, with 3.46× and 24.85× more on-chip capacity per area, than DRAM- and SRAM-based DNN accelerators, respectively.
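The abstract's bit-serial idea can be made concrete with a small sketch: each weight is split into bit planes, every plane contributes a partial product (the analog current sum on a NAND string, digitized by an ADC), and zero weight bits yield zero partial products that need no computation. The NumPy sketch below is illustrative only, assuming unsigned integer weights and a simple shift-and-add recombination; the function names and the zero-counting bookkeeping are not from the thesis.

    import numpy as np

    def bit_serial_dot(inputs, weights, bits=8):
        """Dot product rebuilt from bit-level partial products.

        Each bit plane of the weights stands in for one current-sum
        operation on a NAND string; a zero weight bit produces a zero
        partial product that a sparsity-aware design could skip.
        """
        total, zero_pp, all_pp = 0, 0, 0
        for b in range(bits):
            plane = (weights >> b) & 1            # one bit plane of the weights
            zero_pp += int(np.sum(plane == 0))    # zero partial products: skippable
            all_pp += plane.size
            total += int(np.dot(inputs, plane)) << b  # current sum, then shift-and-add
        return total, zero_pp / all_pp

    rng = np.random.default_rng(0)
    w = rng.integers(0, 256, size=64)   # 8-bit unsigned weights
    x = rng.integers(0, 16, size=64)    # 4-bit activations
    acc, zero_frac = bit_serial_dot(x, w)
    assert acc == int(np.dot(x, w))     # matches the exact integer dot product
    print(f"dot={acc}, zero partial products: {zero_frac:.0%}")

For uniformly distributed 8-bit weights, roughly half of all weight bits are zero; that abundance of zero partial products is the bit-level sparsity the abstract reports exploiting for up to an 8.23× performance gain.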
Advisors
Kim, Lee-Sup (김이섭)
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2020.2, [iii, 47 p.]

Keywords

DNN; sparsity; bit-level sparsity; processing in-memory; NAND flash

URI
http://hdl.handle.net/10203/295946
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=986354&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.