A 146.52 TOPS/W Deep-Neural-Network Learning Processor with Stochastic Coarse-Fine Pruning and Adaptive Input/Output/Weight Skipping

Cited 3 times in Web of Science · Cited 10 times in Scopus
An energy-efficient Deep-Neural-Network (DNN) learning processor is proposed for on-chip learning and iterative weight pruning (WP). This work has three key features: 1) stochastic coarse-fine pruning reduces the computation workload by 99.7% compared with a previous WP algorithm while maintaining high weight sparsity; 2) adaptive input/output/weight skipping (AIOWS) achieves 30.1× higher throughput than a previous DNN learning processor [1], for learning as well as inference; 3) a weight-memory-shared pruning unit removes on-chip weight memory accesses during WP. As a result, this work shows 146.52 TOPS/W energy efficiency, 5.79× higher than the state of the art [1].
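The abstract names the techniques without detail, so the following is a minimal Python sketch of how a coarse-fine pruning pass might look, assuming the coarse stage drops whole output channels by L1 norm and the fine stage prunes surviving weights stochastically with a magnitude-dependent probability. The ratios, the probability model, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def stochastic_coarse_fine_prune(w, coarse_ratio=0.5, fine_ratio=0.5, rng=None):
    """Hypothetical two-stage pruning sketch (not the paper's algorithm).

    Coarse stage: zero whole output channels with the smallest L1 norms.
    Fine stage: within surviving channels, prune individual weights
    stochastically, with drop probability rising as magnitude falls.
    """
    rng = rng or np.random.default_rng(0)
    mask = np.ones_like(w, dtype=bool)

    # Coarse: rank output channels (rows) by L1 norm, drop the weakest.
    norms = np.abs(w).sum(axis=1)
    n_drop = int(coarse_ratio * w.shape[0])
    mask[np.argsort(norms)[:n_drop], :] = False

    # Fine: smaller magnitude -> higher prune probability (assumed linear model).
    mag = np.abs(w)
    scale = mag.max() if mag.max() > 0 else 1.0
    p_drop = fine_ratio * (1.0 - mag / scale)   # probability in [0, fine_ratio]
    mask &= rng.random(w.shape) >= p_drop

    return w * mask, mask
```

The zero-skipping side of AIOWS can be illustrated in the same spirit: a multiply-accumulate is simply not issued when either operand is zero, which is where the pruned weights and sparse activations pay off. The hardware also gates redundant output computation; this software analogue only shows the operand-skipping idea.

```python
def sparse_mac(inputs, weights):
    """Zero-skipping dot product: skip a MAC when either operand is zero."""
    acc = 0.0
    for x, w in zip(inputs, weights):
        if x == 0.0 or w == 0.0:
            continue  # skipped MAC: in hardware, no switching energy is spent
        acc += x * w
    return acc
```

In the fabricated chip, the pruning decisions are made by logic sharing the weight memory (the weight-memory-shared pruning unit), so WP incurs no extra on-chip weight accesses; the NumPy sketch above only conveys the algorithmic idea.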
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2020-06-16
Language
English
Citation
IEEE Symposium on VLSI Circuits (VLSI Circuits), 2020
DOI
10.1109/VLSICircuits18222.2020.9162795
URI
http://hdl.handle.net/10203/278516
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.