Quantization-Error-Robust Deep Neural Network for Embedded Accelerators

Cited 1 time in webofscience Cited 0 time in scopus
Quantization with low precision has become an essential technique for deploying deep neural networks on energy- and memory-constrained devices. However, reducing precision is limited by the inevitable accuracy loss caused by quantization error. To overcome this obstacle, we propose methods for reforming and quantizing a network that achieve high accuracy even at low precision, without any runtime overhead in embedded accelerators. Our proposal consists of two analytical approaches: 1) network optimization that finds the most error-resilient equivalent network under the precision constraint and 2) quantization that exploits adaptive control of the rounding offset. The experimental results show accuracies of up to 98.31% and 99.96% of the floating-point results for 6-bit and 8-bit quantized networks, respectively. In addition, our methods enable lower-precision accelerator designs, reducing energy consumption by 8.5%.
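To illustrate the rounding-offset idea mentioned in the abstract: in uniform quantization, the value added before truncation determines the rounding point, and shifting it away from the usual 0.5 changes how quantization error is distributed. The paper's adaptive offset-control algorithm is not reproduced here; the sketch below only shows generic uniform quantization with an adjustable offset, and all names (`quantize`, `num_bits`, `offset`) are illustrative, not the authors' API.

```python
import numpy as np

def quantize(x, num_bits=8, offset=0.5):
    """Uniform symmetric fake-quantization with an adjustable rounding offset.

    offset=0.5 gives round-to-nearest; other values bias rounding up or down.
    This is a generic sketch of the concept, not the paper's adaptive method.
    Assumes x contains at least one nonzero value.
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 31 for 6-bit, 127 for 8-bit
    scale = np.max(np.abs(x)) / qmax        # map the largest magnitude to qmax
    q = np.floor(x / scale + offset)        # the offset sets the rounding point
    q = np.clip(q, -qmax - 1, qmax)         # keep codes inside the integer range
    return q * scale                        # dequantize back to float for comparison

x = np.array([0.12, -0.47, 0.9, -0.33])
print(quantize(x, num_bits=6))              # 6-bit reconstruction of x
```

With round-to-nearest (offset=0.5), each reconstructed value differs from the original by at most one quantization step (`scale`), which is the error budget the paper's network reforming and offset control aim to make the network resilient to.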
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2022-02
Language
English
Article Type
Article
Citation

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, v.69, no.2, pp.609 - 613

ISSN
1549-7747
DOI
10.1109/TCSII.2021.3103192
URI
http://hdl.handle.net/10203/292219
Appears in Collection
EE-Journal Papers (Journal Papers)