Link Bit-Error-Rate Requirement Analysis for Deep Neural Network Accelerators

Abstract
In convolutional neural network (CNN) accelerators, accesses to external data memory dominate power consumption. In addition, the power and area occupied by I/O interfaces that maintain a low bit-error rate (BER), e.g., 1e-15, grow as the data rate increases. Given the inherent error resilience of the inference process in machine learning applications, the requirement for error-free communication on the data path is questionable. In this paper, a custom CNN accelerator integrating a channel emulator is implemented on an FPGA to analyze the effect of the I/O transceiver BER on image classification accuracy. To implement the channel emulator, a digital-domain look-up-table (LUT)-based 12-tap FIR filter creates inter-symbol interference (ISI), and a PRBS31 generator serves as the noise source. The implementation was evaluated by running the ImageNet dataset on the FPGA-based custom accelerator (Virtex UltraScale+) implementing VGG-16. The results show that a BER of up to 1e-4 on the memory-access path has a negligible impact on inference accuracy.
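The abstract describes the channel emulator only at a high level. As a rough illustration of the idea, the Python sketch below models a LUT-based 12-tap FIR that produces ISI and a PRBS31 LFSR used as a pseudo-random noise source, followed by a simple threshold slicer. The tap weights, noise gain, slicer, and all function names (prbs31_bits, build_fir_lut, emulate_link) are illustrative assumptions, not the paper's implementation; in the paper the emulated error stream corrupts data fetched from external memory during VGG-16 inference, whereas this sketch merely measures the raw link BER seen by a PRBS payload.

```python
import numpy as np

# Rough, simplified model of the channel-emulation idea from the abstract.
# Tap weights, noise gain, and the slicer are illustrative placeholders.

N_TAPS = 12

def prbs31_bits(n, seed=0x0ACE1):
    """Generate n bits from a PRBS31 LFSR (polynomial x^31 + x^28 + 1)."""
    state = (seed & 0x7FFFFFFF) or 1
    out = np.empty(n, dtype=np.int8)
    for i in range(n):
        fb = ((state >> 30) ^ (state >> 27)) & 1    # feedback taps x^31, x^28
        state = ((state << 1) | fb) & 0x7FFFFFFF
        out[i] = fb
    return out

def build_fir_lut(taps):
    """Precompute the FIR output for every possible 12-bit symbol history."""
    lut = np.empty(1 << N_TAPS)
    for idx in range(1 << N_TAPS):
        hist = np.array([(idx >> k) & 1 for k in range(N_TAPS)], dtype=float)
        lut[idx] = np.dot(taps, 2.0 * hist - 1.0)   # map bits {0,1} -> {-1,+1}
    return lut

def emulate_link(tx_bits, taps, noise_bits, noise_gain):
    """Index the FIR LUT with a sliding bit history, add PRBS noise, slice."""
    lut = build_fir_lut(taps)
    rx = np.empty_like(tx_bits)
    idx = 0
    for i, b in enumerate(tx_bits):
        idx = ((idx << 1) | int(b)) & ((1 << N_TAPS) - 1)   # newest bit = LSB
        noise = noise_gain * (2.0 * noise_bits[i] - 1.0)
        rx[i] = 1 if lut[idx] + noise > 0.0 else 0
    return rx

# Placeholder channel: main cursor at taps[0] (current bit), small post-cursor ISI.
taps = np.array([0.7, 0.1, 0.05] + [0.01] * 9)
tx = prbs31_bits(100_000, seed=0x1234567)
rx = emulate_link(tx, taps, prbs31_bits(100_000), noise_gain=0.55)
print(f"emulated BER ~ {np.mean(tx != rx):.2e}")
```

Sweeping the (hypothetical) noise_gain parameter is one way to dial the emulated BER up or down, which mirrors how a channel emulator lets the accuracy impact be measured across BER operating points.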
Publisher
IEEE
Issue Date
2021-05
Language
English
Citation
IEEE International Symposium on Circuits and Systems (IEEE ISCAS)
ISSN
0271-4302
DOI
10.1109/ISCAS51556.2021.9401112
URI
http://hdl.handle.net/10203/288563
Appears in Collection
EE-Conference Papers (Conference Papers)