C-DNN: An Energy-Efficient Complementary Deep-Neural-Network Processor With Heterogeneous CNN/SNN Core Architecture

DC Field | Value | Language
dc.contributor.author | Kim, Sangyeob | ko
dc.contributor.author | Kim, Soyeon | ko
dc.contributor.author | Hong, Seongyon | ko
dc.contributor.author | Kim, Sangjin | ko
dc.contributor.author | Han, Donghyeon | ko
dc.contributor.author | Choi, Jiwon | ko
dc.contributor.author | Yoo, Hoi-Jun | ko
dc.date.accessioned | 2024-01-18T08:00:40Z | -
dc.date.available | 2024-01-18T08:00:40Z | -
dc.date.created | 2023-12-06 | -
dc.date.issued | 2024-01 | -
dc.identifier.citation | IEEE JOURNAL OF SOLID-STATE CIRCUITS, v.59, no.1, pp.157 - 172 | -
dc.identifier.issn | 0018-9200 | -
dc.identifier.uri | http://hdl.handle.net/10203/317899 | -
dc.description.abstract | In this article, we propose a complementary deep-neural-network (C-DNN) processor that combines a convolutional neural network (CNN) and a spiking neural network (SNN) to take advantage of both. The C-DNN processor supports both complementary inference and training with a heterogeneous CNN/SNN core architecture. In addition, it is the first DNN accelerator application-specific integrated circuit (ASIC) that supports CNN-SNN workload division by exploiting their magnitude-energy tradeoff. The C-DNN processor integrates a CNN-SNN workload allocator and an attention module to find the more energy-efficient network domain for each workload in the DNN, enabling the processor to operate at the energy-optimal point. Moreover, the SNN processing element (PE) array with a distributed L1 cache reduces redundant memory access for SNN processing by 42.2%-49.1%. For energy-efficient DNN training, the C-DNN processor integrates a global counter and a local delta-weight (LDW) unit to eliminate power-consuming counters for forward delta-weight generation. Furthermore, forward delta-weight-based sparsity generation (FDWSG) is proposed to reduce the number of operations for training by 31%-79%. The C-DNN processor achieves an energy efficiency of 85.8 and 79.9 TOPS/W for inference on CIFAR-10 and CIFAR-100, respectively (VGG-16). Moreover, it achieves ImageNet classification with a state-of-the-art energy efficiency of 24.5 TOPS/W (ResNet-50). For training, it achieves state-of-the-art energy efficiencies of 84.5 and 17.2 TOPS/W for CIFAR-10 and ImageNet, respectively, and reaches 77.1% accuracy for ImageNet training with ResNet-50. | -
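The CNN-SNN workload division described in the abstract can be illustrated with a toy per-layer allocator: CNN energy is roughly fixed per MAC regardless of input magnitude, while SNN energy scales with input activity (spike rate) and the number of timesteps, so sparse-activity layers favor the SNN domain. The function names, energy constants, spike-rate model, and timestep count below are illustrative assumptions for this sketch, not the paper's actual allocator or attention module.

```python
# Toy per-layer CNN/SNN workload allocator (illustrative only).
# Assumed energy model:
#   CNN energy ~ E_MAC * num_macs                      (activity-independent)
#   SNN energy ~ E_AC * num_macs * spike_rate * T      (activity-dependent)
# where E_AC is a hypothetical per-accumulate energy and T the SNN timesteps.

E_MAC = 1.0   # hypothetical energy per CNN MAC (arbitrary units)
E_AC = 0.3    # hypothetical energy per SNN accumulate (arbitrary units)
T = 8         # assumed number of SNN timesteps

def allocate(layers):
    """Assign each layer to the domain with the lower modeled energy.

    `layers` is a list of (name, num_macs, spike_rate) tuples, where
    spike_rate is the average fraction of active (spiking) inputs.
    """
    plan = []
    for name, macs, rate in layers:
        e_cnn = E_MAC * macs
        e_snn = E_AC * macs * rate * T
        plan.append((name, "SNN" if e_snn < e_cnn else "CNN"))
    return plan

if __name__ == "__main__":
    layers = [("conv1", 1e6, 0.9),   # dense activity: CNN is cheaper
              ("conv2", 2e6, 0.1)]   # sparse spikes: SNN is cheaper
    print(allocate(layers))          # [('conv1', 'CNN'), ('conv2', 'SNN')]
```

Under this model a layer goes to the SNN domain whenever `spike_rate < E_MAC / (E_AC * T)`, which captures the magnitude-energy tradeoff the abstract refers to: the same layer can be cheaper in either domain depending on how sparse its input activity is.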
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | C-DNN: An Energy-Efficient Complementary Deep-Neural-Network Processor With Heterogeneous CNN/SNN Core Architecture | -
dc.type | Article | -
dc.identifier.wosid | 001106594700001 | -
dc.identifier.scopusid | 2-s2.0-85177033095 | -
dc.type.rims | ART | -
dc.citation.volume | 59 | -
dc.citation.issue | 1 | -
dc.citation.beginningpage | 157 | -
dc.citation.endingpage | 172 | -
dc.citation.publicationname | IEEE JOURNAL OF SOLID-STATE CIRCUITS | -
dc.identifier.doi | 10.1109/JSSC.2023.3330483 | -
dc.contributor.localauthor | Yoo, Hoi-Jun | -
dc.contributor.nonIdAuthor | Kim, Soyeon | -
dc.contributor.nonIdAuthor | Hong, Seongyon | -
dc.contributor.nonIdAuthor | Choi, Jiwon | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Application-specific integrated circuit (ASIC) | -
dc.subject.keywordAuthor | complementary deep neural network (C-DNN) | -
dc.subject.keywordAuthor | convolutional neural network (CNN) | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordAuthor | deep neural network | -
dc.subject.keywordAuthor | spiking neural network (SNN) | -
Appears in Collection
EE-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.
