DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kang, Sanghoon | ko |
dc.contributor.author | Park, Gwangtae | ko |
dc.contributor.author | Kim, Sangjin | ko |
dc.contributor.author | Kim, Soyeon | ko |
dc.contributor.author | Han, Donghyeon | ko |
dc.contributor.author | Yoo, Hoi-Jun | ko |
dc.date.accessioned | 2022-01-04T06:40:34Z | - |
dc.date.available | 2022-01-04T06:40:34Z | - |
dc.date.created | 2022-01-04 | - |
dc.date.issued | 2021-12 | - |
dc.identifier.citation | IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, v.11, no.4, pp.634 - 648 | - |
dc.identifier.issn | 2156-3357 | - |
dc.identifier.uri | http://hdl.handle.net/10203/291474 | - |
dc.description.abstract | This paper presents a detailed overview of sparsity exploitation in deep neural network (DNN) accelerators. Despite the algorithmic advancements that drove DNNs to become the standard of artificial intelligence (AI), DNNs' computational and memory overhead limits the deployment of off-the-shelf models on edge devices. Numerous optimizations have been widely studied, from both the software and hardware perspectives, to efficiently run DNN models on performance- and energy-limited mobile devices. Sparsity exploitation is one of the mainstream optimization techniques, whose objective is to achieve higher efficiency and speed by avoiding redundant multiply-and-accumulate (MAC) operations caused by zero operands. This paper overviews previous contributions to sparsity exploitation from both the software and hardware points of view, with a newly proposed taxonomy to categorize and analyze the works. On the software side, different sparsification algorithms are explained, including pruning and output speculation. From the hardware perspective, advancements in architectures that efficiently handle sparse DNN computation are elaborated. The proposed taxonomy helps classify previous accelerators easily, by which sparsity they exploit and how. In addition, related works on sparse processing-in-memory (PIM) architectures and similarity exploitation are briefly introduced. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | An Overview of Sparsity Exploitation in CNNs for On-Device Intelligence With Software-Hardware Cross-Layer Optimizations | - |
dc.type | Article | - |
dc.identifier.wosid | 000730514000013 | - |
dc.identifier.scopusid | 2-s2.0-85117790606 | - |
dc.type.rims | ART | - |
dc.citation.volume | 11 | - |
dc.citation.issue | 4 | - |
dc.citation.beginningpage | 634 | - |
dc.citation.endingpage | 648 | - |
dc.citation.publicationname | IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS | - |
dc.identifier.doi | 10.1109/JETCAS.2021.3120417 | - |
dc.contributor.localauthor | Yoo, Hoi-Jun | - |
dc.contributor.nonIdAuthor | Kim, Soyeon | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Circuits and systems | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Tensors | - |
dc.subject.keywordAuthor | Hardware | - |
dc.subject.keywordAuthor | Computer architecture | - |
dc.subject.keywordAuthor | Optimization | - |
dc.subject.keywordAuthor | Convolution | - |
dc.subject.keywordAuthor | On-device intelligence | - |
dc.subject.keywordAuthor | deep neural network (DNN) processor | - |
dc.subject.keywordAuthor | neural processing unit | - |
dc.subject.keywordAuthor | sparsity exploitation | - |
dc.subject.keywordAuthor | zero-skipping | - |
dc.subject.keywordAuthor | software-hardware co-design | - |
dc.subject.keywordPlus | CONVOLUTIONAL NEURAL-NETWORK | - |
dc.subject.keywordPlus | ACCELERATOR | - |
dc.subject.keywordPlus | ENERGY | - |
dc.subject.keywordPlus | PROCESSOR | - |
dc.subject.keywordPlus | DESIGN | - |