DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shin, Dongjoo | ko |
dc.contributor.author | Yoo, Hoi-Jun | ko |
dc.date.accessioned | 2021-03-26T02:35:19Z | - |
dc.date.available | 2021-03-26T02:35:19Z | - |
dc.date.created | 2020-08-10 | - |
dc.date.issued | 2020-08 | - |
dc.identifier.citation | PROCEEDINGS OF THE IEEE, v.108, no.8, pp.1245 - 1260 | - |
dc.identifier.issn | 0018-9219 | - |
dc.identifier.uri | http://hdl.handle.net/10203/281948 | - |
dc.description.abstract | Today's CPUs are general-purpose processors based on the von Neumann architecture (including Harvard variants) to maximize generality and programmability. Application-specific integrated circuits (ASICs), in contrast, use domain-specific architectures to optimize cost-effective performance but offer very low generality. Deep learning (DL) is expected to bridge generality and ASICs, which had previously seemed incompatible. DL, realized with deep neural networks (DNNs), has changed the paradigm of machine learning (ML) and brought significant progress in vision, speech, language processing, and many other applications. DNNs have special features that can be efficiently implemented with dedicated ASIC architectures. While sharing these features, DNNs span a wide variety of network architectures, and even the same network architecture can serve different applications depending on its weight parameters. This paper presents the necessity, validity, and characteristics of ML-specific integrated circuits (MSICs), whose architecture differs from the von Neumann architecture. MSICs avoid the overhead of the complex instruction sets, instruction decoders, multilevel caches, and branch prediction found in recent von Neumann processors designed for high generality and programmability. We also discuss the necessity and validity of a heterogeneous architecture in MSICs, starting from the differences between visual-type and vector-type information processing, and present chip implementation results. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | The Heterogeneous Deep Neural Network Processor With a Non-von Neumann Architecture | - |
dc.type | Article | - |
dc.identifier.wosid | 000550650500004 | - |
dc.identifier.scopusid | 2-s2.0-85062410003 | - |
dc.type.rims | ART | - |
dc.citation.volume | 108 | - |
dc.citation.issue | 8 | - |
dc.citation.beginningpage | 1245 | - |
dc.citation.endingpage | 1260 | - |
dc.citation.publicationname | PROCEEDINGS OF THE IEEE | - |
dc.identifier.doi | 10.1109/JPROC.2019.2897076 | - |
dc.contributor.localauthor | Yoo, Hoi-Jun | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Computer architecture | - |
dc.subject.keywordAuthor | Hardware | - |
dc.subject.keywordAuthor | Biological neural networks | - |
dc.subject.keywordAuthor | Computers | - |
dc.subject.keywordAuthor | Recurrent neural networks | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | Application-specific integrated circuit (ASIC) | - |
dc.subject.keywordAuthor | convolutional neural networks (CNNs) | - |
dc.subject.keywordAuthor | deep learning (DL) | - |
dc.subject.keywordAuthor | heterogeneous architecture | - |
dc.subject.keywordAuthor | machine learning (ML) | - |
dc.subject.keywordAuthor | multilayer perceptrons (MLPs) | - |
dc.subject.keywordAuthor | neural network hardware | - |
dc.subject.keywordAuthor | neural networks | - |
dc.subject.keywordAuthor | non-von Neumann architecture | - |
dc.subject.keywordAuthor | recurrent neural networks (RNNs) | - |
dc.subject.keywordPlus | BACKPROPAGATION | - |
dc.subject.keywordPlus | DESIGN | - |
dc.subject.keywordPlus | GAME | - |