EGCN: An Efficient GCN Accelerator for Minimizing Off-Chip Memory Access

Cited 3 times in Web of Science · Cited 0 times in Scopus
  • Hit: 230
  • Download: 0
DC Field: Value (Language)
dc.contributor.author: Han, Yunki (ko)
dc.contributor.author: Park, Kangkyu (ko)
dc.contributor.author: Jung, Youngbeom (ko)
dc.contributor.author: Kim, Lee-Sup (ko)
dc.date.accessioned: 2022-11-23T08:00:32Z
dc.date.available: 2022-11-23T08:00:32Z
dc.date.created: 2022-11-23
dc.date.issued: 2022-12
dc.identifier.citation: IEEE TRANSACTIONS ON COMPUTERS, v.71, no.12, pp.3127 - 3139
dc.identifier.issn: 0018-9340
dc.identifier.uri: http://hdl.handle.net/10203/300587
dc.description.abstract: As Graph Convolutional Networks (GCNs) have emerged as a promising solution for graph representation learning, designing specialized GCN accelerators has become an important challenge. An analysis of GCN workloads shows that the main bottleneck of GCN processing is not computation but the memory latency of intensive off-chip data transfer. Minimizing off-chip data transfer is therefore the primary challenge in designing an efficient GCN accelerator. To address this challenge, we begin by modeling GCN computation as tiled matrix multiplication and optimize off-chip memory access from both the out-of-tile and in-tile perspectives. From the out-of-tile perspective, we find the optimal tile configuration for a given dataset and on-chip buffer capacity, then examine the dataflow across phases and layers; an inter-layer phase-fusion dataflow with the optimal tile configuration reduces the data transfer of intermediate outputs. From the in-tile perspective, tile sparsity means that tiles contain redundant data that does not participate in computation; this redundant data load is eliminated with hardware support. Finally, we introduce EGCN, an efficient GCN inference accelerator specialized for minimizing off-chip memory access. EGCN achieves a 41.9% off-chip DRAM access reduction, 1.49× speedup, and 1.95× energy efficiency improvement on average over state-of-the-art accelerators.
dc.language: English
dc.publisher: IEEE COMPUTER SOC
dc.title: EGCN: An Efficient GCN Accelerator for Minimizing Off-Chip Memory Access
dc.type: Article
dc.identifier.wosid: 000886309300006
dc.identifier.scopusid: 2-s2.0-85139829946
dc.type.rims: ART
dc.citation.volume: 71
dc.citation.issue: 12
dc.citation.beginningpage: 3127
dc.citation.endingpage: 3139
dc.citation.publicationname: IEEE TRANSACTIONS ON COMPUTERS
dc.identifier.doi: 10.1109/TC.2022.3211413
dc.contributor.localauthor: Kim, Lee-Sup
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Graph convolutional networks
dc.subject.keywordAuthor: hardware architecture
dc.subject.keywordAuthor: domain-specific accelerators
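The abstract treats GCN processing as tiled matrix multiplication in which, due to sparsity, many tiles are all-zero and need never be fetched from off-chip memory. The following is a minimal illustrative sketch of that idea, not the paper's implementation: the tile size, the load/skip counters, and the function name are assumptions for illustration only.

```python
def tiled_spmm(A, X, T):
    """Tiled computation of A @ X over plain Python lists.

    Counts how many T x T tiles of the sparse matrix A would be
    loaded from off-chip memory, skipping all-zero tiles (the
    "redundant data" that EGCN's abstract says need not be fetched).
    Illustrative sketch only -- not the paper's hardware dataflow.
    """
    n, k = len(A), len(X[0])
    out = [[0.0] * k for _ in range(n)]
    loads = skipped = 0
    for i0 in range(0, n, T):          # tile rows of A
        for j0 in range(0, len(X), T): # tile cols of A / rows of X
            tile = [row[j0:j0 + T] for row in A[i0:i0 + T]]
            if all(v == 0 for row in tile for v in row):
                skipped += 1           # empty tile: skip the load entirely
                continue
            loads += 1                 # non-empty tile: one off-chip load
            for i, row in enumerate(tile):
                for j, a in enumerate(row):
                    if a:
                        for c in range(k):
                            out[i0 + i][c] += a * X[j0 + j][c]
    return out, loads, skipped

# Toy example: only the top-left 2x2 tile of A is non-zero.
A = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
X = [[1, 2], [3, 4], [5, 6], [7, 8]]
H, loads, skipped = tiled_spmm(A, X, T=2)
# loads == 1, skipped == 3: three of the four A-tiles are never fetched.
```

The same counting argument motivates the paper's out-of-tile optimization as well: the choice of T (the tile configuration) determines how many tiles exist and how often intermediate outputs must round-trip through DRAM.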
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
