Accelerating GNN Training with Locality-Aware Partial Execution

DC Field | Value | Language
dc.contributor.author | Kim, Taehyun | ko
dc.contributor.author | Park, Kyoung-Soo | ko
dc.contributor.author | Hwang, Chanho | ko
dc.contributor.author | Lin, Zhiqi | ko
dc.contributor.author | Chang, Peng | ko
dc.contributor.author | Miao, Youshan | ko
dc.contributor.author | Ma, Lingxiao | ko
dc.contributor.author | Xiong, Yongqiang | ko
dc.date.accessioned | 2021-12-15T06:48:46Z | -
dc.date.available | 2021-12-15T06:48:46Z | -
dc.date.created | 2021-11-25 | -
dc.date.issued | 2021-08-24 | -
dc.identifier.citation | 12th ACM SIGOPS Asia-Pacific Workshop on Systems, APSys 2021 | -
dc.identifier.uri | http://hdl.handle.net/10203/290684 | -
dc.description.abstract | Graph Neural Networks (GNNs) are increasingly popular for various prediction and recommendation tasks. Unfortunately, the graph datasets for practical GNN applications are often too large to fit into the memory of a single GPU, leading to frequent data loading from host memory to GPU. This data transfer overhead is highly detrimental to performance, severely limiting training throughput. In this paper, we propose locality-aware, partial code execution that significantly cuts down the data copy overhead for GNN training. The key idea is to exploit the "near-data" processors for the first few operations in each iteration, which reduces the data size for DMA operations. In addition, we employ task scheduling tailored to GNN training and apply load balancing between CPU and GPU. We find that our approach substantially improves performance, achieving up to 6.6x speedup in training throughput over the state-of-the-art system design. | -
dc.language | English | -
dc.publisher | ACM | -
dc.title | Accelerating GNN Training with Locality-Aware Partial Execution | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85118159020 | -
dc.type.rims | CONF | -
dc.citation.publicationname | 12th ACM SIGOPS Asia-Pacific Workshop on Systems, APSys 2021 | -
dc.identifier.conferencecountry | HK | -
dc.identifier.conferencelocation | Virtual | -
dc.identifier.doi | 10.1145/3476886.3477515 | -
dc.contributor.localauthor | Park, Kyoung-Soo | -
dc.contributor.nonIdAuthor | Lin, Zhiqi | -
dc.contributor.nonIdAuthor | Chang, Peng | -
dc.contributor.nonIdAuthor | Miao, Youshan | -
dc.contributor.nonIdAuthor | Ma, Lingxiao | -
dc.contributor.nonIdAuthor | Xiong, Yongqiang | -
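The abstract's key idea, running the first few GNN operations on the "near-data" processor (the CPU, next to host memory) so that only reduced intermediate results cross the PCIe bus, can be illustrated with a minimal NumPy sketch. This is a hypothetical back-of-the-envelope model, not the paper's implementation: the array names, sizes, and the choice of mean-aggregation as the first operation are all assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: compare the DMA copy volume of a naive pipeline
# (ship all gathered neighbor features to the GPU) against locality-aware
# partial execution (aggregate on the CPU first, then ship the result).
rng = np.random.default_rng(0)
num_nodes, feat_dim, batch_size, fanout = 1000, 128, 64, 10
features = rng.standard_normal((num_nodes, feat_dim))        # resides in host memory

neighbors = rng.integers(0, num_nodes, size=(batch_size, fanout))

# Naive: gather all sampled neighbor features and DMA them to the GPU.
gathered = features[neighbors]                                # (64, 10, 128)
naive_copy_bytes = gathered.nbytes

# Partial execution: run the first aggregation op on the CPU, near the data,
# so only one vector per target node needs to be copied.
aggregated = gathered.mean(axis=1)                            # (64, 128)
partial_copy_bytes = aggregated.nbytes

# The copy volume shrinks by the sampling fanout.
print(naive_copy_bytes // partial_copy_bytes)
```

Under these assumed sizes the transfer shrinks by the fanout (10x here); the paper's reported 6.6x end-to-end speedup additionally depends on its task scheduling and CPU/GPU load balancing, which this sketch does not model.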
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
