DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Yunjae | ko |
dc.contributor.author | Kwon, Youngeun | ko |
dc.contributor.author | Rhu, Minsoo | ko |
dc.date.accessioned | 2021-09-26T01:31:08Z | - |
dc.date.available | 2021-09-26T01:31:08Z | - |
dc.date.created | 2021-09-24 | - |
dc.date.issued | 2021-07 | - |
dc.identifier.citation | IEEE COMPUTER ARCHITECTURE LETTERS, v.20, no.2, pp.118 - 121 | - |
dc.identifier.issn | 1556-6056 | - |
dc.identifier.uri | http://hdl.handle.net/10203/287866 | - |
dc.description.abstract | Graph neural networks (GNNs) can extract features by learning both the representation of each object (i.e., graph nodes) and the relationships across different objects (i.e., the edges that connect nodes), achieving state-of-the-art performance on a wide range of graph-based tasks. Despite their strengths, utilizing these algorithms in a production environment faces several key challenges, as the number of graph nodes and edges can reach several billions to hundreds of billions, requiring substantial storage space for training. Unfortunately, existing ML frameworks based on the in-memory processing model significantly hamper the productivity of algorithm developers, as they mandate that the overall working set fit within DRAM capacity constraints. In this work, we first study state-of-the-art, large-scale GNN training algorithms. We then conduct a detailed characterization of capacity-optimized non-volatile memory solutions for storing memory-hungry GNN data, exploring the feasibility of SSDs for large-scale GNN training. | - |
dc.language | English | - |
dc.publisher | IEEE COMPUTER SOCIETY | - |
dc.title | Understanding the Implication of Non-Volatile Memory for Large-Scale Graph Neural Network Training | - |
dc.type | Article | - |
dc.identifier.wosid | 000693756200001 | - |
dc.identifier.scopusid | 2-s2.0-85114793693 | - |
dc.type.rims | ART | - |
dc.citation.volume | 20 | - |
dc.citation.issue | 2 | - |
dc.citation.beginningpage | 118 | - |
dc.citation.endingpage | 121 | - |
dc.citation.publicationname | IEEE COMPUTER ARCHITECTURE LETTERS | - |
dc.identifier.doi | 10.1109/LCA.2021.3098943 | - |
dc.contributor.localauthor | Rhu, Minsoo | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Productivity | - |
dc.subject.keywordAuthor | Runtime | - |
dc.subject.keywordAuthor | Nonvolatile memory | - |
dc.subject.keywordAuthor | Memory management | - |
dc.subject.keywordAuthor | Random access memory | - |
dc.subject.keywordAuthor | Tools | - |
dc.subject.keywordAuthor | Graph neural network | - |
dc.subject.keywordAuthor | data preparation | - |
dc.subject.keywordAuthor | training | - |
dc.subject.keywordAuthor | non-volatile memory | - |
dc.subject.keywordAuthor | characterization | - |
dc.subject.keywordAuthor | SSD | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.