A Case for Memory-Centric HPC System Architecture for Training Deep Neural Networks

Cited 13 times in Web of Science; cited 0 times in Scopus
Abstract
As deep learning (DL) models and the datasets used to train them scale, system architects are faced with new challenges, one of which is the memory capacity bottleneck, where the limited physical memory inside the accelerator device constrains the algorithms that can be studied. We propose a memory-centric deep learning system that can transparently expand the memory capacity accessible to the accelerators while also providing fast inter-device communication for parallel training. Our proposal aggregates a pool of memory modules locally within the device-side interconnect, which are decoupled from the host interface and function as a vehicle for transparent memory capacity expansion. Compared to conventional systems, our proposal achieves an average 2.1× speedup on eight DL applications and increases the system-wide memory capacity to tens of TBs.
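The paper proposes a hardware and system design, so there is no code interface to reproduce here. Purely as a rough software analogy (not the authors' mechanism), the sketch below uses CUDA Unified Memory, which on Pascal-or-later GPUs under Linux lets a kernel work on an allocation larger than the device's physical memory by migrating pages on demand, illustrating the idea of transparently expanding the memory capacity visible to an accelerator. The sizes and the oversubscription margin are illustrative assumptions.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Scale every element; pages of the managed allocation are migrated
    // to the GPU on demand as the kernel touches them.
    __global__ void scale(float *data, size_t n, float factor) {
        size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);

        // Deliberately oversubscribe the GPU: device memory plus 1 GiB
        // (the host must have at least this much free DRAM to back it).
        size_t bytes = prop.totalGlobalMem + (1ull << 30);
        size_t n = bytes / sizeof(float);

        float *data = nullptr;
        if (cudaMallocManaged(&data, bytes) != cudaSuccess) {
            fprintf(stderr, "managed allocation failed\n");
            return 1;
        }
        for (size_t i = 0; i < n; ++i) data[i] = 1.0f;  // first touch on the host

        scale<<<(unsigned)((n + 255) / 256), 256>>>(data, n, 2.0f);
        cudaDeviceSynchronize();

        printf("touched %zu GiB on a GPU with %zu GiB of physical memory\n",
               bytes >> 30, prop.totalGlobalMem >> 30);
        cudaFree(data);
        return 0;
    }

In this unified-memory analogy every migrated page crosses the host interface, whereas the paper's memory-centric design pools memory modules on the device-side interconnect, decoupled from the host interface, which is what enables both the capacity expansion and the fast inter-device communication described in the abstract.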
Publisher
IEEE COMPUTER SOC
Issue Date
2018-07
Language
English
Article Type
Article
Keywords
DESIGN
Citation
IEEE COMPUTER ARCHITECTURE LETTERS, v.17, no.2, pp. 134-138
ISSN
1556-6056
DOI
10.1109/LCA.2018.2823302
URI
http://hdl.handle.net/10203/245423
Appears in Collection
EE-Journal Papers(저널논문)