Gradient pruning to accelerate ring-based all-reduce in distributed deep learning

Data parallelism is widely used to distribute the training of large Deep Neural Networks (DNNs) across multiple workers (GPUs/TPUs). In the data-parallel setting, a large proportion of training time is spent on collective communication to synchronize gradients between the model replicas on each device. It has been observed that not all gradients are necessary for model convergence, and the gradient tensor can be heavily sparsified to reduce communication volume. Prior work has proposed sparsifying gradients based on their magnitude, an approach commonly called top-k. However, top-k incurs the computational overhead of bitonic sorting and scales poorly with Ring All-Reduce because indices must be sent alongside the values. In this work, we show that random gradient pruning can achieve convergence with minimal accuracy loss. Moreover, we propose Skip-Reduce, a novel approach to gradient pruning that does not rely on sending indices and instead modifies the underlying communication algorithm. Our approach has no computational overhead and scales to any number of devices.
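
The following is a minimal single-process sketch, not the thesis implementation, contrasting top-k sparsification, which must ship values and indices, with an index-free scheme that simply drops randomly chosen gradient chunks from a simulated ring all-reduce. The function names (`topk_sparsify`, `skip_reduce_sim`), the shared-seed mask, and the chunk layout are illustrative assumptions, not the author's API.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude entries; returns values AND indices,
    so both arrays would have to be communicated."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return grad[idx], idx

def skip_reduce_sim(grads, num_chunks: int, drop_frac: float, seed: int = 0):
    """Simulate an index-free pruned reduction: every worker derives the
    same skip mask from a shared seed, so no index metadata is exchanged.
    Skipped chunks are simply not reduced (left as zeros here)."""
    rng = np.random.default_rng(seed)
    keep = rng.random(num_chunks) >= drop_frac              # identical mask on every worker
    chunks = [np.array_split(g, num_chunks) for g in grads]  # per-worker chunking
    out = []
    for c in range(num_chunks):
        if keep[c]:
            out.append(sum(ch[c] for ch in chunks) / len(grads))  # averaged (reduced) chunk
        else:
            out.append(np.zeros_like(chunks[0][c]))               # pruned chunk, never sent
    return np.concatenate(out)

if __name__ == "__main__":
    workers = [np.random.randn(1 << 10) for _ in range(4)]   # toy gradients from 4 workers
    pruned = skip_reduce_sim(workers, num_chunks=16, drop_frac=0.25)
    print("fraction of gradient retained:",
          np.count_nonzero(pruned) / pruned.size)
```

In this sketch the communication saving comes purely from the mask: a fraction `drop_frac` of the chunks never participates in the reduction, and because the mask is derived deterministically from a seed, no worker needs to tell the others which entries were kept.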
Advisors
김동준
Description
Korea Advanced Institute of Science and Technology: School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2023.8, [iv, 29 p.]

Keywords

Distributed deep learning; Deep learning; Collective communication; GPU; Multi-GPU systems

URI
http://hdl.handle.net/10203/320700
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045931&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
