DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 김동준 | - |
dc.contributor.author | Kainolda, Yassawe | - |
dc.contributor.author | 카이놀다 야사위 | - |
dc.date.accessioned | 2024-07-25T19:31:19Z | - |
dc.date.available | 2024-07-25T19:31:19Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045931&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/320700 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2023.8, [iv, 29 p.] | - |
dc.description.abstract | Data parallelism is widely used to distribute the training of large Deep Neural Networks (DNNs) across multiple workers (GPUs/TPUs). In the data-parallel setting, a large proportion of training time is spent on collective communication to synchronize gradients between the model replicas on each device. It has been observed that not all gradients are necessary for model convergence, and that the gradient tensor can be heavily sparsified to reduce communication volume. Prior works sparsify gradients by magnitude, an approach commonly called top-k. However, top-k incurs the computational overhead of bitonic sorting and scales poorly on Ring All-Reduce because gradient indices must be transmitted alongside the values. In this work, we show that random gradient pruning can achieve convergence with minimal accuracy loss. Moreover, we propose Skip-Reduce, a novel gradient-pruning approach that does not rely on sending indices and instead modifies the underlying communication algorithm. Our approach has no computational overhead and scales to any number of devices. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | 분산 딥 러닝▼a딥 러닝▼a집합 통신▼aGPU▼a멀티-GPU 시스템 | - |
dc.subject | Distributed deep learning▼aDeep learning▼aCollective communication▼aGPU▼aMulti-GPU systems | - |
dc.title | Gradient pruning to accelerate ring-based all-reduce in distributed deep learning | - |
dc.title.alternative | 분산 딥 러닝에서 링 기반 All-Reduce를 가속화하기 위한 그라디언트 가지치기 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전기및전자공학부 | - |
dc.contributor.alternativeauthor | Kim, Dongjun | - |
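The abstract's key observation, that random gradient pruning avoids top-k's index traffic, can be illustrated with a minimal sketch: if every worker draws its pruning mask from a shared seed, all workers agree on which gradient positions survive without exchanging any indices. This is a hypothetical simplification for illustration only, not the thesis's Skip-Reduce algorithm (which instead modifies the ring all-reduce communication schedule itself); the `random_prune` helper and the toy gradients below are invented for this sketch.

```python
import random

def random_prune(grad, keep_ratio, seed):
    """Zero out a random (1 - keep_ratio) fraction of gradient entries.

    Because every worker seeds its RNG identically, all workers compute
    the same mask locally, so no index list has to be communicated
    (unlike magnitude-based top-k sparsification).
    """
    rng = random.Random(seed)                      # shared, per-step seed
    mask = [rng.random() < keep_ratio for _ in grad]
    pruned = [g if m else 0.0 for g, m in zip(grad, mask)]
    return pruned, mask

# Toy demo: two workers prune different gradients with the same seed.
g1 = [float(i) for i in range(8)]
g2 = [1.0] * 8
p1, m1 = random_prune(g1, keep_ratio=0.5, seed=42)
p2, m2 = random_prune(g2, keep_ratio=0.5, seed=42)

assert m1 == m2          # identical masks, no indices exchanged
# Only the surviving positions carry information in the averaged gradient.
avg = [(a + b) / 2 for a, b in zip(p1, p2)]
```

In a real data-parallel step the averaging would be performed by the all-reduce collective rather than in Python; the point of the sketch is only that agreement on pruned positions can come from shared randomness instead of transmitted indices.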