Parareal Neural Networks Emulating a Parallel-in-Time Algorithm

Cited 1 time in Web of Science; cited 0 times in Scopus
As deep neural networks (DNNs) become deeper, their training time increases. In this context, multi-GPU parallel computing has become a key tool for accelerating the training of DNNs. In this article, we introduce a novel methodology for constructing, from a given DNN, a parallel neural network that can utilize multiple GPUs simultaneously. We observe that the layers of a DNN can be interpreted as the time steps of a time-dependent problem and can therefore be parallelized by emulating a parallel-in-time algorithm called parareal. The parareal algorithm consists of fine structures, which can be computed in parallel, and a coarse structure that provides suitable approximations to the fine structures. By emulating it, the layers of a DNN are torn apart to form a parallel structure, which is connected using a suitable coarse network. We report results showing acceleration with preserved accuracy for the proposed methodology applied to VGG-16 and ResNet-1001 on several datasets.
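To make the abstract's analogy concrete, the following is a minimal, hypothetical sketch of the classical parareal iteration on a scalar ODE u' = a·u, where the fine time steps play the role the abstract assigns to DNN layers. The names `F_fine`, `G_coarse`, and all parameters are illustrative; this is not the paper's implementation, which applies the same splitting to network layers rather than to an ODE.

```python
import math

def F_fine(u, a, dt, substeps=100):
    """Fine propagator: many small forward-Euler steps over one time slice
    (the 'fine structures'; one such solve per slice can run in parallel)."""
    h = dt / substeps
    for _ in range(substeps):
        u = u + h * a * u
    return u

def G_coarse(u, a, dt):
    """Coarse propagator: a single cheap forward-Euler step over the slice
    (the role played by the 'coarse network' in the paper's analogy)."""
    return u + dt * a * u

def parareal(u0, a, T, n_slices, n_iters):
    """Parareal: u_{n+1}^{k+1} = G(u_n^{k+1}) + F(u_n^k) - G(u_n^k)."""
    dt = T / n_slices
    # Initial guess: serial sweep with the coarse propagator alone.
    u = [u0]
    for n in range(n_slices):
        u.append(G_coarse(u[n], a, dt))
    for _ in range(n_iters):
        # The expensive fine solves are independent per slice -> parallelizable.
        fine = [F_fine(u[n], a, dt) for n in range(n_slices)]
        # Cheap serial correction sweep with the coarse propagator.
        new = [u0]
        for n in range(n_slices):
            new.append(G_coarse(new[n], a, dt) + fine[n] - G_coarse(u[n], a, dt))
        u = new
    return u

if __name__ == "__main__":
    approx = parareal(1.0, a=1.0, T=1.0, n_slices=4, n_iters=3)[-1]
    print(approx)  # approaches e ≈ 2.718 as the iterations converge to the fine solution
```

The key point the abstract relies on is visible here: the fine solves dominate the cost but decouple across slices, while the coarse sweep stays serial but cheap, which is what makes the layer-wise parallelization across GPUs worthwhile.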
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2024-05
Language
English
Article Type
Article; Early Access
Citation

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, v.35, no.5, pp.6353 - 6364

ISSN
2162-237X
DOI
10.1109/TNNLS.2022.3206797
URI
http://hdl.handle.net/10203/319890
Appears in Collection
RIMS Journal Papers; MA-Journal Papers (journal articles)
Files in This Item
There are no files associated with this item.