Accelerating Deep Reinforcement Learning via Phase-Level Parallelism for Robotics Applications

Deep Reinforcement Learning (DRL) plays a critical role in controlling future intelligent machines such as robots and drones. Constantly retrained on newly arriving real-world data, DRL provides optimal autonomous control solutions that adapt to ever-changing environments. However, DRL repeatedly alternates between inference and training, both of which are computationally expensive on resource-constrained mobile/embedded platforms. Even worse, DRL suffers from severe hardware underutilization due to its unique execution pattern. To overcome this inefficiency, we propose Train Early Start, a new execution pattern for building efficient DRL algorithms. Train Early Start parallelizes inference and training, hiding the serialized performance bottleneck and dramatically improving hardware utilization. Compared to the state-of-the-art mobile SoC, Train Early Start achieves a 1.42x speedup and 1.13x better energy efficiency.
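The core idea the abstract describes, starting training early so it overlaps with inference (data collection) instead of running after it, can be illustrated with a minimal thread-based sketch. This is an illustrative assumption, not the authors' implementation: the buffer, actor, and learner names are hypothetical stand-ins for the phases the paper parallelizes.

```python
import random
import threading

class ReplayBuffer:
    """Thread-safe FIFO buffer shared by the actor and learner threads."""
    def __init__(self, capacity=512):
        self._buf = []
        self._capacity = capacity
        self._lock = threading.Lock()

    def add(self, transition):
        with self._lock:
            self._buf.append(transition)
            if len(self._buf) > self._capacity:
                self._buf.pop(0)

    def sample(self, n):
        with self._lock:
            if len(self._buf) < n:
                return None
            return random.sample(self._buf, n)

    def __len__(self):
        with self._lock:
            return len(self._buf)

def actor(buffer, steps, done):
    # Inference phase: run the (placeholder) policy to produce transitions.
    for t in range(steps):
        buffer.add((t, random.random()))  # stand-in for (state, action, reward)
    done.set()

def learner(buffer, done, stats, batch_size=8):
    # Training phase: start consuming batches as soon as enough data exists,
    # rather than waiting for the entire inference phase to finish.
    while not done.is_set():
        batch = buffer.sample(batch_size)
        if batch is not None:
            stats["updates"] += 1  # stand-in for one gradient step

def run_overlapped(steps=200):
    buffer, done, stats = ReplayBuffer(), threading.Event(), {"updates": 0}
    threads = [
        threading.Thread(target=actor, args=(buffer, steps, done)),
        threading.Thread(target=learner, args=(buffer, done, stats)),
    ]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return len(buffer), stats["updates"]
```

In a serialized loop, the learner would sit idle during the whole inference phase; overlapping the two phases, as sketched here, is what hides that bottleneck and raises hardware utilization.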
Publisher
IEEE COMPUTER SOC
Issue Date
2024-01
Language
English
Article Type
Article
Citation

IEEE COMPUTER ARCHITECTURE LETTERS, v.23, no.1, pp.41 - 44

ISSN
1556-6056
DOI
10.1109/LCA.2023.3341152
URI
http://hdl.handle.net/10203/322474
Appears in Collection
EE-Journal Papers (Journal Papers)
