A 2.1TFLOPS/W Mobile Deep RL Accelerator with Transposable PE Array and Experience Compression

Recently, deep neural networks (DNNs) have been actively used not only for object recognition but also for action control, so that an autonomous system, such as a robot, can perform human-like behaviors and operations. Unlike recognition tasks, action control must run in real time, and remote learning on a server reached through a network is too slow. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. Fig. 7.4.1(a) shows an example of a robot agent that uses a pre-trained DNN without RL, and Fig. 7.4.1(b) depicts an autonomous robot agent that learns continuously in its environment using RL. The agent without RL falls down when the slope of the ground changes, whereas the RL-based agent iteratively collects walking experiences and learns to walk even as the slope changes.
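The abstract only sketches the high-level RL loop of collecting experiences and learning from them locally; the chip itself accelerates DNN-based RL with experience compression. As an illustration of the experience-collection-and-replay idea alone, the following is a minimal sketch using toy tabular Q-learning on a hypothetical discretized "slope" task. The environment, state space, hyperparameters, and all names here are illustrative assumptions, not the paper's design.

    # Minimal sketch: tabular Q-learning with an experience buffer.
    # Toy stand-in for the paper's DNN-based RL; everything here is illustrative.
    import random
    from collections import deque

    N_STATES, N_ACTIONS = 5, 2                       # hypothetical discretized slope states / actions
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-table (the chip uses a DNN instead)
    buffer = deque(maxlen=1000)                      # experience buffer (the paper compresses this storage)
    alpha, gamma, eps = 0.1, 0.9, 0.2                # learning rate, discount, exploration rate

    def step(state, action):
        """Toy environment: action 1 climbs the slope, action 0 slips back."""
        next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        return next_state, reward

    state = 0
    for t in range(5000):
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        action = random.randrange(N_ACTIONS) if random.random() < eps \
                 else max(range(N_ACTIONS), key=lambda a: q[state][a])
        next_state, reward = step(state, action)
        buffer.append((state, action, reward, next_state))   # collect the experience
        # Learn from a randomly replayed experience (experience replay).
        s, a, r, s2 = random.choice(buffer)
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        state = 0 if next_state == N_STATES - 1 else next_state  # reset at the top

    print("Learned Q-values:", q)

Replaying experiences from a buffer rather than learning only from the latest transition is what makes on-device continuous learning sample-efficient; the cost is the memory to hold those experiences, which motivates compressing them on a mobile accelerator.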
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2019-02
Language
English
Citation
2019 IEEE International Solid-State Circuits Conference (ISSCC), pp. 136-138
DOI
10.1109/ISSCC.2019.8662447
URI
http://hdl.handle.net/10203/268664
Appears in Collection
EE-Conference Papers (Conference Papers)