Learning-Driven Exploration for Reinforcement Learning

Effective and intelligent exploration remains an unresolved problem in reinforcement learning. Most contemporary reinforcement learning methods rely on simple heuristic exploration strategies that cannot distinguish well-explored from unexplored regions of the state space, which can lead to inefficient use of training time. We introduce entropy-based exploration (EBE), which enables an agent to explore the unexplored regions of the state space efficiently. EBE quantifies the agent's learning in a state using the state-dependent action values and adapts its exploration accordingly, i.e., it explores more in the less-explored regions of the state space. We perform experiments on a diverse set of environments and demonstrate that EBE enables efficient exploration that ultimately results in faster learning, without requiring any hyperparameter tuning. The code to reproduce the experiments is available at https://github.com/Usama1002/EBE-Exploration and the supplementary video at https://youtu.be/nJggIjjzKic.
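The abstract describes EBE only at a high level. The following is a minimal sketch of the core idea, assuming (the abstract does not specify this) that the per-state action values are converted into a softmax distribution whose normalized entropy serves as a state-dependent epsilon for epsilon-greedy action selection; the function names, temperature parameter, and normalization are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def ebe_epsilon(q_values, temperature=1.0):
    """State-dependent exploration probability from the entropy of a
    softmax over the Q-values (a sketch of the EBE idea; the exact
    normalization in the paper may differ)."""
    q = np.asarray(q_values, dtype=np.float64)
    # Softmax over action values (shifted by the max for numerical stability).
    logits = (q - q.max()) / temperature
    p = np.exp(logits)
    p /= p.sum()
    # Shannon entropy, normalized by its maximum log|A| so the result lies
    # in [0, 1]: near 1 in unexplored states (nearly flat Q-values),
    # near 0 where one action clearly dominates.
    entropy = -np.sum(p * np.log(p + 1e-12))
    return entropy / np.log(len(q))

def ebe_action(q_values, rng=np.random.default_rng()):
    """Epsilon-greedy step whose epsilon is the entropy-based value above."""
    eps = ebe_epsilon(q_values)
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))  # explore: random action
    return int(np.argmax(q_values))              # exploit: greedy action
```

Under this reading, exploration shrinks automatically as the Q-values in a state separate, which is what lets the method adapt per state without a tuned exploration schedule.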
Publisher
ICROS (Institute of Control, Robotics and Systems)
Issue Date
2021-10-13
Language
English
Citation

21st International Conference on Control, Automation and Systems (ICCAS), pp. 1146-1151

ISSN
2093-7121
DOI
10.23919/ICCAS52745.2021.9649810
URI
http://hdl.handle.net/10203/289701
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.