Practical evolutionary reinforcement learning with enhanced sample efficiency

We introduce scalable Evolutionary Reinforcement Learning (ERL) algorithms that combine Evolutionary Algorithms (EAs) with Reinforcement Learning (RL) for better sample efficiency. It is widely known that EAs are less sample efficient than RL algorithms but more stable; combining the two yields both sample-efficient performance and stability. In this thesis, we introduce new asynchronous actor and critic update rules for scaling ERL algorithms and apply them to real-world applications, where sample efficiency is more crucial than in simulated environments. The selected real-world environment is a camera parameter control task for which it is difficult to build a simulator. We show that the proposed ERL algorithm achieves higher performance with fewer samples than conventional RL algorithms in both simulated and real-world environments.
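The EA-plus-RL combination the abstract describes can be sketched as a toy loop: an evolutionary population is ranked and mutated, while a gradient-based "RL" learner is periodically injected into the population. Everything here (the scalar parameter, the analytic toy return, the injection period) is an illustrative assumption, not the thesis's actual algorithm.

```python
import random

random.seed(0)
TARGET = 3.0  # hidden optimum of the toy "environment"


def fitness(theta):
    # Toy stand-in for an episode return: higher is better.
    return -(theta - TARGET) ** 2


def rl_update(theta, lr=0.2):
    # Gradient ascent on the toy return, standing in for the RL actor update.
    return theta + lr * (-2.0 * (theta - TARGET))


# Evolutionary population of candidate policies (here, scalar parameters)
# plus one gradient-based learner trained alongside it.
population = [random.uniform(-5, 5) for _ in range(8)]
rl_actor = random.uniform(-5, 5)

for generation in range(50):
    # RL side: a cheap gradient step each generation.
    rl_actor = rl_update(rl_actor)
    # EA side: rank by fitness, keep elites, refill with mutated copies.
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:4]
    population = elites + [e + random.gauss(0, 0.3) for e in elites]
    # ERL synergy: periodically inject the RL actor into the population,
    # where selection keeps it only if it actually performs well.
    if generation % 10 == 0:
        population[-1] = rl_actor

best = max(population, key=fitness)
```

Because elites are copied unchanged, a near-optimal injected RL actor survives selection, so `best` ends up close to `TARGET`; the real algorithm replaces the scalar with policy-network weights and the analytic gradient with asynchronous actor and critic updates.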
Advisors
권인소 (In So Kweon)
Description
Korea Advanced Institute of Science and Technology: Robotics Program
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: Robotics Program, 2023.8, [vi, 55 p.]

Keywords

Evolutionary reinforcement learning; Reinforcement learning; Asynchronous algorithm; Evolution strategy; Camera control

URI
http://hdl.handle.net/10203/320821
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1046592&flag=dissertation
Appears in Collection
RE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
