Sample-efficient deep reinforcement learning via episodic backward update

Abstract
We propose Episodic Backward Update, a new algorithm that boosts the performance of a deep reinforcement learning agent through fast reward propagation. In contrast to the conventional experience replay with uniform random sampling, our agent samples a whole episode and successively propagates the value of a state to its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate through all transitions of a sampled episode. We evaluate our algorithm on a 2D MNIST maze environment and on 49 games of the Atari 2600 environment, and show that our method improves sample efficiency at a comparable computational cost.
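The sketch below is a minimal, framework-free illustration of the idea described in the abstract: for one sampled episode, Q-learning targets are computed from the last transition backward, and the bootstrap value of the action actually taken at the next state is replaced by the already-computed backward target so that a reward at the end of the episode reaches earlier states in a single pass. The function name, the mixing coefficient `beta`, and all parameter names are illustrative assumptions for this sketch, not definitions taken from the thesis.

```python
import numpy as np

def episodic_backward_targets(rewards, dones, q_next, actions_next,
                              gamma=0.99, beta=0.5):
    """Compute backward-propagated Q-learning targets for one sampled episode.

    rewards      : (T,)   reward r_t of each transition in the episode
    dones        : (T,)   1.0 if the transition ends the episode, else 0.0
    q_next       : (T, A) target-network Q-values Q(s_{t+1}, .)
    actions_next : (T,)   action actually taken at s_{t+1} (unused for the last step)
    gamma        : discount factor
    beta         : hypothetical mixing coefficient between the ordinary max target
                   and the backward-propagated value (assumption of this sketch)
    """
    T = len(rewards)
    targets = np.zeros(T)
    # Walk the episode from its final transition back to the first one.
    for t in reversed(range(T)):
        q_vals = np.array(q_next[t], dtype=float).copy()
        if t < T - 1 and not dones[t]:
            a = int(actions_next[t])
            # Replace the bootstrap value of the action taken at s_{t+1} with a mix
            # of the target already computed for the later transition, so fresh
            # reward information flows toward earlier states.
            q_vals[a] = beta * targets[t + 1] + (1.0 - beta) * q_vals[a]
        bootstrap = 0.0 if dones[t] else gamma * q_vals.max()
        targets[t] = rewards[t] + bootstrap
    return targets

# Toy episode: a single sparse reward at the final step.
T, A = 4, 2
rewards = np.array([0.0, 0.0, 0.0, 1.0])
dones = np.array([0.0, 0.0, 0.0, 1.0])
q_next = np.zeros((T, A))            # pretend the target network outputs zeros
actions_next = np.array([0, 1, 0, 0])
print(episodic_backward_targets(rewards, dones, q_next, actions_next))
# The terminal reward propagates to every earlier transition in one backward pass,
# whereas uniform random sampling would need many updates to achieve the same effect.
```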
Advisors
Chung, Sae-Young (정세영)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2019
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering, 2019.2, [iii, 29 p.]

Keywords

deep reinforcement learning; deep Q-learning; deep neural network; experience replay; sample efficiency

URI
http://hdl.handle.net/10203/266919
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=843413&flag=dissertation
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
