Meta distillation for reinforcement learning

As active research on deep reinforcement learning makes it possible to apply reinforcement learning to many high-dimensional environments, the sample efficiency of reinforcement learning has become increasingly important. A common approach for improving sample efficiency is a learning strategy that transfers background knowledge from previous tasks to new tasks, such as transfer learning and meta-learning. In this work, we propose Meta-Distillation for Reinforcement Learning (MDRL), a meta-learning framework that efficiently transfers expert policies from previous environments to a new policy in an unseen environment. A weighted sum of discrepancies between the current policy and the expert policies is added to the policy update loss, and the weights are determined by a weight network that is meta-trained to aid training by considering the task, the training sample, and the policy's training progress. MDRL succeeds in adapting to new tasks in a data-efficient manner when the given distribution of environments is scarce and diverse.
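
The abstract describes the policy-update objective only in words. Below is a minimal sketch, in PyTorch, of how such a distillation-weighted loss could look. All names (WeightNetwork, mdrl_policy_loss), the 64-unit hidden layer, and the choice of KL divergence as the discrepancy are illustrative assumptions, not the thesis's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WeightNetwork(nn.Module):
        """Meta-trained network producing one weight per expert policy from
        task/sample features and the current training progress (hypothetical API)."""
        def __init__(self, feature_dim, num_experts):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feature_dim + 1, 64), nn.ReLU(),
                nn.Linear(64, num_experts),
            )

        def forward(self, features, progress):
            # progress: float in [0, 1]; appended to the per-sample features
            p = torch.full((features.size(0), 1), progress)
            return torch.softmax(self.net(torch.cat([features, p], dim=-1)), dim=-1)

    def mdrl_policy_loss(rl_loss, policy_logits, expert_logits_list, weights):
        """RL policy-update loss plus a weighted sum of discrepancies
        (here, per-sample KL divergences) to each expert policy."""
        log_pi = F.log_softmax(policy_logits, dim=-1)
        distill = 0.0
        for i, expert_logits in enumerate(expert_logits_list):
            expert_probs = F.softmax(expert_logits, dim=-1).detach()
            kl = F.kl_div(log_pi, expert_probs, reduction="none").sum(dim=-1)
            distill = distill + (weights[:, i] * kl).mean()
        return rl_loss + distill

In this sketch the weight network's softmax output assigns a per-expert weight to each training sample, so experts from tasks that resemble the current one can be emphasized while irrelevant ones are down-weighted.
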
Advisors
Hwang, Sung Ju (황성주)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Thesis (Master's) - KAIST : School of Computing, 2020.8, [ii, 15 p.]

Keywords

Reinforcement Learning; Meta-Learning; Meta-RL; Transfer Learning; Distillation

URI
http://hdl.handle.net/10203/284995
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925156&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
