DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Sung, Youngchul | - |
dc.contributor.advisor | 성영철 | - |
dc.contributor.author | Chae, Jongseong | - |
dc.date.accessioned | 2022-04-27T19:31:20Z | - |
dc.date.available | 2022-04-27T19:31:20Z | - |
dc.date.issued | 2021 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948985&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/296002 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering, 2021.2, [iv, 28 p.] | - |
dc.description.abstract | Reinforcement Learning (RL) aims to produce a policy that performs a given task well. Recent RL studies have shown remarkable results in fields such as robot-control simulation and games; however, these approaches have rarely been deployed in the real world. The main obstacles are the need to design dense rewards well and the non-robustness of the learned policy: RL relies heavily on well-designed dense rewards, and it produces non-robust policies that deteriorate dramatically when faced with environment dynamics unseen during training. Imitation RL is one approach to addressing the dense-reward design issue, but it still suffers from the non-robust-policy issue. To solve both issues, we propose a novel algorithm, called RAME, which produces a robust policy through Generative Adversarial Imitation Learning when the agent can access multiple environments. The policy is trained in multiple training environments with expert trajectories from each environment. Discriminators and state discriminators construct an appropriate reward for training the policy while taking into account the relation between a given state-action pair and the training environments. RAME can be trained with any set of training environments, not only carefully selected ones. Our experimental results show that RAME produces more robust policies than existing Imitation RL approaches, which implies that a policy trained via RAME can adapt to multiple environments without relying on dense rewards. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Robust Reinforcement Learning; Imitation Reinforcement Learning; Reinforcement Learning; Robust RL; Imitation Learning | - |
dc.subject | 환경 변화에 강인한 강화학습; 모방 강화학습; 강화학습; 로버스트 강화학습; 모방학습 | - |
dc.title | Robust imitation learning | - |
dc.title.alternative | 환경변화에 강인한 모방 학습 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 채종성 | - |
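The abstract describes discriminators that turn the expert-vs-policy classification into a training reward, in the style of Generative Adversarial Imitation Learning. The sketch below is a hypothetical illustration of that general GAIL-style reward, not the thesis's RAME implementation: a toy logistic discriminator `D(s, a)` is trained to score expert state-action features above policy-generated ones, and the imitation reward is taken as `-log(1 - D(s, a))`. All names, dimensions, and data here are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LogisticDiscriminator:
    """Toy stand-in for a GAIL discriminator over (state, action) features."""

    def __init__(self, dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=dim)
        self.b = 0.0
        self.lr = lr

    def prob_expert(self, x):
        # D(s, a): probability that a feature vector came from the expert.
        return sigmoid(x @ self.w + self.b)

    def update(self, expert_x, policy_x):
        # One gradient-ascent step on log D(expert) + log(1 - D(policy)).
        for x, label in [(expert_x, 1.0), (policy_x, 0.0)]:
            p = self.prob_expert(x)
            grad = label - p  # gradient of the log-likelihood w.r.t. logits
            self.w += self.lr * (grad[:, None] * x).mean(axis=0)
            self.b += self.lr * grad.mean()

    def reward(self, x):
        # Non-saturating imitation reward used by GAIL-style methods.
        return -np.log(1.0 - self.prob_expert(x) + 1e-8)

# Synthetic stand-ins for expert and policy (state, action) feature batches.
rng = np.random.default_rng(1)
expert = rng.normal(loc=1.0, size=(256, 4))
policy = rng.normal(loc=-1.0, size=(256, 4))

disc = LogisticDiscriminator(dim=4)
for _ in range(200):
    disc.update(expert, policy)

# After training, expert-like pairs should receive the higher reward.
print(disc.reward(expert).mean() > disc.reward(policy).mean())
```

In the multi-environment setting the abstract describes, one such discriminator per training environment (plus state discriminators weighting their contributions) would supply the policy's reward signal instead of a hand-designed dense reward.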
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.