Robust imitation learning (환경변화에 강인한 모방 학습: Imitation learning robust to environmental changes)

DC Field: Value
dc.contributor.advisor: Sung, Youngchul
dc.contributor.advisor: 성영철 (Sung, Youngchul)
dc.contributor.author: Chae, Jongseong
dc.date.accessioned: 2022-04-27T19:31:20Z
dc.date.available: 2022-04-27T19:31:20Z
dc.date.issued: 2021
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948985&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/296002
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2021.2, [iv, 28 p.]
dc.description.abstract: Reinforcement learning (RL) aims to produce a policy that performs a given task well. Recent RL studies show remarkable results in fields such as robot-control simulation and games; however, these approaches are rarely deployed in the real world. The main obstacles are the need for a well-designed dense reward and the non-robustness of the learned policy: RL relies heavily on a well-designed dense reward, and the resulting policy deteriorates dramatically when it encounters environment dynamics unseen during training. Imitation RL is one approach to the dense-reward design problem, but it still suffers from the non-robust-policy problem. To address both issues, we propose a novel algorithm, called RAME, which produces a robust policy through Generative Adversarial Imitation Learning when the agent can access multiple environments. The policy is trained in multiple training environments, each paired with expert trajectories collected in that environment. Discriminators and state discriminators produce the reward used to train the policy, taking into account the relation between a given state-action pair and the training environments; the algorithm can train with any set of training environments, not only carefully selected ones. Our experimental results show that RAME produces a more robust policy than existing imitation-RL approaches, implying that a policy trained via RAME can adapt to multiple environments without relying on dense rewards. (A minimal illustrative sketch of this reward construction appears after the metadata record below.)
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology, KAIST)
dc.subject: Robust Reinforcement Learning; Imitation Reinforcement Learning; Reinforcement Learning; Robust RL; Imitation Learning
dc.subject: Korean equivalents of the above: 환경 변화에 강인한 강화학습 (reinforcement learning robust to environmental changes); 모방 강화학습 (imitation reinforcement learning); 강화학습 (reinforcement learning); 로버스트 강화학습 (robust reinforcement learning); 모방학습 (imitation learning)
dc.title: Robust imitation learning
dc.title.alternative: 환경변화에 강인한 모방 학습 (Imitation learning robust to environmental changes)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
dc.contributor.alternativeauthor: 채종성 (Chae, Jongseong)
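
The abstract above describes RAME's reward as coming from per-environment discriminators combined with state discriminators that relate a state-action pair to the training environments. The record includes no code, so the following is only a minimal PyTorch sketch of one plausible reading of that description: every name here (Discriminator, StateDiscriminator, imitation_reward) and the soft weighting scheme are hypothetical illustrations, not identifiers or details taken from the thesis.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # Scores a (state, action) pair: expert-like vs. policy-generated,
    # as in standard GAIL. One such discriminator per training environment.
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class StateDiscriminator(nn.Module):
    # Hypothetical: soft-assigns a state to each training environment,
    # one way to capture the state/environment relation the abstract mentions.
    def __init__(self, state_dim, n_envs, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_envs),
        )

    def forward(self, state):
        return torch.softmax(self.net(state), dim=-1)  # (batch, n_envs)

def imitation_reward(state, action, discriminators, state_disc):
    # Blend per-environment GAIL-style rewards, weighted by how strongly
    # the state looks like it came from each training environment.
    weights = state_disc(state)                          # (batch, n_envs)
    rewards = []
    for d in discriminators:
        logit = d(state, action)                         # (batch, 1)
        # Common GAIL reward form: -log(1 - D(s, a)).
        rewards.append(-torch.log(1.0 - torch.sigmoid(logit) + 1e-8))
    rewards = torch.cat(rewards, dim=-1)                 # (batch, n_envs)
    return (weights * rewards).sum(dim=-1, keepdim=True)

if __name__ == "__main__":
    state_dim, action_dim, n_envs = 4, 2, 3
    discs = [Discriminator(state_dim, action_dim) for _ in range(n_envs)]
    sdisc = StateDiscriminator(state_dim, n_envs)
    s, a = torch.randn(8, state_dim), torch.randn(8, action_dim)
    print(imitation_reward(s, a, discs, sdisc).shape)    # torch.Size([8, 1])

Run as-is, the __main__ block prints torch.Size([8, 1]): one blended imitation reward per batch element, usable as the reward signal for a policy-gradient learner in place of a hand-designed dense reward.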
Appears in Collection
EE-Theses_Master (석사논문: Master's theses)
Files in This Item
There are no files associated with this item.
