DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Sung, Youngchul | - |
dc.contributor.advisor | 성영철 | - |
dc.contributor.author | Cho, Myungsik | - |
dc.date.accessioned | 2019-09-04T02:42:21Z | - |
dc.date.available | 2019-09-04T02:42:21Z | - |
dc.date.issued | 2019 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=843428&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/266817 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering, 2019.2, [iv, 20 p.] | - |
dc.description.abstract | Most deep reinforcement learning algorithms are sample-inefficient in complex, rich environments, requiring a large number of samples to adapt to a new task. In the real world, however, adapting to a new task quickly with only a small number of samples is essential. One approach to this problem is meta-learning, which learns how to learn, and meta-learning has been actively studied. However, prior meta-learning methods consider only a single model for adapting to a new task, and a single adaptation model is not sufficient for more complex tasks. In this work, we propose a meta-learning method with multiple models for adapting to a new task in reinforcement learning (meta-RL). The proposed meta-RL algorithm is evaluated on a variety of locomotion tasks, and we show that it is more effective at learning a new task. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Reinforcement learning; meta learning | - |
dc.subject | 강화 학습; 메타 러닝 | - |
dc.title | (The) meta reinforcement learning with multiple models | - |
dc.title.alternative | 다중 모델을 이용한 메타 강화 학습 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전기및전자공학부 | - |
dc.contributor.alternativeauthor | 조명식 | - |
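The abstract describes meta-learning with multiple models: rather than adapting a single meta-learned initialization to a new task, several candidate models are adapted and the best one is kept. A minimal sketch of that idea, assuming a MAML-style inner gradient step and best-of-K selection on a toy task (the function names, the quadratic task family, and all hyperparameters are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def task_reward(theta, center):
    # Toy task family: reward peaks when the parameters match the task center.
    return -np.sum((theta - center) ** 2)

def reward_grad(theta, center):
    # Gradient of the toy reward with respect to theta.
    return -2.0 * (theta - center)

# K meta-learned initializations; random stand-ins for meta-trained models.
K = 3
inits = [rng.normal(loc=c, size=2) for c in (-2.0, 0.0, 2.0)]
alpha = 0.1  # inner-loop (adaptation) step size

def adapt_best(center):
    """Adapt every initialization with one gradient ascent step on the new
    task, then keep the candidate with the best post-adaptation reward."""
    candidates = [theta + alpha * reward_grad(theta, center) for theta in inits]
    return max(candidates, key=lambda th: task_reward(th, center))

new_task_center = np.array([1.8, 2.1])
best = adapt_best(new_task_center)
```

The point of the multi-model step is visible in `adapt_best`: with several initializations covering different regions of task space, at least one candidate starts close to the new task, so a single adaptation step already outperforms the best unadapted model.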