Skill-based Meta-Reinforcement Learning

DC Field / Value / Language
dc.contributor.author: Nam, Taewook (ko)
dc.contributor.author: Sun, Shao-Hua (ko)
dc.contributor.author: Pertsch, Karl (ko)
dc.contributor.author: Hwang, Sung Ju (ko)
dc.contributor.author: Lim, Joseph Jaewhan (ko)
dc.date.accessioned: 2022-12-09T03:00:18Z
dc.date.available: 2022-12-09T03:00:18Z
dc.date.created: 2022-12-04
dc.date.issued: 2022-04-25
dc.identifier.citation: 10th International Conference on Learning Representations, ICLR 2022
dc.identifier.uri: http://hdl.handle.net/10203/302251
dc.description.abstract: While deep reinforcement learning methods have shown impressive results in robot learning, their sample inefficiency makes the learning of complex, long-horizon behaviors with real robot systems infeasible. To mitigate this issue, meta-reinforcement learning methods aim to enable fast learning on novel tasks by learning how to learn. Yet, their application has been limited to short-horizon tasks with dense rewards. To enable learning long-horizon behaviors, recent works have explored leveraging prior experience in the form of offline datasets without reward or task annotations. While these approaches yield improved sample efficiency, millions of interactions with environments are still required to solve complex tasks. In this work, we devise a method that enables meta-learning on long-horizon, sparse-reward tasks, allowing us to solve unseen target tasks with orders of magnitude fewer environment interactions. Our core idea is to leverage prior experience extracted from offline datasets during meta-learning. Specifically, we propose to (1) extract reusable skills and a skill prior from offline datasets, (2) meta-train a high-level policy that learns to efficiently compose learned skills into long-horizon behaviors, and (3) rapidly adapt the meta-trained policy to solve an unseen target task. Experimental results on continuous control tasks in navigation and manipulation demonstrate that the proposed method can efficiently solve long-horizon novel target tasks by combining the strengths of meta-learning and the use of offline datasets, while prior approaches in RL, meta-RL, and multi-task RL require substantially more environment interactions to solve the tasks.
dc.language: English
dc.publisher: International Conference on Learning Representations
dc.title: Skill-based Meta-Reinforcement Learning
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85132998864
dc.type.rims: CONF
dc.citation.publicationname: 10th International Conference on Learning Representations, ICLR 2022
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Virtual
dc.contributor.localauthor: Hwang, Sung Ju
dc.contributor.localauthor: Lim, Joseph Jaewhan
dc.contributor.nonIdAuthor: Sun, Shao-Hua
dc.contributor.nonIdAuthor: Pertsch, Karl
Appears in Collection
AI-Conference Papers(학술대회논문)
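The abstract's three-stage recipe — extract skills and a skill prior from offline data, meta-train a high-level policy that composes those skills, then rapidly adapt that policy to an unseen task — can be illustrated with a toy sketch. Everything below (the class name, the categorical skill choice, the specific update rule, and all numbers) is an illustrative assumption, not the paper's actual implementation:

```python
import math

# Toy sketch (illustrative only) of the three-stage recipe in the abstract:
#   (1) assume K skills and a skill prior were already extracted offline,
#   (2) a high-level policy chooses among the frozen skills,
#   (3) adaptation nudges the policy toward high-return skills, with a KL
#       term softly keeping it close to the skill prior.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

class HighLevelPolicy:
    """Categorical policy over K pretrained (frozen) skills."""

    def __init__(self, num_skills, prior_logits):
        self.logits = [0.0] * num_skills    # trainable policy logits
        self.prior = softmax(prior_logits)  # frozen skill prior from offline data

    def probs(self):
        return softmax(self.logits)

    def adapt(self, skill_returns, lr=0.5, kl_weight=0.1):
        """One approximate policy-gradient step on the target task:
        raise the probability of skills with above-average return,
        pulled back toward the skill prior by an approximate KL penalty."""
        p = self.probs()
        baseline = sum(pi * r for pi, r in zip(p, skill_returns))
        for k in range(len(self.logits)):
            advantage = skill_returns[k] - baseline
            prior_pull = math.log(self.prior[k]) - math.log(p[k])
            self.logits[k] += lr * p[k] * (advantage + kl_weight * prior_pull)
```

In this sketch, repeated calls to `adapt` with returns that favor one skill shift probability mass onto that skill, while the `kl_weight` term dampens how far the policy drifts from the prior — a stand-in for how a skill prior can regularize exploration on a sparse-reward target task.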