STRUCTURE-AWARE TRANSFORMER POLICY FOR INHOMOGENEOUS MULTI-TASK REINFORCEMENT LEARNING

DC Field: Value (Language)
dc.contributor.author: Hong, Sunghoon (ko)
dc.contributor.author: Yoon, Deunsol (ko)
dc.contributor.author: Kim, Kee-Eung (ko)
dc.date.accessioned: 2023-09-15T05:00:21Z
dc.date.available: 2023-09-15T05:00:21Z
dc.date.created: 2023-09-15
dc.date.issued: 2022-04
dc.identifier.citation: 10th International Conference on Learning Representations, ICLR 2022
dc.identifier.uri: http://hdl.handle.net/10203/312663
dc.description.abstract: Modular Reinforcement Learning, where the agent is assumed to be morphologically structured as a graph, for example composed of limbs and joints, aims to learn a policy that is transferable to a structurally similar but different agent. Compared to traditional Multi-Task Reinforcement Learning, this promising approach lets us cope with inhomogeneous tasks whose state and action space dimensions differ from task to task. Graph Neural Networks are a natural model for representing such policies, but recent work has shown that their multi-hop message passing mechanism is not ideal for conveying important information to distant modules, and a transformer model without morphological information was therefore proposed. In this work, we argue that the morphological information is still very useful and propose a transformer policy model that effectively encodes it. Specifically, we encode the morphological information through a traversal-based positional embedding and a graph-based relational embedding. We empirically show that morphological information is crucial for modular reinforcement learning: our model substantially outperforms prior state-of-the-art methods in multi-task learning as well as in transfer learning settings with differing state and action space dimensions. (An illustrative sketch of these two embeddings follows the metadata listing below.)
dc.language: English
dc.publisher: International Conference on Learning Representations, ICLR
dc.title: STRUCTURE-AWARE TRANSFORMER POLICY FOR INHOMOGENEOUS MULTI-TASK REINFORCEMENT LEARNING
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85143666294
dc.type.rims: CONF
dc.citation.publicationname: 10th International Conference on Learning Representations, ICLR 2022
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Virtual
dc.contributor.localauthor: Kim, Kee-Eung
dc.contributor.nonIdAuthor: Hong, Sunghoon
dc.contributor.nonIdAuthor: Yoon, Deunsol
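
Since no files are attached to this record, the following is a minimal, illustrative PyTorch sketch of the two ingredients named in the abstract: a positional embedding indexed by each node's rank in a traversal of the morphology graph, and a relational embedding that biases attention according to pairwise graph structure. The concrete choices below, a pre-order traversal rank, shortest-path-distance buckets, and the names StructureAwareTransformerPolicy and RelationalSelfAttention, are assumptions made for illustration and are not taken from the paper itself.

```python
# Illustrative sketch only: a transformer policy whose tokens are the agent's
# limbs/joints, with (a) a positional embedding indexed by each node's rank in
# a tree traversal of the morphology and (b) a relational bias on attention
# logits derived from pairwise graph (shortest-path) distances. The traversal
# choice and the distance-bucket bias are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class RelationalSelfAttention(nn.Module):
    """Multi-head self-attention with an additive bias per graph-distance bucket."""

    def __init__(self, dim, num_heads, max_dist):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # One learnable scalar bias per head for each (clipped) graph distance.
        self.dist_bias = nn.Embedding(max_dist + 1, num_heads)

    def forward(self, x, dist):              # x: (B, N, dim), dist: (B, N, N) int64
        B, N, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        logits = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5   # (B, H, N, N)
        bias = self.dist_bias(dist).permute(0, 3, 1, 2)             # (B, H, N, N)
        attn = torch.softmax(logits + bias, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.out(out)


class StructureAwareTransformerPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, dim=128, num_heads=4,
                 max_nodes=32, max_dist=8, num_layers=3):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, dim)          # per-node observation encoder
        self.trav_pos = nn.Embedding(max_nodes, dim)     # traversal-rank positional embedding
        self.layers = nn.ModuleList(
            [RelationalSelfAttention(dim, num_heads, max_dist) for _ in range(num_layers)]
        )
        self.act_head = nn.Linear(dim, act_dim)          # per-node (per-joint) action head

    def forward(self, node_obs, trav_rank, dist):
        # node_obs:  (B, N, obs_dim)  local observations of each limb/joint
        # trav_rank: (B, N)           index of each node in a pre-order traversal
        # dist:      (B, N, N)        pairwise shortest-path distances, clipped to max_dist
        h = self.obs_proj(node_obs) + self.trav_pos(trav_rank)
        for layer in self.layers:
            h = h + layer(h, dist)                       # residual attention blocks
        return self.act_head(h)                          # one action per node
```

The traversal ranks and the pairwise distance matrix depend only on the agent's morphology, so they can be computed once per agent (for example with a DFS/BFS over the kinematic tree) and reused at every timestep; only the per-node observations change across steps and across structurally different agents.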
Appears in Collection
AI-Conference Papers (학술대회논문)
Files in This Item
There are no files associated with this item.
