REMAX: Relational Representation for Multi-Agent Exploration

Abstract
Training a multi-agent reinforcement learning (MARL) model with a sparse reward is generally difficult because numerous combinations of interactions among agents induce a certain outcome (i.e., success or failure). Earlier studies have tried to resolve this issue by employing an intrinsic reward to induce interactions that are helpful for learning an effective policy. However, this approach requires extensive prior knowledge for designing an intrinsic reward. To train the MARL model effectively without designing an intrinsic reward, we propose a learning-based exploration strategy that generates the initial states of a game. The proposed method adopts a variational graph autoencoder to represent a game state such that (1) the state can be compactly encoded into a latent representation by considering relationships among agents, and (2) the latent representation can be used as an effective input for a coupled surrogate model to predict an exploration score. The proposed method then finds new latent representations that maximize the exploration scores and decodes these representations to generate initial states from which the MARL model starts training in the game and thus experiences novel and rewardable states. We demonstrate that our method improves the training and performance of the MARL model more than existing exploration methods.
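The search loop the abstract describes (score candidate latents with a surrogate, pick the maximizer, decode it into an initial state) can be sketched in a few lines. Everything below is a hypothetical stand-in, not the paper's implementation: the linear `decoder` substitutes for a trained VGAE decoder, and `surrogate_score` uses a simple novelty proxy (distance to the nearest previously visited latent) in place of the paper's learned surrogate model.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8      # assumed latent size; the paper's VGAE defines the real one
STATE_DIM = 16      # toy "game state" feature size

# Stand-in for a trained VGAE decoder: a fixed linear map latent -> state.
decoder = rng.normal(size=(LATENT_DIM, STATE_DIM))

# Latents of states the MARL model has already visited (here: just the origin).
visited = np.zeros((1, LATENT_DIM))

def surrogate_score(z):
    """Toy surrogate for the exploration score: distance from z to the
    nearest previously visited latent (a novelty proxy). The paper instead
    trains a coupled surrogate model on the latent representations."""
    return np.sqrt(((z - visited) ** 2).sum(axis=-1)).min()

def search_latent(n_candidates=256):
    """Random search for the latent that maximizes the surrogate score.
    Gradient ascent through a differentiable surrogate would also fit here."""
    candidates = rng.normal(size=(n_candidates, LATENT_DIM))
    scores = [surrogate_score(z) for z in candidates]
    return candidates[int(np.argmax(scores))]

best_z = search_latent()
initial_state = best_z @ decoder  # decode the chosen latent into an initial state
```

With a real VGAE, `initial_state` would be a decoded game configuration handed to the environment as the episode's starting point, so the MARL model begins training from states the surrogate predicts to be novel.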
Publisher
International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Issue Date
2022-05-09
Language
English
Citation

Autonomous Agents and Multiagent Systems (AAMAS 2022), pp. 1137-1145

ISSN
1548-8403
DOI
10.5555/3535850
URI
http://hdl.handle.net/10203/298144
Appears in Collection
IE-Conference Papers
Files in This Item
There are no files associated with this item.
