(The) architecture of decentralized multi-agent reinforcement learning with communication

We consider the problem of cooperative multi-agent reinforcement learning in a partially observable environment, where coordination between agents is essential. In this thesis, we introduce compressed feature vectors for communication between agents and show how to design a decentralized network that uses them. We also introduce a group dropout layer to train an ensemble of sub-networks efficiently, and we evaluate the proposed network on pursuit, a standard task in multi-agent systems.
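The abstract names two ingredients: compressed feature vectors exchanged between agents and a group dropout layer used to train an ensemble of sub-networks. As a rough illustration of the second idea only (not the thesis's actual implementation; the class name, the grouping-by-contiguous-units scheme, and the PyTorch framing are assumptions), group dropout can be sketched as dropping whole groups of units at once so that each surviving group behaves like an independently trained sub-network:

```python
import torch
import torch.nn as nn


class GroupDropout(nn.Module):
    """Hypothetical sketch of a group dropout layer: instead of zeroing
    individual units, zero entire groups of units with probability p,
    so each group acts like a member of an ensemble of sub-networks."""

    def __init__(self, num_groups: int, p: float = 0.5):
        super().__init__()
        self.num_groups = num_groups
        self.p = p  # probability of dropping each whole group

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # At evaluation time (or with p == 0), pass the input through unchanged.
        if not self.training or self.p == 0.0:
            return x
        batch, features = x.shape
        assert features % self.num_groups == 0, "features must split evenly into groups"
        group_size = features // self.num_groups
        # One Bernoulli draw per group, shared by every unit in that group.
        keep = (torch.rand(batch, self.num_groups, device=x.device) > self.p).float()
        mask = keep.repeat_interleave(group_size, dim=1)
        # Inverted-dropout scaling keeps the expected activation unchanged.
        return x * mask / (1.0 - self.p)
```

With inverted-dropout scaling, inference keeps all groups active, which is one plausible reading of training the ensemble of sub-networks efficiently; the thesis may define the grouping and scaling differently.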
Advisors
Sung, Youngchul (성영철)
Description
Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering, 2018.2, [iii, 24 p.]

Keywords

Multi-Agent Reinforcement Learning; Compressed feature vector; Group dropout

URI
http://hdl.handle.net/10203/266814
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=733996&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
