DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Park, Jinkyoo | - |
dc.contributor.advisor | 박진규 | - |
dc.contributor.author | Lee, Kanghoon | - |
dc.date.accessioned | 2023-06-23T19:31:10Z | - |
dc.date.available | 2023-06-23T19:31:10Z | - |
dc.date.issued | 2022 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=997794&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/308788 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): Department of Industrial and Systems Engineering, 2022.2, [iv, 26 p.] | - |
dc.description.abstract | In this paper, we study an algorithm that derives a decentralized, cooperative control strategy for an unmanned surface vehicle (USV) swarm using graph-centric multi-agent reinforcement learning (MARL). Our model first expresses the mission situation as a graph that accounts for the agents' various sensor ranges. Each USV agent then encodes its observed information into a localized embedding and derives a coordinated action through communication with surrounding agents. We also train each agent's policy to maximize the team reward, which induces cooperative behavior. Using a USV combat simulator, we show that our model outperforms conventional heuristic-based defensive strategies in the training scenarios. In addition, we show empirically that the proposed model can derive a scalable control strategy through experiments in unseen scenarios. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.title | End-to-end control of USV swarm using graph-centric multi-agent reinforcement learning | - |
dc.title.alternative | 그래프 중심 다중 에이전트 강화 학습을 이용한 무인수상정 군집 제어 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology: Department of Industrial and Systems Engineering | - |
dc.contributor.alternativeauthor | 이강훈 | - |
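The abstract describes each agent building a graph from in-range neighbors, encoding local observations into an embedding, and refining that embedding through communication. The following is a minimal illustrative sketch of that idea; the function names, the NumPy implementation, and the mean-aggregation choice are assumptions for illustration, not the thesis's actual architecture.

```python
import numpy as np

def build_adjacency(positions, comm_range):
    """Connect pairs of agents whose distance is within the communication/sensor range."""
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # No self-loops: an agent's own embedding enters via the self-weight below.
    adj = (dist <= comm_range) & ~np.eye(n, dtype=bool)
    return adj.astype(float)

def message_passing(embeddings, adj, w_self, w_msg):
    """One communication round: mix each agent's embedding with the mean of its neighbors'."""
    deg = adj.sum(axis=1, keepdims=True)
    neighbor_mean = adj @ embeddings / np.maximum(deg, 1.0)  # isolated agents get zeros
    return np.tanh(embeddings @ w_self + neighbor_mean @ w_msg)

# Example: three agents, two within range of each other, one out of range.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
adj = build_adjacency(positions, comm_range=2.0)
obs = np.random.default_rng(0).normal(size=(3, 4))   # per-agent local embeddings
h = message_passing(obs, adj, np.eye(4), 0.5 * np.eye(4))
```

Because each agent aggregates only over its in-range neighbors, the same learned weights apply regardless of swarm size, which is one plausible reading of the scalability claim in the abstract.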