Generating Multi-agent Patrol Areas by Reinforcement Learning

DC Field | Value | Language
dc.contributor.author | Park, Bumjin | ko
dc.contributor.author | Kang, Cheongwoong | ko
dc.contributor.author | Choi, Jaesik | ko
dc.date.accessioned | 2023-09-08T07:03:12Z | -
dc.date.available | 2023-09-08T07:03:12Z | -
dc.date.created | 2023-09-08 | -
dc.date.issued | 2021-10-12 | -
dc.identifier.citation | 2021 21st International Conference on Control, Automation and Systems (ICCAS), pp.104-107 | -
dc.identifier.issn | 2093-7121 | -
dc.identifier.uri | http://hdl.handle.net/10203/312362 | -
dc.description.abstract | In this paper, we design a reinforcement learning environment for distributed patrolling agents. In this partially observable environment, each agent acts in its own interest, and the non-stationarity of the multi-agent setting discourages agents from invading each other's regions. Patrolling routes for the agents are generated implicitly. We propose several variants of the environment and evaluate them with different initial agent positions. We also show how the reinforcement learning algorithm changes the distribution of agents as training progresses. | -
dc.language | English | -
dc.publisher | IEEE | -
dc.title | Generating Multi-agent Patrol Areas by Reinforcement Learning | -
dc.type | Conference | -
dc.identifier.wosid | 000750950700014 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 104 | -
dc.citation.endingpage | 107 | -
dc.citation.publicationname | 2021 21st International Conference on Control, Automation and Systems (ICCAS) | -
dc.identifier.conferencecountry | KO | -
dc.identifier.conferencelocation | Jeju Island | -
dc.identifier.doi | 10.23919/iccas52745.2021.9650047 | -
dc.contributor.localauthor | Choi, Jaesik | -
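The abstract describes a partially observable multi-agent patrol environment in which agents, each acting in their own interest, are implicitly pushed to stay out of each other's regions. A minimal illustrative sketch of such a setup is given below, assuming a gridworld with idleness-based rewards; the class name, observation window, and reward-splitting rule are our own assumptions for illustration, not the paper's actual environment.

```python
import numpy as np


class PatrolGrid:
    """Toy multi-agent patrol gridworld (an illustrative sketch, not the paper's environment).

    Each cell accumulates "idleness" (steps since last visit). Visiting a cell
    clears it and yields its idleness as reward; agents sharing a cell split the
    reward, which discourages invading another agent's region.
    """

    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up, down, left, right, stay

    def __init__(self, size=8, n_agents=2, view=1, seed=0):
        self.size, self.n_agents, self.view = size, n_agents, view
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.idleness = np.zeros((self.size, self.size))
        self.pos = [tuple(self.rng.integers(0, self.size, 2))
                    for _ in range(self.n_agents)]
        return [self._observe(i) for i in range(self.n_agents)]

    def _observe(self, i):
        # Partial observability: each agent sees only a (2*view+1)^2 idleness
        # patch centered on itself; out-of-bounds cells are marked -1.
        r, c = self.pos[i]
        v = self.view
        padded = np.pad(self.idleness, v, constant_values=-1)
        return padded[r:r + 2 * v + 1, c:c + 2 * v + 1].copy()

    def step(self, actions):
        self.idleness += 1
        for i, a in enumerate(actions):
            dr, dc = self.MOVES[a]
            r = min(max(self.pos[i][0] + dr, 0), self.size - 1)
            c = min(max(self.pos[i][1] + dc, 0), self.size - 1)
            self.pos[i] = (r, c)
        rewards = []
        for i, (r, c) in enumerate(self.pos):
            sharers = sum(p == (r, c) for p in self.pos)
            rewards.append(self.idleness[r, c] / sharers)
            self.idleness[r, c] = 0  # cell is freshly patrolled
        return [self._observe(i) for i in range(self.n_agents)], rewards
```

Under this reward, patrol routes are never specified explicitly: each agent maximizing its own return tends to settle into a distinct region of the grid, which mirrors the paper's claim that patrolling routes are generated implicitly.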
Appears in Collection
AI-Conference Papers(학술대회논문)
