DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Chang, Dong Eui | - |
dc.contributor.advisor | 장동의 | - |
dc.contributor.author | Gao, Mengyi | - |
dc.date.accessioned | 2023-06-26T19:34:38Z | - |
dc.date.available | 2023-06-26T19:34:38Z | - |
dc.date.issued | 2022 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1008375&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/310018 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2022.8, [iii, 28 p.] | - |
dc.description.abstract | Unmanned aerial vehicles (UAVs), commonly called drones, can benefit target recognition, behavior detection, search-and-rescue missions, and other surveillance and monitoring tasks. Applying deep reinforcement learning (DRL) to autonomous robotic tasks has become an important research field at the intersection of robot motion and computer vision, and has grown into one of the most active areas of learning-based and model-based control, characterized by frequent interaction with the environment. In this research, based on the simulator tool provided in [1], we use deep reinforcement learning to develop both single-agent and multi-agent policies that accomplish autonomous drone surveillance tasks in a known indoor environment. We combine visual and obstacle information to boost efficacy while keeping time consumption low. Instead of RGB or depth images, we employ single-channel segmentation masks for the detection task, which reduces processing time and increases identification accuracy. Finally, we devise a separate training and testing technique that both improves training efficiency and ensures task completion: we first train the DRL policy with discrete drone actions and without drone dynamics, and then incorporate the dynamics to control the drone continuously at test time. This method also opens a new direction for sim-to-real transfer. Our experimental results show that the trained agents detect all targets relatively quickly while maintaining a high level of safety, and the patrol completion rate exceeds 98% in both the single-agent and multi-agent tasks. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Drone Surveillance; Detection; Deep Reinforcement Learning; Vision-Based; Grid Map | - |
dc.title | Autonomous drone surveillance in a known environment using reinforcement learning | - |
dc.title.alternative | Autonomous drone surveillance technology in a detection area using reinforcement learning | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering | - |
dc.contributor.alternativeauthor | 고몽이 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.