Semantic scene recognition in adverse visual conditions for autonomous systems

Intelligent mobile robots are expected to provide convenience and improve the quality of human life by performing specific tasks for users based on information obtained by recognizing the surrounding environment. To this end, an intelligent mobile robot first needs the ability to perceive its surroundings and autonomously drive to a desired destination. To reach that destination safely without collisions, the following functionalities are required. A mobile robot must 1) recognize the types of objects around it, 2) predict the distance to those objects, and 3) recognize unexpected obstacles not seen during training. In addition, 4) it should perform all of the above in real time. To address these problems, in this paper we propose a network that simultaneously performs multiple tasks, such as semantic segmentation, stereo disparity estimation, and obstacle detection, at high computational speed. In particular, when operating a mobile robot in an outdoor environment, it is desirable to detect unexpected road hazards reliably in real time, especially under varying adverse conditions (e.g., changing weather and time of day). However, existing road-driving datasets provide large-scale images acquired in either normal or adverse scenarios only, and often do not contain road obstacles captured in the same visual domain as the other classes. To address this, we introduce a new dataset called AVOID, the Adverse Visual Conditions Dataset for real-time obstacle detection, collected in a simulated environment. AVOID consists of a large set of unexpected road obstacles located along each path, captured under various weather and time conditions. Each image is coupled with the corresponding semantic and depth maps, raw and semantic LiDAR data, and waypoints, thereby supporting most visual perception tasks.
We benchmark high-performing real-time networks on the obstacle detection task, and also propose a comprehensive multi-task network for semantic segmentation, depth estimation, and waypoint prediction, along with ablation studies. Finally, experiments confirm that our network achieves the best performance.
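Each AVOID sample, as described above, couples an image with aligned semantic and depth maps, raw and semantic LiDAR data, and waypoints. A minimal sketch of such a record is shown below; all field names and file paths are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical record for one AVOID sample; fields mirror the modalities
# listed in the abstract (RGB, semantic map, depth map, raw/semantic LiDAR,
# waypoints) plus the capture conditions. Names are illustrative only.
@dataclass
class AvoidSample:
    rgb_path: str                          # camera image for one weather/time condition
    semantic_path: str                     # per-pixel class labels
    depth_path: str                        # dense depth map aligned with the image
    lidar_path: str                        # raw LiDAR point cloud
    semantic_lidar_path: str               # LiDAR points with class labels
    waypoints: List[Tuple[float, float]]   # future path points for waypoint prediction
    weather: str                           # e.g. "rain", "fog", "clear"
    time_of_day: str                       # e.g. "noon", "night"

# Example record under the assumed layout.
sample = AvoidSample(
    rgb_path="seq01/000123_rgb.png",
    semantic_path="seq01/000123_sem.png",
    depth_path="seq01/000123_depth.png",
    lidar_path="seq01/000123.bin",
    semantic_lidar_path="seq01/000123_semlidar.bin",
    waypoints=[(1.0, 0.0), (2.0, 0.1)],
    weather="rain",
    time_of_day="night",
)
```

Grouping all modalities per frame in one record like this is what lets a single dataset serve segmentation, depth, obstacle detection, and waypoint prediction simultaneously.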
Advisors
Kim, Jong-Hwan
Description
Korea Advanced Institute of Science and Technology: School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Doctoral thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2023.2, [vi, 58 p.]

Keywords

Object recognition; Semantic segmentation; Depth estimation; Obstacle detection; Perception in adverse conditions; Multi-task learning; Intelligent mobile agent

URI
http://hdl.handle.net/10203/309085
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1030551&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
