Multi-sensor systems for robust visual perception in traffic environments

In this dissertation, we propose multi-sensor systems for robust visual perception in traffic environments, together with sensor control and visual perception algorithms. We first propose a multi-camera sensor system and its control algorithms for active high-resolution object image acquisition. Acquiring high-resolution images of the surrounding environment is essential for robust and accurate perception in traffic environments. However, obtaining high-resolution object images is difficult because the relative positions between the sensor system and surrounding objects change continuously. To tackle this problem, we propose a multi-camera sensor system and active camera viewpoint control algorithms.

Secondly, we propose a complementary sensor system for estimating accurate dense depth information of a scene. Depth is one of the most important cues for many visual perception algorithms. Conventional algorithms rely on depth sensors such as LiDAR. Although these sensors provide accurate depth measurements, the measurements are highly sparse: they cover only part of the scene, so it is difficult to estimate accurate depth values in regions without measurements. To estimate dense depth from sparse measurements, we combine complementary information from an RGB camera and a LiDAR sensor. We therefore construct an RGB-LiDAR sensor system for accurate dense depth estimation and, in addition, propose a deep learning-based non-local spatial propagation network for depth completion.

Thirdly, we propose a multi-modal sensor system for robust visual perception in changing environments. In the real world, the surrounding environment changes continuously with time, location, and weather, so information from several kinds of sensors must be combined to cope with dynamic conditions.
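As a rough illustration of the non-local spatial propagation idea for depth completion (a hypothetical minimal sketch, not the dissertation's actual network), the step below mixes each pixel's depth with values fetched at non-local offsets using per-pixel affinity weights; in the real method, the `offsets` and `affinities` would be predicted by a network rather than supplied by hand.

```python
import numpy as np

def propagation_step(depth, offsets, affinities):
    """One non-local spatial propagation step.

    depth:      (H, W) current dense depth estimate
    offsets:    list of K (dy, dx) non-local neighbor offsets
    affinities: (K + 1, H, W) per-pixel weights (index 0 is the self
                weight), assumed normalized to sum to 1 at each pixel
    """
    out = affinities[0] * depth
    for i, (dy, dx) in enumerate(offsets, start=1):
        # Fetch the neighbor value at the given offset (wrapping at the
        # borders for simplicity; a real implementation handles borders
        # explicitly).
        neighbor = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        out += affinities[i] * neighbor
    return out

# Toy usage: with uniform affinities, a constant depth map is a fixed point.
depth = np.full((4, 4), 2.0)
offsets = [(1, 0), (0, 1), (-2, 3)]
affinities = np.full((len(offsets) + 1, 4, 4), 1.0 / (len(offsets) + 1))
refined = propagation_step(depth, offsets, affinities)
```

In depth completion, such steps are iterated; pixels carrying valid sparse LiDAR measurements can be reset to their measured values after each step, so that reliable depths spread into unmeasured regions.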
For this purpose, we propose a multi-modal sensor system that consists of RGB cameras, NIR cameras, LiDARs, IMUs, and a GNSS sensor. Moreover, we propose a robust multi-modal depth estimation algorithm with geometry-aware adaptive cost volume fusion that copes with environmental changes such as illumination, weather, and time variation. The systems and algorithms proposed in this dissertation are validated and compared against previous algorithms through quantitative and qualitative experiments.
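To give a flavor of adaptive cost volume fusion (a simplified, hypothetical stand-in, not the dissertation's geometry-aware method), the sketch below fuses per-modality cost volumes with per-pixel confidence weights and regresses depth with a soft-argmin over a set of depth hypotheses; in practice, the weights would be predicted from geometry and image cues rather than fixed.

```python
import numpy as np

def fuse_cost_volumes(cost_a, cost_b, w_a, w_b):
    """Fuse two (D, H, W) cost volumes with per-pixel (H, W) weights.

    The weights play the role of adaptive per-modality confidences and
    are assumed to already sum to 1 at every pixel.
    """
    return w_a[None] * cost_a + w_b[None] * cost_b

def soft_argmin_depth(cost, hypotheses):
    """Regress depth from a (D, H, W) cost volume via soft-argmin."""
    prob = np.exp(-cost)
    prob /= prob.sum(axis=0, keepdims=True)
    return (prob * hypotheses[:, None, None]).sum(axis=0)

# Toy usage: modality A is confident the depth is 2.0, modality B is
# uninformative (flat cost), so fully weighting A recovers depth 2.0.
hypotheses = np.array([1.0, 2.0, 3.0])
cost_a = np.full((3, 2, 2), 10.0); cost_a[1] = 0.0
cost_b = np.zeros((3, 2, 2))
w_a, w_b = np.ones((2, 2)), np.zeros((2, 2))
fused = fuse_cost_volumes(cost_a, cost_b, w_a, w_b)
depth_map = soft_argmin_depth(fused, hypotheses)
```

Adaptive weighting of this kind lets the system lean on whichever modality is reliable under the current conditions, e.g., favoring NIR cost volumes at night and RGB cost volumes in daylight.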
Advisors
Kweon, In So
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Doctoral thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2021.2, [v, 90 p.]

Keywords

Multi-Sensor; High-Resolution; Depth Estimation; Camera Viewpoint Control; Non-Local Spatial Propagation; Changing Environment; Sensor Fusion

URI
http://hdl.handle.net/10203/295604
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=956651&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
