Visual perception for autonomous driving (자율주행을 위한 시각 인지 기술)

Sensing, mapping, and driving-policy techniques are essential for self-driving cars and robots to reach their destinations. These can be divided into perception and action control, and visual perception must precede action control. This dissertation focuses on sensing and mapping, the components most closely tied to visual perception. Using data-driven deep-learning methods, we overcome the limitations of physical sensor properties to secure sensing both day and night, as well as fast and accurate mapping. The research centers on vision-based sensing that works at all times of day and night, and on mapping technology for fast, lightweight localization.

First, we propose a resolution and quality enhancement technique for thermal cameras. Blur distortion in thermal images is relatively high compared with color cameras, and the resolution, constrained by detector cost, is relatively low. We propose an up-sampling technique that creates a high-resolution image from a low-resolution one, together with a technique that improves image quality by amplifying details such as edges and textures corresponding to the high-frequency components of the high-resolution image.

Second, we propose a technique to acquire dense depth information from a single image regardless of day or night. Stereo cameras and LiDAR/radar sensors are commonly used to obtain depth indoors and outdoors. However, LiDAR and radar have difficulty providing dense depth information because of their physical properties, and while stereo cameras provide dense depth during the day, they struggle to produce reliable depth at night for lack of light. To overcome these physical limitations of existing sensors, this work proposes a learning-based dense depth sensor that does not depend on lighting conditions.
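The detail-amplification idea described above can be illustrated with a classical unsharp-masking sketch. This is a hand-written stand-in, not the dissertation's learned model: a low-resolution image is up-sampled, and the high-frequency residual (image minus a low-pass blurred copy) is amplified to strengthen edges and texture.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x up-sampling (a placeholder for a learned up-sampler)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def box_blur(img, k=3):
    """Simple k x k box blur used as the low-pass filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance_details(img, alpha=0.8):
    """Amplify high-frequency components (edges/texture):
    enhanced = img + alpha * (img - low_pass(img))."""
    img = img.astype(float)
    detail = img - box_blur(img)          # high-frequency residual
    return np.clip(img + alpha * detail, 0.0, 255.0)
```

On a flat region the residual is zero, so only edges and texture are boosted; `alpha` controls how strongly high-frequency detail is amplified.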
Third, we propose a local invariant feature that is lightweight, efficient, and performs well. Local invariant features are needed to find geometric relationships between objects in images, and a lightweight, efficient design is essential for real-time object recognition on low-resource devices such as robots and smartphones. This study combines the observation that the ordering of pixel brightness is robust to various in-image deformations (scale/rotation changes, noise/blur distortion, etc.) with the binary pattern tests commonly used in binary descriptors.

Fourth, we propose an efficient clustering algorithm that can update the map online when infrastructure changes are detected while driving. Maps used to localize autonomous vehicles, such as Google Street View and the Here HD map, are pre-recorded and managed in the Internet cloud. Over time, however, these maps require partial updates as new buildings and roads appear. For a self-driving car to update the map and its own position simultaneously, the development of an efficient algorithm is an important research topic; this work proposes an efficient clustering algorithm for updating the map online.
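The combination of brightness ordering and binary pattern tests can be sketched with a BRIEF-style toy descriptor. This is an illustrative assumption, not the dissertation's actual feature: each bit records whether one sampled pixel is brighter than another, so the descriptor is unchanged under any monotonic brightness change, which is exactly the robustness that brightness ordering provides.

```python
import numpy as np

def binary_descriptor(patch, pairs):
    """BRIEF-style descriptor: one bit per sampled pixel pair,
    set when the first pixel is brighter than the second.
    Brightness *ordering* is preserved under monotonic illumination
    changes, which is what makes the bits robust."""
    bits = [1 if patch[y1, x1] > patch[y2, x2] else 0
            for (y1, x1), (y2, x2) in pairs]
    return np.array(bits, dtype=np.uint8)

def hamming(d1, d2):
    """Matching cost between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(16, 16)).astype(float)
# Random comparison pairs inside the patch (an assumed sampling pattern).
pairs = [((rng.integers(16), rng.integers(16)),
          (rng.integers(16), rng.integers(16))) for _ in range(64)]

d_orig = binary_descriptor(patch, pairs)
# A monotonic brightness change leaves the pixel ordering intact.
d_bright = binary_descriptor(patch * 1.5 + 10, pairs)
```

Because matching reduces to a Hamming distance over a short bit string, descriptors like this are cheap to compute and compare, which is why this style of feature suits low-resource devices.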
Advisors
Kweon, In So (권인소)
Description
Korea Advanced Institute of Science and Technology (KAIST) : Interdisciplinary Program in Robotics
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST) : Interdisciplinary Program in Robotics, 2018.2, [xi, 115 p.]

Keywords

Autonomous driving; visual perception; all-day vision; sensing; mapping; image enhancement; depth estimation; local invariant feature; efficient clustering

URI
http://hdl.handle.net/10203/264611
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734279&flag=dissertation
Appears in Collection
RE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
