High-quality visual sensing via sensor fusion for intelligent robotic systems

DC Field / Value
dc.contributor.advisor: Kweon, In So
dc.contributor.advisor: 권인소
dc.contributor.author: Shim, Inwook
dc.contributor.author: 심인욱
dc.date.accessioned: 2018-05-23T19:33:52Z
dc.date.available: 2018-05-23T19:33:52Z
dc.date.issued: 2017
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=675696&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/241796
dc.description: Thesis (Ph.D.) - KAIST: Interdisciplinary Program of Future Vehicle, 2017.2, [vi, 99 p.]
dc.description.abstract: Robot technologies are expected to significantly improve human safety and convenience by substituting for human workers. Many robot systems, such as autonomous vehicles and humanoid robots, have been implemented and presented to the public through press media and well-known events such as the DARPA challenges. These robot systems are expected to carry out diverse tasks in the human world, and sensing and recognizing surrounding environments is one of their fundamental abilities; accordingly, diverse sensors have been developed to serve as "robot senses". Among them, image and depth sensors are widely used and provide some of the most useful information about surrounding environments, and a large amount of research in computer and robot vision builds on these sensors. However, when such algorithms are deployed to recognize surrounding environments outdoors, their performance often falls below the results reported in the literature. In this dissertation, to prevent this performance degradation, we propose robust visual sensing methods based on sensor fusion that obtain high-quality visual information for robotic sensing, together with actual implementation cases. First, we present a new method to automatically adjust camera exposure for capturing high-quality images by exploiting the relationship between gradient information and camera exposure. Since most robot vision algorithms rely heavily on low-level image features, we use gradient information to determine a proper exposure level, so that the camera captures important image features robustly under varying illumination conditions. Additionally, we introduce a new control algorithm for multi-camera systems that achieves both brightness consistency between adjacent cameras and a proper exposure level for each camera.
We implement our system with off-the-shelf machine vision cameras and demonstrate the effectiveness of our algorithms on several practical applications, such as pedestrian detection, visual odometry, surround-view imaging, panoramic imaging, and stereo matching. Second, we present a high-quality depth generation method that iteratively propagates unstructured sparse depth points by fusing the sharp edge boundaries of the depth data and the corresponding image. Our depth processing method explicitly handles noisy or unreliable depth observations, refines the depth map using an image- and depth-guidance scheme, and filters out unreliable depth points using a confidence map. The confidence map is converted into a binary mask by a proposed self-learning framework that automatically generates a labeled training dataset. We evaluate our depth generation method quantitatively and qualitatively on several synthetic and real-world datasets. Finally, we present the intelligent robotic systems whose development I participated in, which manage and fuse information obtained from various detection algorithms. One is the KAIST autonomous driving system, named EURECAR, and the other is the KAIST humanoid system, named DRC-HUBO+. These two systems integrate various vision-based detection algorithms through a modular network architecture together with the proposed high-quality visual sensing system. The EURECAR system was evaluated on a challenging real track with a set of traffic signals at the Hyundai Autonomous Vehicle Competition (AVC) 2012 and performed well, and DRC-HUBO+ demonstrated its performance at the DARPA Robotics Challenge (DRC) Finals 2015, where the robot successfully carried out all tasks and we took first place with a full score.
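The gradient-based exposure idea in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the thesis's actual controller: the function names (`gradient_score`, `pick_exposure`), the noise threshold `delta`, and the brute-force search over a fixed set of candidate exposures (the real method adjusts exposure continuously) are all hypothetical.

```python
import numpy as np

def gradient_score(image, delta=0.06):
    """Score an image by its total gradient content.

    Hypothetical sketch: the thesis ties exposure control to low-level
    gradient information; the exact metric and weighting are assumed here.
    """
    gy, gx = np.gradient(image.astype(np.float64) / 255.0)
    mag = np.hypot(gx, gy)
    # Count only gradients above a small noise threshold (assumed value).
    return mag[mag >= delta].sum()

def pick_exposure(capture, exposures):
    """Return the candidate exposure whose image maximizes the score.

    `capture` is a user-supplied callable: exposure -> grayscale array.
    """
    scored = [(gradient_score(capture(e)), e) for e in exposures]
    return max(scored)[1]
```

Under- and over-exposed images lose gradient content to noise and clipping respectively, so maximizing this score favors exposures that preserve low-level image features.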
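Similarly, the image-guided propagation of sparse depth points can be sketched as simple edge-aware diffusion. This is a simplified stand-in of my own devising (4-neighbor averaging with intensity-difference weights), not the thesis's method, which additionally uses confidence maps and a self-learning framework; `propagate_depth`, `iters`, and `sigma` are assumed names and parameters.

```python
import numpy as np

def propagate_depth(sparse_depth, image, iters=50, sigma=0.1):
    """Iteratively fill a sparse depth map, guided by image intensities.

    Simplified sketch: observed depth points (nonzero) are held fixed,
    and every other pixel becomes a weighted average of its 4-neighbors,
    with weights that shrink across strong image edges so depth does not
    bleed between objects. np.roll wraps at borders, which is acceptable
    only in this toy illustration.
    """
    depth = sparse_depth.astype(np.float64).copy()
    known = sparse_depth > 0
    img = image.astype(np.float64) / 255.0
    filled = known.copy()
    for _ in range(iters):
        acc = np.zeros_like(depth)
        wsum = np.zeros_like(depth)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nd = np.roll(depth, (dy, dx), axis=(0, 1))   # neighbor depth
            nf = np.roll(filled, (dy, dx), axis=(0, 1))  # neighbor validity
            ni = np.roll(img, (dy, dx), axis=(0, 1))     # neighbor intensity
            # Edge-aware weight: valid neighbors with similar intensity count more.
            w = nf * np.exp(-((img - ni) ** 2) / (2.0 * sigma ** 2))
            acc += w * nd
            wsum += w
        new = np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)
        depth = np.where(known, depth, new)  # keep observed points fixed
        filled = known | (wsum > 0)
    return depth
```

Each iteration grows the filled region outward by one pixel, so depth spreads from the sparse measurements while the image gradient acts as a barrier at object boundaries.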
dc.language: eng
dc.publisher: KAIST (한국과학기술원)
dc.subject: robot vision
dc.subject: image acquisition
dc.subject: depth processing
dc.subject: real-time computer vision
dc.subject: self-learning
dc.subject: automated robotic system
dc.subject: intelligent robotic system
dc.subject: 로봇 비전 (robot vision)
dc.subject: 영상 정보 처리 (image information processing)
dc.subject: 깊이 정보 처리 (depth information processing)
dc.subject: 실시간 컴퓨터 비전 기술 (real-time computer vision techniques)
dc.subject: 자가 학습 (self-learning)
dc.subject: 자율 로봇 시스템 (autonomous robot system)
dc.subject: 지능형 로봇 시스템 (intelligent robot system)
dc.title: High-quality visual sensing via sensor fusion for intelligent robotic systems
dc.title.alternative: 센서 융합을 이용한 지능 로봇의 고품질 시각 인지 방법
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: KAIST: Interdisciplinary Program of Future Vehicle
Appears in Collection: PD-Theses_Ph.D. (박사논문)
Files in This Item: There are no files associated with this item.
