Robust SLAM and sensor calibration frameworks for robots using 2D LiDARs and cameras

DC Field / Value
dc.contributor.advisor: Kweon, In So
dc.contributor.advisor: 권인소
dc.contributor.author: Choi, Dong-Geol
dc.contributor.author: 최동걸
dc.date.accessioned: 2017-03-28T07:13:59Z
dc.date.available: 2017-03-28T07:13:59Z
dc.date.issued: 2016
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=648107&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/221107
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), Interdisciplinary Program of Robotics, 2016.2, [vii, 97 p.]
dc.description.abstract: SLAM (simultaneous localization and mapping) is an important problem because of the rapid market growth of autonomous navigation robots for living, working, military, and other applications. Although many researchers have studied the problem, few commercial robots are capable of autonomous navigation due to a number of remaining challenges. The error-accumulation problem can be mitigated using loop closures or global sensors, and dynamic environments can be handled by adopting object-detection techniques. The problem of 'lack of features', however, is difficult to solve, because most existing pose-estimation algorithms assume that enough features are available for further processing. In this dissertation, we mainly address SLAM in featureless environments, and the geometric calibration among 2D LiDARs and cameras required for the proposed methodologies. First, we present a robust 2D SLAM framework using a 2D LiDAR and two cameras. Edges and corners play an important role as 'features' in two-dimensional pose estimation with a 2D LiDAR, so such methods fail when the robot is in a featureless environment such as a long corridor. We propose a solution to the motion ambiguity of 2D pose estimation in featureless environments using images and a vertical-wall assumption, and apply it to the autonomous homing of robots with efficient strategies for mapping and returning. Second, we present a practical means of extrinsic calibration between a camera and a 2D LiDAR whose fields of view do not overlap. Calibrating a non-overlapping camera-LiDAR system with conventional methods requires attaching an extra sensor, such as a camera or a 3D LiDAR. We propose two methods that calibrate a non-overlapping camera-LiDAR system directly, without any extra sensor. For each method, an initial solution for the relative pose between the camera and the 2D LiDAR is computed under a reasonable assumption about the geometric structure of the scene, and is then refined via non-linear optimization, so the methods work even if the assumption is not met perfectly. Third, we introduce a new 3D SLAM framework using 2D LiDARs. The image features used by most 3D SLAM algorithms are robust in general, but these algorithms fail when the illumination is poor or the number of image features is insufficient. Instead of image features, we use geometric structures scanned by 2D LiDARs. We present a new method to estimate the pose of a LiDAR system, with a single LiDAR or multiple LiDARs, without any additional sensor. We scan three known planes in the target scene to obtain three linear measurements; from these three line-plane correspondences, the proposed algorithm provides a closed-form solution for the sensor-to-world transformation. Finally, we propose a novel method of extrinsic calibration between two 2D LiDARs without any additional sensor or artificial landmark. Conventional methods use additional image sensors or artificial landmarks at known locations, because it is hard to find correspondences among the scans of 2D LiDARs moving in 3D space. By scanning two orthogonal planes, we exploit the coplanarity of the scan points on each plane and the orthogonality of the planes. We also derive two degenerate cases, one related to the plane poses and the other caused by the relative pose between the two LiDARs.
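The coplanarity constraint behind the final contribution can be illustrated with a small simulation: points scanned by one LiDAR, mapped into the other LiDAR's frame with the correct extrinsics, must lie on the plane fitted to the other LiDAR's scan of the same wall. This is a minimal numpy sketch under simulated, noiseless data, not the thesis implementation; the ground-truth pose and plane here are hypothetical examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_plane(pts):
    """Least-squares plane fit: returns unit normal n and offset d with n.x = d."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]                       # direction of least variance = plane normal
    return n, float(n @ c)

def coplanarity_rmse(R, t, pts_a, n_b, d_b):
    """RMS point-to-plane distance of LiDAR-A points mapped into LiDAR-B's frame."""
    mapped = pts_a @ R.T + t         # x_b = R x_a + t, applied row-wise
    return float(np.sqrt(np.mean((mapped @ n_b - d_b) ** 2)))

# Simulate: both LiDARs scan the same wall (the plane z = 1 in B's frame).
# Hypothetical ground-truth extrinsics: 10-degree rotation about x, plus a shift.
ang = np.deg2rad(10.0)
R_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(ang), -np.sin(ang)],
                   [0.0, np.sin(ang),  np.cos(ang)]])
t_true = np.array([0.2, -0.1, 0.3])

pts_b = np.column_stack([rng.uniform(-1, 1, 50),
                         rng.uniform(-1, 1, 50),
                         np.ones(50)])         # wall points in B's frame
pts_a = (pts_b - t_true) @ R_true              # same points in A's frame: x_a = R^T (x_b - t)

n_b, d_b = fit_plane(pts_b)
print(coplanarity_rmse(R_true, t_true, pts_a, n_b, d_b))   # ~0 at the true pose
print(coplanarity_rmse(np.eye(3), t_true, pts_a, n_b, d_b))  # nonzero at a wrong pose
```

In the actual method this residual (together with the orthogonality of a second plane) would be minimized over the unknown relative pose; the sketch only verifies that the residual vanishes at the correct extrinsics and grows when the rotation is wrong.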
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: SLAM
dc.subject: sensor calibration
dc.subject: 2D LiDAR
dc.subject: camera
dc.subject: sensor fusion
dc.subject: 위치추정 (localization)
dc.subject: 센서 보정 (sensor calibration)
dc.subject: 2차원 라이다 (2D LiDAR)
dc.subject: 카메라 (camera)
dc.subject: 센서 융합 (sensor fusion)
dc.title: Robust SLAM and sensor calibration frameworks for robots using 2D LiDARs and cameras
dc.title.alternative: 2차원 라이다와 카메라를 사용하는 로봇을 위한 강인한 위치 추정 및 센서 정합법
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST), Interdisciplinary Program of Robotics
Appears in Collection: RE-Theses_Ph.D. (박사논문)
Files in This Item: There are no files associated with this item.
