SLAM (simultaneous localization and mapping) is an important problem, driven by the rapidly growing market for autonomous navigation robots in domestic, industrial, military, and other applications.
Although many researchers have studied the problem, few commercial robots are capable of autonomous navigation, owing to a number of remaining challenges.
The error-accumulation problem may be addressed using loop closures or global sensors, and dynamic environments may be handled by adopting object detection techniques.
However, the problem of a `lack of features' is difficult to solve, because most existing pose estimation algorithms assume that enough features are available for subsequent processing.
In this dissertation, we mainly address SLAM in featureless environments, together with the geometric calibration among 2D LiDARs and cameras required by the proposed methods.
First, we present a robust 2D SLAM framework using a 2D LiDAR and two cameras.
Edges and corners play an important role as `features' in two-dimensional pose estimation using a 2D LiDAR; consequently, such methods fail when the robot is in a featureless environment such as a long corridor.
We propose a solution to the motion ambiguity of 2D pose estimation in featureless environments, using images and a vertical-wall assumption.
It is applied to the autonomous homing of robots, with efficient strategies for mapping and returning.
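The motion ambiguity in a corridor can be seen in a toy example. The sketch below is only an illustration on synthetic data (the corridor model and numbers are hypothetical, not taken from the dissertation): when the environment is two parallel walls, translation along the corridor leaves the scan-to-model residual unchanged, so scan matching alone cannot observe it.

```python
import numpy as np

# A long corridor seen by a 2D LiDAR: two parallel walls at y = +1 and y = -1.
xs = np.linspace(-2.0, 2.0, 50)
scan = np.vstack([np.column_stack([xs,  np.ones_like(xs)]),
                  np.column_stack([xs, -np.ones_like(xs)])])

def wall_residual(points):
    """Point-to-model distance for the corridor model |y| = 1."""
    return np.abs(np.abs(points[:, 1]) - 1.0)

# Translating the scan along the corridor leaves the residual exactly zero,
# so this motion component is unobservable from the scan alone ...
along = scan + np.array([0.7, 0.0])
# ... while a lateral translation is immediately visible in the residual.
lateral = scan + np.array([0.0, 0.1])

print(wall_residual(along).max(), wall_residual(lateral).max())
```

This unobservable direction is exactly what the proposed framework disambiguates with image information and the vertical-wall assumption.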
Second, we present a practical means of extrinsic calibration between a camera and a 2D LiDAR whose fields of view do not overlap. Conventional methods require attaching an extra sensor, such as an additional camera or a 3D LiDAR, to calibrate a non-overlapping camera-LiDAR system. We propose two methods that calibrate such a system directly, without any extra sensor. In each method, an initial estimate of the relative pose between the camera and the 2D LiDAR is computed under a reasonable assumption about the geometric structure of the scene; this estimate is then refined via non-linear optimization, even when the assumption is not perfectly satisfied.
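The initialize-then-refine pattern can be sketched in a simplified 2D analogue. This is an illustration of the pattern only, on hypothetical synthetic data, not the proposed calibration method: the initial solution assumes zero rotation, and the non-linear refinement still recovers the correct pose when that assumption is violated.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
src = rng.uniform(-1.0, 1.0, (20, 2))          # points in the first frame
theta_true, t_true = 0.15, np.array([0.4, -0.2])
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
dst = src @ R_true.T + t_true                  # same points in the second frame

# Step 1: closed-form initial guess under the simplifying assumption that
# there is no rotation (the assumption is only approximately true here).
t0 = (dst - src).mean(axis=0)

# Step 2: non-linear refinement of the full pose from the initial guess.
def residuals(x):
    c, s = np.cos(x[0]), np.sin(x[0])
    Rx = np.array([[c, -s], [s, c]])
    return (src @ Rx.T + x[1:] - dst).ravel()

sol = least_squares(residuals, np.array([0.0, *t0]))
```

Even though the initialization ignores rotation, it places the optimizer close enough to the true pose for the refinement to converge.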
Third, we introduce a new 3D SLAM framework using 2D LiDARs.
Image features used by most 3D SLAM algorithms are robust in general; however, these algorithms fail when the lights go out or the number of image features is insufficient.
Instead of image features, we use geometric structures scanned by 2D LiDARs.
We present a new method to estimate the pose of a LiDAR system consisting of one or more 2D LiDARs, without any additional sensor.
We scan three known planes in the target scene to obtain three line measurements. From these three line-plane correspondences, the proposed algorithm provides a closed-form solution for the sensor-to-world transformation.
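The line-plane constraint can be checked numerically. The sketch below is a generic least-squares solver on synthetic data, not the closed-form algorithm derived in the dissertation; it assumes three known, mutually orthogonal planes through the origin and simulated scan points expressed in the sensor frame.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Three known planes n . p = d (here: the coordinate planes).
planes = [(np.array([1.0, 0.0, 0.0]), 0.0),
          (np.array([0.0, 1.0, 0.0]), 0.0),
          (np.array([0.0, 0.0, 1.0]), 0.0)]

# Ground-truth sensor-to-world pose, to be recovered.
R_true = Rotation.from_euler('zyx', [20, -10, 5], degrees=True)
t_true = np.array([0.5, -0.3, 1.2])

# Simulate scan points on each plane, expressed in the sensor frame.
rng = np.random.default_rng(0)
data = []
for n, d in planes:
    basis = np.linalg.svd(n.reshape(1, 3))[2][1:]     # two in-plane directions
    p_world = rng.uniform(-1.0, 1.0, (10, 2)) @ basis + d * n
    data.append((R_true.inv().apply(p_world - t_true), n, d))

# Residual: each scan point, mapped to the world, must lie on its plane.
def residuals(x):
    R, t = Rotation.from_rotvec(x[:3]), x[3:]
    return np.concatenate([(R.apply(ps) + t) @ n - d for ps, n, d in data])

sol = least_squares(residuals, np.zeros(6))
```

The three line-plane correspondences over-determine the six pose parameters, which is why a unique sensor-to-world transformation can be recovered.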
Finally, we propose a novel method of extrinsic calibration between two 2D LiDARs without any additional sensor or artificial landmark.
Conventional methods use additional image sensors or artificial landmarks at known locations, because it is hard to find correspondences among the scan data of 2D LiDARs moving in 3D space.
By scanning two orthogonal planes, we utilize the coplanarity of the scan points on each plane and the orthogonality of the planes.
We also derive two degenerate cases, one related to the poses of the planes and the other caused by the relative pose between the two LiDARs.
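The two constraints can be sanity-checked on synthetic data. The sketch below is not the calibration solver, and the plane placement is hypothetical; it only verifies that the coplanarity and orthogonality residuals vanish at the true relative pose between the two LiDARs.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Ground-truth pose of LiDAR B expressed in LiDAR A's frame.
R_ab = Rotation.from_euler('xyz', [5, 30, -15], degrees=True)
t_ab = np.array([0.2, 0.0, -0.1])

# Two orthogonal planes in A's frame: n . p = d.
planes = [(np.array([1.0, 0.0, 0.0]), 2.0),
          (np.array([0.0, 1.0, 0.0]), 1.5)]

rng = np.random.default_rng(1)
def points_on_plane(n, d, k=8):
    basis = np.linalg.svd(n.reshape(1, 3))[2][1:]   # two in-plane directions
    return rng.uniform(-1.0, 1.0, (k, 2)) @ basis + d * n

def fit_plane(pts):
    """Least-squares plane through pts: unit normal n and offset d."""
    c = pts.mean(axis=0)
    n = np.linalg.svd(pts - c)[2][-1]               # direction of least variance
    return n, n @ c

def residuals(R, t):
    res, normals = [], []
    for n, d in planes:
        pa = points_on_plane(n, d)                     # A's scan points (A frame)
        pb = R.inv().apply(points_on_plane(n, d) - t)  # B's scan points (B frame)
        merged = np.vstack([pa, R.apply(pb) + t])      # map B's points into A
        nf, df = fit_plane(merged)
        res.append(merged @ nf - df)                   # coplanarity residual
        normals.append(nf)
    res.append([normals[0] @ normals[1]])              # orthogonality residual
    return np.concatenate(res)
```

With a wrong relative pose, B's points mapped into A's frame no longer lie on the planes fitted to the merged scans, so these residuals become non-zero and can drive an optimizer toward the correct calibration.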