Simultaneous Localization and Mapping (SLAM) is the problem of estimating a robot’s pose while it moves through an unknown environment and simultaneously building a map of that environment. It is one of the most essential techniques for the autonomous navigation of mobile robots: intelligent service robots must adapt to new surroundings in order to provide proper services.
In SLAM frameworks, the distance and angle between the robot and its surrounding environment are the most important measurements. In general, range sensors such as infrared, ultrasonic, and laser sensors are used to obtain depth information, while vision sensors provide rich information about the environment. Recently, Red Green Blue - Depth (RGB-D) sensors, which acquire a depth image and a color image simultaneously, have become widely used. An RGB-D sensor obtains not only a color image for visual information but also a depth image representing the distance to every pixel in the image.
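As a brief illustration of how such a depth image is used, a pixel with a metric depth value can be converted into a 3-D point in the camera frame by standard pinhole back-projection. The intrinsic parameters below (fx, fy, cx, cy) are assumed placeholder values typical of a Kinect-class sensor, not calibration results from this work:

```python
import numpy as np

# Assumed pinhole intrinsics for a 640x480 Kinect-class RGB-D sensor
# (placeholder values for illustration only).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point in pixels

def backproject(u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters to a 3-D camera-frame point."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])
```

Applying this to every valid depth pixel yields a point cloud in the camera frame, which is the geometric input that the color-image landmarks are associated with.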
Visual SLAM refers to a SLAM framework that obtains landmark information from color images. In general, when a mobile robot explores an unknown environment, the following processes are iterated continuously for autonomous navigation. First, the robot estimates its position and extracts landmarks to build up a map of the environment. Next, the landmark locations are defined relative to the robot’s own pose. Finally, the landmark position information is used to refine the robot’s location more accurately.
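The iterated estimate/observe/refine cycle above can be sketched with a deliberately simplified 1-D toy example. The simple blending update and the gain value below are illustrative assumptions, not the estimator proposed in this dissertation:

```python
# Toy 1-D illustration of one SLAM iteration: predict the pose from
# odometry, locate the landmark from the new pose, then refine both
# estimates. All constants are illustrative only.
def slam_step(pose_est, landmark_est, odometry, range_meas, gain=0.5):
    # 1) Predict the robot pose from odometry.
    pose_pred = pose_est + odometry
    # 2) Landmark position implied by the predicted pose and the range measurement.
    landmark_obs = pose_pred + range_meas
    # 3) Refine the landmark estimate by blending old estimate and observation.
    landmark_new = landmark_est + gain * (landmark_obs - landmark_est)
    # 4) Refine the pose using the pose implied by the refined landmark.
    pose_new = pose_pred + gain * ((landmark_new - range_meas) - pose_pred)
    return pose_new, landmark_new
```

Iterating this step over a trajectory drives the residual between the predicted landmark observation and the stored landmark estimate toward zero, i.e. the pose and map estimates become mutually consistent, which is the essence of the refinement loop described above.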
In this dissertation, we present an RGB-D sensor-based visual SLAM framework for a mobile robot in indoor environments. We focus on two issues: feature detection and matching for visual SLAM, and a dynamic landmark management methodology that improves the performance of the proposed framework.
As the first issue, we propose an adaptive threshold approach for the visual SLAM framework, which defines landmarks with Speeded Up Robust Features (SURF). In order to utilize the SURF detection and ma...