This thesis addresses the problems of vision-based simultaneous localization and mapping (SLAM) and moving object tracking in dynamic environments.
SLAM is of prime importance for autonomous robot navigation.
The robot typically starts at an unknown location with no a priori knowledge of landmark locations.
From relative observations of landmarks, it simultaneously computes an estimate of its own location and an estimate of the landmark locations.
As it continues to move, the robot builds a complete map of landmarks and uses it to provide continuous estimates of its location.
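The joint estimation described above can be illustrated with a deliberately minimal one-dimensional EKF-SLAM sketch (not the thesis' method): the state stacks the robot position and a single landmark position, and each relative observation refines both jointly. All numeric values here are invented for the example.

```python
import numpy as np

# 1-D EKF-SLAM toy: state mu = [robot position, landmark position].
# Each observation z = landmark - robot corrects both entries jointly.
rng = np.random.default_rng(0)

true_robot, true_landmark = 0.0, 10.0
mu = np.array([0.0, 0.0])      # initial estimate; landmark unknown
P = np.diag([0.0, 1e6])        # huge initial landmark uncertainty
Q, R = 0.01, 0.04              # motion / measurement noise variances
H = np.array([[-1.0, 1.0]])    # observation model: z = landmark - robot

for _ in range(50):
    u = 0.5                                        # commanded motion
    true_robot += u + rng.normal(0, np.sqrt(Q))    # simulate the world
    # Predict: only the robot moves; its uncertainty grows.
    mu[0] += u
    P[0, 0] += Q
    # Update from a noisy relative observation of the landmark.
    z = true_landmark - true_robot + rng.normal(0, np.sqrt(R))
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T / S                                # Kalman gain (2x1)
    mu = mu + (K * (z - (mu[1] - mu[0]))).ravel()
    P = (np.eye(2) - K @ H) @ P

print(mu[1])   # landmark estimate, close to the true value 10.0
```

Because the robot's start position anchors the map frame, the landmark estimate converges quickly while robot and landmark errors stay correlated, which is exactly the coupling SLAM exploits.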
Visual SLAM uses only cameras to build up a map of the environment and to estimate the robot location.
Over the last decade, many visual SLAM approaches using various types of cameras (e.g., monocular, stereo, and omnidirectional) have shown remarkable results in both indoor and outdoor environments.
However, because most visual SLAM approaches assume that the environment is static, the following critical problems remain in dynamic environments:
First, motion estimation becomes inaccurate because of measurement errors.
Second, motion estimation also becomes inaccurate in dynamic environments because image features on moving objects corrupt the motion estimate.
Third, data association (finding correspondences between map landmarks and robot sensor measurements) becomes challenging because the number of map landmarks stored in the robot's database grows large.
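A common baseline for the data-association step is nearest-neighbour matching with a chi-square gate: each measurement is matched to the stored landmark with the smallest Mahalanobis distance, or left unmatched if nothing falls inside the gate. The sketch below is a generic illustration of that idea, not the thesis' method; the landmark positions and innovation covariance are invented.

```python
import numpy as np

GATE = 5.99   # chi-square 95% threshold for 2 degrees of freedom

landmarks = np.array([[2.0, 1.0], [8.0, 3.0], [5.0, 7.0]])  # map (x, y)
S = np.array([[0.1, 0.0], [0.0, 0.1]])   # assumed innovation covariance
S_inv = np.linalg.inv(S)

def associate(z):
    """Return the index of the matched landmark, or None if gated out."""
    diffs = landmarks - z                               # innovations
    d2 = np.einsum('ni,ij,nj->n', diffs, S_inv, diffs)  # Mahalanobis^2
    best = int(np.argmin(d2))
    return best if d2[best] < GATE else None

print(associate(np.array([7.9, 3.2])))    # matches landmark index 1
print(associate(np.array([20.0, 20.0])))  # far from everything -> None
```

Because every measurement is tested against every landmark, this naive scan scales linearly with map size, which is why large maps make data association costly and motivate smarter indexing or map management.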
To address the first problem, we propose a robust ego-motion estimation method based on data-driven MCMC (Markov chain Monte Carlo) sampling.
A visual odometry prior has been widely used as the process model in the SLAM formulation, and it improves SLAM performance. However, modeling the uncertainty of the incremental motions estimated by visual odometry is especially difficult under challenging conditions, such as erratic motion.
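To make the sampling idea concrete, the toy below draws posterior samples of a one-dimensional incremental translation with plain Metropolis-Hastings. This is only an illustrative sketch of MCMC motion sampling, not the thesis' data-driven algorithm; the feature displacements and noise level are invented.

```python
import numpy as np

# Metropolis-Hastings over a 1-D incremental translation dt, scored
# against hypothetical feature displacements (all values invented).
rng = np.random.default_rng(1)
displacements = np.array([0.48, 0.52, 0.50, 0.47, 0.53])  # feature flow
sigma = 0.05                                              # assumed noise

def log_post(dt):
    # Gaussian log-likelihood of the observed displacements given dt.
    return -0.5 * np.sum((displacements - dt) ** 2) / sigma**2

dt = 0.0                 # deliberately poor initial guess
samples = []
for _ in range(3000):
    prop = dt + rng.normal(0, 0.05)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(dt):
        dt = prop                      # accept the proposed motion
    samples.append(dt)

est = np.mean(samples[1000:])   # posterior mean after burn-in
print(est)                      # close to the displacement mean, 0.50
```

The spread of the retained samples gives a nonparametric picture of the motion uncertainty, which is exactly what is hard to capture with a fixed Gaussian noise model under erratic motion.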
For the particle-based model representation, it can represent t...