Robust 3-D Visual SLAM in a Large-scale Environment

Motion estimation approaches enable robust prediction of successive camera poses even when the camera undergoes erratic motion; under such conditions, robust prediction is especially difficult for a constant-velocity model. However, motion estimation inevitably introduces pose errors, which in turn produce an inconsistent map. To address this problem, we propose a novel 3-D visual SLAM approach that performs both motion estimation and stochastic filtering, combining visual odometry with Rao-Blackwellized particle filtering. First, to keep the process noise and the measurement noise independent (they are in fact dependent when a single sensor is used), we divide the observations (i.e., image features) into two categories: common features, observed in consecutive key-frame images, and new features, detected only in the current key-frame image. In addition, we propose a key-frame SLAM with a data-driven proposal distribution to reduce error accumulation. We demonstrate the accuracy of the proposed method in terms of the consistency of the global map.
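The feature-splitting step described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: features in the current key-frame are partitioned into "common" features (also matched in the previous key-frame, usable for visual-odometry motion estimation) and "new" features (treated as fresh measurements in the particle-filter update). The scalar descriptors and the function name `split_features` are illustrative assumptions; a real system would match e.g. SIFT or ORB descriptors with a ratio test.

```python
def split_features(prev_descriptors, curr_descriptors, match_threshold=0.7):
    """Partition current key-frame features into common and new index sets.

    A feature is "common" if its nearest neighbour in the previous
    key-frame lies within match_threshold; otherwise it is "new".
    (Toy 1-D descriptors for illustration only.)
    """
    common, new = [], []
    for i, d in enumerate(curr_descriptors):
        # nearest-neighbour distance over the previous key-frame's features
        best = min((abs(d - p) for p in prev_descriptors), default=float("inf"))
        (common if best < match_threshold else new).append(i)
    return common, new


prev = [0.10, 0.55, 0.90]        # descriptors from the previous key-frame
curr = [0.12, 0.52, 3.00, 5.00]  # descriptors from the current key-frame

common_idx, new_idx = split_features(prev, curr)
print(common_idx, new_idx)  # → [0, 1] [2, 3]
```

Separating the two sets is what lets the process noise (driven by odometry over common features) and the measurement noise (driven by new features) be treated as independent in the filter.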
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
Robust 3-D Visual SLAM in a Large-scale Environment.pdf (1.3 MB)