Direct Visual SLAM using Sparse Depth for Camera-LiDAR System

Cited 57 times in Web of Science; cited 0 times in Scopus
Abstract
This paper describes a framework for direct visual simultaneous localization and mapping (SLAM) that combines a monocular camera with sparse depth information from Light Detection and Ranging (LiDAR). To ensure real-time performance while maintaining high accuracy in motion estimation, we present (i) a sliding-window-based tracking method, (ii) strict pose marginalization for accurate pose-graph SLAM, and (iii) depth-integrated frame matching for large-scale mapping. Unlike conventional feature-based visual and LiDAR mapping, the proposed approach is direct, eliminating visual features from the objective function. We evaluated the results using our portable camera-LiDAR system as well as the KITTI odometry benchmark datasets. The experimental results show that the two complementary sensors are highly effective in improving real-time performance and accuracy. On the KITTI benchmark, which covers varied environments such as highways and residential areas, we achieved a low drift error of 0.98%.
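The direct formulation can be illustrated with a minimal sketch (not the authors' implementation): residuals are intensity differences at pixels whose depth is supplied by projected LiDAR points, rather than feature reprojection errors. All names below (photometric_residuals, I_ref, I_cur, T, K) are illustrative assumptions, and a real system would add robust weighting and optimize T over a sliding window of frames.

    import numpy as np

    def photometric_residuals(I_ref, I_cur, pts_ref, depths, T, K):
        """Direct photometric residuals at sparse LiDAR-depth pixels (sketch).

        I_ref, I_cur : grayscale images (H x W float arrays)
        pts_ref      : N x 2 pixel coordinates in the reference frame
        depths       : N LiDAR depths for those pixels (metres)
        T            : 4 x 4 relative pose (reference -> current), the quantity
                       being optimized
        K            : 3 x 3 camera intrinsics
        """
        K_inv = np.linalg.inv(K)
        res = []
        for (u, v), d in zip(pts_ref, depths):
            # Back-project the pixel with its LiDAR depth to a 3-D point.
            p_ref = d * (K_inv @ np.array([u, v, 1.0]))
            # Transform into the current frame and project back to the image.
            p_cur = T[:3, :3] @ p_ref + T[:3, 3]
            if p_cur[2] <= 0:
                continue  # point is behind the camera, skip it
            u2, v2 = (K @ (p_cur / p_cur[2]))[:2]
            if 0 <= int(v2) < I_cur.shape[0] and 0 <= int(u2) < I_cur.shape[1]:
                # Residual is an intensity difference, not a feature reprojection error.
                res.append(I_cur[int(v2), int(u2)] - I_ref[int(v), int(u)])
        return np.array(res)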
Publisher
IEEE Robotics and Automation Society
Issue Date
2018-05-23
Language
English
Citation

IEEE International Conference on Robotics and Automation (ICRA), pp. 5144-5151

DOI
10.1109/ICRA.2018.8461102
URI
http://hdl.handle.net/10203/244145
Appears in Collection
CE-Conference Papers
Files in This Item
There are no files associated with this item.