Adaptive Cost Volume Fusion Network for Multi-Modal Depth Estimation in Changing Environments

Cited 1 time in Web of Science; 0 times in Scopus.
Abstract
In this letter, we propose an adaptive cost volume fusion algorithm for multi-modal depth estimation in changing environments. Our method takes measurements from multi-modal sensors to exploit their complementary characteristics and generates depth cues from each modality in the form of adaptive cost volumes using deep neural networks. The proposed adaptive cost volume accounts for sensor configurations and computational costs to resolve the imbalanced and redundant depth-basis problem of conventional cost volumes. We further extend its role to a generalized depth representation and propose a geometry-aware cost fusion algorithm. Our unified and geometrically consistent depth representation leads to accurate and efficient multi-modal sensor fusion, which is crucial for robustness to changing environments. To validate the proposed framework, we introduce a new multi-modal depth in changing environments (MMDCE) dataset. The dataset was collected by our own vehicular system with RGB, NIR, and LiDAR sensors in changing environments. Experimental results demonstrate that our method is robust, accurate, and reliable in changing environments. Our code and dataset are available at our project page.
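To illustrate the general idea of fusing per-modality cost volumes over a shared set of depth hypotheses, here is a minimal NumPy sketch. The shapes, the confidence-weighted averaging rule, and the soft-argmin readout are illustrative assumptions for exposition only; they are not the paper's actual adaptive cost volume construction or geometry-aware fusion network.

```python
import numpy as np

def fuse_cost_volumes(volumes, confidences, depth_bins):
    """Hypothetical confidence-weighted cost volume fusion.

    volumes:     list of (D, H, W) cost volumes, one per modality
                 (lower cost = better match at that depth hypothesis).
    confidences: list of (H, W) per-pixel confidence maps in [0, 1].
    depth_bins:  (D,) depth hypotheses shared by all modalities.
    Returns an (H, W) depth map read out from the fused volume.
    """
    volumes = np.stack(volumes)           # (M, D, H, W)
    weights = np.stack(confidences)       # (M, H, W)
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-8)
    fused = (volumes * weights[:, None]).sum(axis=0)  # (D, H, W)
    # Soft-argmin readout: softmax over negated costs, then the
    # probability-weighted expectation of the depth bins.
    prob = np.exp(-fused)
    prob /= prob.sum(axis=0, keepdims=True)
    return (prob * depth_bins[:, None, None]).sum(axis=0)

# Toy example: two modalities, 4 depth bins, a 2x2 image.
bins = np.array([1.0, 2.0, 3.0, 4.0])
v1 = np.ones((4, 2, 2)); v1[1] = 0.0   # modality 1 prefers depth 2.0
v2 = np.ones((4, 2, 2)); v2[2] = 0.0   # modality 2 prefers depth 3.0
c1 = np.full((2, 2), 0.9)              # modality 1 is the more confident
c2 = np.full((2, 2), 0.1)
depth = fuse_cost_volumes([v1, v2], [c1, c2], bins)
# The fused estimate lands between 2.0 and 3.0, pulled toward the
# more confident modality.
```

Because the depth bins are shared across modalities, the fused volume stays geometrically consistent; the paper's contribution lies in making those bases adaptive to sensor configuration rather than fixed, as they are in this toy sketch.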
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2022-04
Language
English
Article Type
Article
Citation

IEEE ROBOTICS AND AUTOMATION LETTERS, v.7, no.2, pp.5095 - 5102

ISSN
2377-3766
DOI
10.1109/LRA.2022.3150868
URI
http://hdl.handle.net/10203/292549
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.