Leveraging Contextual Information for Monocular Depth Estimation

Cited 4 times in Web of Science; cited 5 times in Scopus
Abstract

Humans rely strongly on visual cues to understand a scene, for example when segmenting regions, detecting objects, or estimating the distance to nearby objects. Recent studies suggest that deep neural networks can exploit contextual representations when estimating a depth map for a given image; focusing on scene context can therefore benefit depth estimation. In this study, a novel network architecture is proposed that improves monocular depth estimation by leveraging contextual information. We introduce a depth prediction network with a proposed attentive skip connection and a global context module, which extract meaningful semantic features and enhance model performance. The model is validated through experiments on the KITTI and NYU Depth V2 datasets. The results demonstrate the effectiveness of the proposed network, which achieves state-of-the-art monocular depth estimation performance while maintaining a high running speed.
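The paper's exact module designs are not reproduced on this page; as a rough illustration of the idea behind an attentive skip connection, the sketch below gates an encoder skip feature channel-wise using a squeeze-style attention signal derived from the decoder feature before fusing the two. All names, shapes, and the specific gating formula are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def attentive_skip(encoder_feat, decoder_feat):
    """Illustrative attention-gated skip connection (NOT the paper's exact module).

    Both inputs have shape (C, H, W). The decoder feature is pooled to one
    scalar per channel, squashed through a sigmoid, and used to reweight the
    encoder skip feature channel-wise before additive fusion.
    """
    # Global average pool: one context value per channel of the decoder feature.
    context = decoder_feat.mean(axis=(1, 2))        # shape (C,)
    # Sigmoid gate in (0, 1): channels with stronger decoder response
    # pass more of the corresponding encoder skip feature through.
    gate = 1.0 / (1.0 + np.exp(-context))           # shape (C,)
    # Channel-wise reweighting of the skip feature, then additive fusion.
    attended = encoder_feat * gate[:, None, None]   # shape (C, H, W)
    return decoder_feat + attended

# Toy usage on random features with 8 channels and a 4x4 spatial grid.
rng = np.random.default_rng(0)
enc = rng.random((8, 4, 4))
dec = rng.random((8, 4, 4))
out = attentive_skip(enc, dec)
assert out.shape == (8, 4, 4)
```

Gating the skip path lets the decoder suppress encoder channels that carry little useful context, which is the general motivation behind attention on skip connections.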
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2020-08
Language
English
Article Type
Article
Citation

IEEE ACCESS, v.8, pp. 147808-147817

ISSN
2169-3536
DOI
10.1109/ACCESS.2020.3016008
URI
http://hdl.handle.net/10203/276077
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
000562067200001.pdf (3.11 MB)
