High-Fidelity Depth Upsampling Using the Self-Learning Framework

Cited 2 times in Web of Science, cited 1 time in Scopus
DC Field | Value | Language
dc.contributor.author | Shim, Inwook | ko
dc.contributor.author | Oh, Tae-Hyun | ko
dc.contributor.author | Kweon, In So | ko
dc.date.accessioned | 2019-03-19T01:27:14Z | -
dc.date.available | 2019-03-19T01:27:14Z | -
dc.date.created | 2019-03-04 | -
dc.date.issued | 2019-01 | -
dc.identifier.citation | SENSORS, v.19, no.1 | -
dc.identifier.issn | 1424-8220 | -
dc.identifier.uri | http://hdl.handle.net/10203/251652 | -
dc.description.abstract | This paper presents a depth upsampling method that produces a high-fidelity dense depth map using a high-resolution RGB image and LiDAR sensor data. Our method explicitly handles depth outliers and computes an upsampled depth map with confidence information. Our key idea is a self-learning framework that automatically learns to estimate the reliability of the upsampled depth map without human-labeled annotation. As a result, our method produces a clear, high-fidelity dense depth map that preserves the shapes of object structures well, which benefits subsequent algorithms in follow-up tasks. We qualitatively and quantitatively evaluate our method against other competing methods on the well-known Middlebury 2014 and KITTI benchmark datasets, demonstrating that it generates accurate depth maps with smaller errors while preserving a larger number of valid points. We also show that our approach can be seamlessly applied to improve the quality of depth maps produced by other depth generation algorithms, such as stereo matching, and we discuss potential applications and limitations. Compared to previous work, our method achieves similar depth errors on average while retaining at least 3% more valid depth points. (See the illustrative sketch after this record.) | -
dc.language | English | -
dc.publisher | MDPI | -
dc.title | High-Fidelity Depth Upsampling Using the Self-Learning Framework | -
dc.type | Article | -
dc.identifier.wosid | 000458574600081 | -
dc.identifier.scopusid | 2-s2.0-85059247595 | -
dc.type.rims | ART | -
dc.citation.volume | 19 | -
dc.citation.issue | 1 | -
dc.citation.publicationname | SENSORS | -
dc.identifier.doi | 10.3390/s19010081 | -
dc.contributor.localauthor | Kweon, In So | -
dc.contributor.nonIdAuthor | Shim, Inwook | -
dc.contributor.nonIdAuthor | Oh, Tae-Hyun | -
dc.description.isOpenAccess | Y | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | depth upsampling | -
dc.subject.keywordAuthor | depth filtering | -
dc.subject.keywordAuthor | LiDAR | -
dc.subject.keywordAuthor | self-learning | -
dc.subject.keywordAuthor | self-supervised learning | -
dc.subject.keywordPlus | ROBOTICS | -
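
The abstract describes confidence-aware upsampling of sparse LiDAR depth guided by a high-resolution RGB image. As a rough illustration of that general idea (not a reproduction of the paper's self-learning framework), the sketch below splats each sparse depth sample with a joint bilateral kernel over spatial distance and RGB similarity, reuses the accumulated kernel weight as a per-pixel confidence map, and rejects outlier samples that disagree with the locally blended estimate. The function name, kernel widths, search radius, and outlier threshold are all illustrative assumptions.

```python
# Minimal sketch only: joint-bilateral splatting of sparse depth with a
# confidence map. NOT the paper's method; all parameters are assumptions.
import numpy as np

def upsample_depth_sketch(rgb, sparse_depth, valid,
                          sigma_s=4.0, sigma_r=0.1,
                          radius=8, outlier_thresh=3.0):
    """Return (dense_depth, confidence, inlier_mask).

    rgb:          (H, W, 3) uint8 guide image
    sparse_depth: (H, W) float array, nonzero only at sampled pixels
    valid:        (H, W) bool mask of sampled pixels
    """
    h, w = sparse_depth.shape
    gray = rgb.astype(np.float64).mean(axis=2) / 255.0  # cheap intensity guide
    dense = np.zeros((h, w))
    conf = np.zeros((h, w))
    for y, x in zip(*np.nonzero(valid)):
        d = sparse_depth[y, x]
        # Local window around the sparse sample.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        # Spatial and range (intensity-similarity) kernels, as in a joint
        # bilateral filter guided by the RGB image.
        w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2))
        w_r = np.exp(-((gray[y0:y1, x0:x1] - gray[y, x]) ** 2)
                     / (2.0 * sigma_r ** 2))
        wgt = w_s * w_r
        dense[y0:y1, x0:x1] += wgt * d   # weighted splat of the depth value
        conf[y0:y1, x0:x1] += wgt        # accumulated weight doubles as confidence
    filled = conf > 1e-6
    dense[filled] /= conf[filled]
    # Crude outlier handling (assumption): drop samples that deviate strongly
    # from the locally blended estimate.
    inliers = valid & (np.abs(sparse_depth - dense) < outlier_thresh)
    return dense, conf, inliers

# Usage on synthetic data: roughly 5% of pixels carry a depth sample.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    valid = rng.random((64, 64)) < 0.05
    sparse = np.where(valid, rng.uniform(1.0, 50.0, (64, 64)), 0.0)
    dense, conf, inliers = upsample_depth_sketch(rgb, sparse, valid)
    print(dense.shape, conf.max(), inliers.sum())
```

In the paper, the reliability estimate is learned by the self-learning framework rather than hand-crafted; this sketch only mirrors the input/output structure (a dense depth map plus a confidence map plus an outlier-filtered set of valid points).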