Maximizing Self-Supervision From Thermal Image for Effective Self-Supervised Learning of Depth and Ego-Motion

Cited 6 times in Web of Science; cited 0 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Shin, Ukcheol (ko)
dc.contributor.author: Lee, Kyunghyun (ko)
dc.contributor.author: Lee, Byeong-Uk (ko)
dc.contributor.author: Kweon, In So (ko)
dc.date.accessioned: 2022-08-29T09:00:24Z
dc.date.available: 2022-08-29T09:00:24Z
dc.date.created: 2022-08-29
dc.date.issued: 2022-07
dc.identifier.citation: IEEE ROBOTICS AND AUTOMATION LETTERS, v.7, no.3, pp.7771 - 7778
dc.identifier.issn: 2377-3766
dc.identifier.uri: http://hdl.handle.net/10203/298210
dc.description.abstract: Recently, self-supervised learning of depth and ego-motion from thermal images has shown strong robustness and reliability under challenging scenarios. However, inherent thermal image properties such as weak contrast, blurry edges, and noise hinder the generation of effective self-supervision from thermal images. Therefore, most research relies on additional self-supervision sources such as well-lit RGB images, generative models, and Lidar information. In this letter, we conduct an in-depth analysis of the thermal image characteristics that degrade self-supervision from thermal images. Based on this analysis, we propose an effective thermal image mapping method that significantly increases image information, such as overall structure, contrast, and details, while preserving temporal consistency. The proposed method achieves better depth and pose results than previous state-of-the-art networks without leveraging additional RGB guidance.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Maximizing Self-Supervision From Thermal Image for Effective Self-Supervised Learning of Depth and Ego-Motion
dc.type: Article
dc.identifier.wosid: 000838377200010
dc.identifier.scopusid: 2-s2.0-85133689282
dc.type.rims: ART
dc.citation.volume: 7
dc.citation.issue: 3
dc.citation.beginningpage: 7771
dc.citation.endingpage: 7778
dc.citation.publicationname: IEEE ROBOTICS AND AUTOMATION LETTERS
dc.identifier.doi: 10.1109/LRA.2022.3185382
dc.contributor.localauthor: Kweon, In So
dc.contributor.nonIdAuthor: Lee, Kyunghyun
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Autonomous vehicle navigation
dc.subject.keywordAuthor: computer vision for transportation
dc.subject.keywordAuthor: deep learning for visual perception
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
