Dark Synthetic Vision: Lightweight Active Vision to Navigate in the Dark

Cited 4 times in Web of Science · Cited 1 time in Scopus
  • Hits: 575
  • Downloads: 0
DC Field                     Value                                                          Language
dc.contributor.author        Kim, Joowan                                                    ko
dc.contributor.author        Jeon, Myung-Hwan                                               ko
dc.contributor.author        Cho, Younggun                                                  ko
dc.contributor.author        Kim, Ayoung                                                    ko
dc.date.accessioned          2020-12-07T08:30:08Z                                           -
dc.date.available            2020-12-07T08:30:08Z                                           -
dc.date.created              2020-11-30                                                     -
dc.date.issued               2021-01                                                        -
dc.identifier.citation       IEEE ROBOTICS AND AUTOMATION LETTERS, v.6, no.1, pp.143-150    -
dc.identifier.issn           2377-3766                                                      -
dc.identifier.uri            http://hdl.handle.net/10203/278098                             -
dc.description.abstract      Overcoming illumination variance is a critical factor in vision-based navigation. Existing methods have tackled radical illumination variance through camera control or high dynamic range (HDR) image fusion. Despite these efforts, vision-based approaches still struggle to overcome darkness. This letter presents real-time image synthesis from a carefully controlled seed low dynamic range (LDR) image to enable visual simultaneous localization and mapping (SLAM) in extremely dark environments (less than 10 lux). Unlike existing methods, we deliberately select the seed LDR image for HDR fusion to secure interframe consistency, which is important in visual navigation. After selecting the seed image by camera control, we exploit the camera response function (CRF) to synthesize HDR images in real time without requiring a GPU. We validate the algorithm in two extremely dark environments: an indoor environment without light and an outdoor scene at night. In both test scenarios, the proposed method enabled reliable visual SLAM even when light was limited.  -
dc.language                  English                                                        -
dc.publisher                 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC                 -
dc.title                     Dark Synthetic Vision: Lightweight Active Vision to Navigate in the Dark  -
dc.type                      Article                                                        -
dc.identifier.wosid          000589691100003                                                -
dc.identifier.scopusid       2-s2.0-85128589767                                             -
dc.type.rims                 ART                                                            -
dc.citation.volume           6                                                              -
dc.citation.issue            1                                                              -
dc.citation.beginningpage    143                                                            -
dc.citation.endingpage       150                                                            -
dc.citation.publicationname  IEEE ROBOTICS AND AUTOMATION LETTERS                           -
dc.identifier.doi            10.1109/LRA.2020.3035137                                       -
dc.contributor.localauthor   Kim, Ayoung                                                    -
dc.contributor.nonIdAuthor   Cho, Younggun                                                  -
dc.description.isOpenAccess  N                                                              -
dc.type.journalArticle       Article                                                        -
dc.subject.keywordAuthor     Low-light robot vision                                         -
dc.subject.keywordAuthor     image fusion                                                   -
dc.subject.keywordAuthor     visual-based navigation                                        -
dc.subject.keywordPlus       VISUAL ODOMETRY                                                -
dc.subject.keywordPlus       ENHANCEMENT                                                    -
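As context for the abstract above, the sketch below illustrates one plausible reading of the CRF-based HDR synthesis step: a response curve is calibrated once from a bracketed exposure stack, then each controlled seed LDR frame is virtually re-exposed by inverting that curve, with no GPU involved. This is a minimal illustration and not the authors' implementation; it assumes OpenCV's Debevec calibration as the CRF source, and the helper names calibrate_crf and synthesize_exposure, along with all parameters, are hypothetical.

```python
# Minimal sketch of CRF-based exposure synthesis (not the paper's code).
# Assumes OpenCV's Debevec method as the CRF source; helper names and
# parameters are illustrative only.
import cv2
import numpy as np

def calibrate_crf(images, exposure_times):
    """Recover a camera response function from a bracketed LDR stack.

    images: list of uint8 BGR frames of the same scene.
    exposure_times: exposure times in seconds (converted to float32).
    Returns a (256, 1, 3) float32 curve mapping intensity to relative radiance.
    """
    calibrate = cv2.createCalibrateDebevec()
    return calibrate.process(images, np.asarray(exposure_times, np.float32))

def synthesize_exposure(seed, t_seed, t_new, crf):
    """Virtually re-expose one seed LDR image to exposure time t_new.

    Maps pixel intensities through the CRF to irradiance, rescales by the
    exposure ratio, and maps back through the inverse CRF to 8-bit values.
    """
    out = np.empty_like(seed)
    for c in range(seed.shape[2]):  # per BGR channel
        curve = np.maximum.accumulate(crf[:, 0, c])  # enforce monotonicity
        irradiance = curve[seed[:, :, c]] / t_seed   # E = g(Z) / dt_seed
        target = irradiance * t_new                  # radiance at virtual dt
        # Invert g by interpolating intensity against the response curve;
        # np.interp clamps out-of-range inputs, so highlights saturate.
        out[:, :, c] = np.rint(
            np.interp(target, curve, np.arange(256))
        ).astype(np.uint8)
    return out

# Hypothetical usage: calibrate once offline, then synthesize per frame.
# crf = calibrate_crf(stack, [1/200, 1/50, 1/12.5])
# bright = synthesize_exposure(seed_frame, t_seed=1/50, t_new=1/6, crf=crf)
```

Because the seed frame is chosen by camera control before synthesis, as the abstract emphasizes, consecutive synthesized frames inherit the seed's interframe consistency, which is what keeps the output usable for SLAM feature tracking.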
Appears in Collection
CE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in Web of Science (4 citing articles).
