DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Joowan | ko |
dc.contributor.author | Jeon, Myung-Hwan | ko |
dc.contributor.author | Cho, Younggun | ko |
dc.contributor.author | Kim, Ayoung | ko |
dc.date.accessioned | 2020-12-07T08:30:08Z | - |
dc.date.available | 2020-12-07T08:30:08Z | - |
dc.date.created | 2020-11-30 | - |
dc.date.issued | 2021-01 | - |
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.6, no.1, pp.143 - 150 | - |
dc.identifier.issn | 2377-3766 | - |
dc.identifier.uri | http://hdl.handle.net/10203/278098 | - |
dc.description.abstract | Overcoming illumination variance is a critical factor in vision-based navigation. Existing methods have tackled this radical illumination variance by proposing camera control or high dynamic range (HDR) image fusion. Despite these efforts, we have found that vision-based approaches still struggle to overcome darkness. This letter presents real-time image synthesis from a carefully controlled seed low dynamic range (LDR) image to enable visual simultaneous localization and mapping (SLAM) in an extremely dark environment (less than 10 lux). Unlike existing methods, we elaborately select the seed LDR image for HDR fusion to secure interframe consistency, which is important for visual navigation. After selecting the seed image via camera control, we exploit the camera response function (CRF) to synthesize HDR images in real time without requiring a GPU. We validate the algorithm in two extremely dark environments: an indoor environment without light and an outdoor scene at night. In both test scenarios, the proposed method enabled reliable visual SLAM even when light was limited. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Dark Synthetic Vision: Lightweight Active Vision to Navigate in the Dark | - |
dc.type | Article | - |
dc.identifier.wosid | 000589691100003 | - |
dc.identifier.scopusid | 2-s2.0-85128589767 | - |
dc.type.rims | ART | - |
dc.citation.volume | 6 | - |
dc.citation.issue | 1 | - |
dc.citation.beginningpage | 143 | - |
dc.citation.endingpage | 150 | - |
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | - |
dc.identifier.doi | 10.1109/LRA.2020.3035137 | - |
dc.contributor.localauthor | Kim, Ayoung | - |
dc.contributor.nonIdAuthor | Cho, Younggun | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Low-light robot vision | - |
dc.subject.keywordAuthor | image fusion | - |
dc.subject.keywordAuthor | visual-based navigation | - |
dc.subject.keywordPlus | VISUAL ODOMETRY | - |
dc.subject.keywordPlus | ENHANCEMENT | - |
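The abstract describes synthesizing HDR images from exposure-controlled LDR frames via a camera response function (CRF). A minimal sketch of that idea, using the classic Debevec-style weighted log-average merge and assuming a simple known log-inverse CRF for illustration (the function names, weighting, and CRF here are assumptions, not the authors' actual pipeline):

```python
import numpy as np

def hat_weight(z):
    """Triangle weight favouring mid-range pixel values; clipped
    pixels (0 or 255) get zero weight."""
    return np.minimum(z, 255 - z).astype(np.float64)

def merge_hdr(frames, exposure_times):
    """Merge uint8 LDR frames taken at the given exposure times (s)
    into a radiance map E via the Debevec weighted log-average:
    ln E = sum_j w(Z_j) * (g(Z_j) - ln t_j) / sum_j w(Z_j),
    where g is the inverse CRF (assumed log-linear here)."""
    eps = 1e-6
    num = np.zeros(frames[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for z, t in zip(frames, exposure_times):
        g = np.log(z.astype(np.float64) / 255.0 + eps)  # assumed inverse CRF
        w = hat_weight(z)
        num += w * (g - np.log(t))
        den += w
    return np.exp(num / np.maximum(den, eps))
```

In the letter the CRF is recovered from the camera rather than assumed, and only a single carefully selected seed LDR frame drives the synthesis; the sketch above merely illustrates how a known CRF turns pixel values and exposure times into a radiance estimate.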