SimVODIS++: Neural Semantic Visual Odometry in Dynamic Environments

Cited 8 times in Web of Science; cited 0 times in Scopus
  • Hits: 297
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Kim, Ue-Hwan | ko
dc.contributor.author | Kim, Se-Ho | ko
dc.contributor.author | Kim, Jong-Hwan | ko
dc.date.accessioned | 2022-04-13T06:47:37Z | -
dc.date.available | 2022-04-13T06:47:37Z | -
dc.date.created | 2022-04-04 | -
dc.date.issued | 2022-04 | -
dc.identifier.citation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.7, no.2, pp.4244 - 4251 | -
dc.identifier.issn | 2377-3766 | -
dc.identifier.uri | http://hdl.handle.net/10203/292557 | -
dc.description.abstract | Accurate estimation of 3D geometry and camera motion enables a wide range of tasks in robotics and autonomous vehicles. However, the lack of semantics and the performance degradation caused by dynamic objects hinder its application to real-world scenarios. To overcome these limitations, we design a novel neural semantic visual odometry (VO) architecture on top of the simultaneous VO, object detection and instance segmentation (SimVODIS) network. Next, we propose an attentive pose estimation architecture with a multi-task learning formulation for handling dynamic objects and enhancing VO performance. The extensive experiments conducted in our work attest that the proposed SimVODIS++ improves VO performance in dynamic environments. Further, SimVODIS++ focuses on salient regions while excluding feature-less regions. While performing the experiments, we discovered and fixed a data leakage problem in the conventional experiment setting followed by numerous previous works, which we claim as one of our contributions. We make the source code public. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | SimVODIS++: Neural Semantic Visual Odometry in Dynamic Environments | -
dc.type | Article | -
dc.identifier.wosid | 000761228500003 | -
dc.identifier.scopusid | 2-s2.0-85124836170 | -
dc.type.rims | ART | -
dc.citation.volume | 7 | -
dc.citation.issue | 2 | -
dc.citation.beginningpage | 4244 | -
dc.citation.endingpage | 4251 | -
dc.citation.publicationname | IEEE ROBOTICS AND AUTOMATION LETTERS | -
dc.identifier.doi | 10.1109/LRA.2022.3150854 | -
dc.contributor.localauthor | Kim, Jong-Hwan | -
dc.contributor.nonIdAuthor | Kim, Ue-Hwan | -
dc.contributor.nonIdAuthor | Kim, Se-Ho | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Semantics | -
dc.subject.keywordAuthor | Feature extraction | -
dc.subject.keywordAuthor | Dynamics | -
dc.subject.keywordAuthor | Pose estimation | -
dc.subject.keywordAuthor | Cameras | -
dc.subject.keywordAuthor | Vehicle dynamics | -
dc.subject.keywordAuthor | Computer architecture | -
dc.subject.keywordAuthor | Semantic scene understanding | -
dc.subject.keywordAuthor | visual odometry (VO) | -
dc.subject.keywordAuthor | semantic SLAM | -
dc.subject.keywordAuthor | semantic VO | -
dc.subject.keywordAuthor | dynamic objects | -
dc.subject.keywordAuthor | robotics | -
dc.subject.keywordAuthor | autonomous vehicles | -
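
The abstract above mentions an attentive pose estimation architecture trained with a multi-task learning formulation. The following is a minimal illustrative sketch of that general idea in Python (PyTorch); the class name AttentivePoseNet, the layer sizes, the toy loss terms, and the 128x416 input resolution are assumptions made for illustration only and are not taken from the SimVODIS++ implementation (the authors state that their source code is public).

# Illustrative sketch only: a generic attention-weighted pose regressor with a
# multi-task (pose + segmentation) objective. All names and sizes are
# hypothetical and do NOT reproduce the actual SimVODIS++ implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePoseNet(nn.Module):
    def __init__(self, num_classes: int = 21):
        super().__init__()
        # Shared encoder over a stacked pair of RGB frames (6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Spatial attention: per-pixel weights in [0, 1] that can suppress
        # feature-less or dynamic regions before pose regression.
        self.attention = nn.Sequential(nn.Conv2d(128, 1, 1), nn.Sigmoid())
        # 6-DoF relative pose head (3 translation + 3 rotation parameters).
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 6)
        )
        # Auxiliary semantic head supplying the second task of the
        # multi-task formulation.
        self.seg_head = nn.Conv2d(128, num_classes, 1)

    def forward(self, frame_t, frame_t1):
        feats = self.encoder(torch.cat([frame_t, frame_t1], dim=1))
        attn = self.attention(feats)          # (B, 1, H', W')
        pose = self.pose_head(feats * attn)   # (B, 6)
        seg_logits = self.seg_head(feats)     # (B, C, H', W')
        return pose, seg_logits, attn

# Toy usage with random tensors; a real setup would use consecutive image
# pairs and semantic labels, and replace the stand-in pose term below with a
# view-synthesis / photometric loss.
model = AttentivePoseNet()
f_t, f_t1 = torch.randn(2, 3, 128, 416), torch.randn(2, 3, 128, 416)
pose, seg_logits, attn = model(f_t, f_t1)
seg_target = torch.randint(0, 21, seg_logits.shape[0:1] + seg_logits.shape[2:])
loss = pose.abs().mean() + 0.1 * F.cross_entropy(seg_logits, seg_target)
loss.backward()

In this sketch the sigmoid attention map down-weights uninformative or dynamic regions before pose regression, while the segmentation head adds the semantic task to the joint objective; the actual SimVODIS++ architecture and losses differ.
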
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
⊙ Detail Information in WoS®
⊙ Cited 8 items in WoS
