DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Kihong | ko |
dc.contributor.author | Kim, Seungryong | ko |
dc.contributor.author | Sohn, Kwanghoon | ko |
dc.date.accessioned | 2024-08-16T03:00:05Z | - |
dc.date.available | 2024-08-16T03:00:05Z | - |
dc.date.created | 2024-08-16 | - |
dc.date.issued | 2020-01 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, v.21, no.1, pp.321 - 335 | - |
dc.identifier.issn | 1524-9050 | - |
dc.identifier.uri | http://hdl.handle.net/10203/322320 | - |
dc.description.abstract | We address the problem of 3D reconstruction from an uncalibrated LiDAR point cloud and stereo images. Since using each sensor alone for 3D reconstruction has weaknesses in terms of density and accuracy, we propose a deep sensor fusion framework for high-precision depth estimation. The proposed architecture consists of a calibration network and a depth fusion network, both designed considering the trade-off between accuracy and efficiency on mobile devices. The calibration network first corrects an initial extrinsic parameter to align the input sensor coordinate systems. The accuracy of calibration is markedly improved by formulating the calibration in the depth domain. In the depth fusion network, the complementary characteristics of sparse LiDAR and dense stereo depth are then encoded in a boosting manner. Since training data for LiDAR and stereo depth fusion are rather limited, we introduce a simple but effective approach to generate pseudo ground-truth labels from the raw KITTI dataset. The experimental evaluation verifies that the proposed method outperforms current state-of-the-art methods on the KITTI benchmark. We also collect data using our proprietary multi-sensor acquisition platform and verify that the proposed method generalizes across different sensor settings and scenes. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | High-Precision Depth Estimation Using Uncalibrated LiDAR and Stereo Fusion | - |
dc.type | Article | - |
dc.identifier.wosid | 000506619900025 | - |
dc.identifier.scopusid | 2-s2.0-85077818508 | - |
dc.type.rims | ART | - |
dc.citation.volume | 21 | - |
dc.citation.issue | 1 | - |
dc.citation.beginningpage | 321 | - |
dc.citation.endingpage | 335 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS | - |
dc.identifier.doi | 10.1109/TITS.2019.2891788 | - |
dc.contributor.localauthor | Kim, Seungryong | - |
dc.contributor.nonIdAuthor | Park, Kihong | - |
dc.contributor.nonIdAuthor | Sohn, Kwanghoon | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Laser radar | - |
dc.subject.keywordAuthor | Calibration | - |
dc.subject.keywordAuthor | Three-dimensional displays | - |
dc.subject.keywordAuthor | Robot sensing systems | - |
dc.subject.keywordAuthor | Cameras | - |
dc.subject.keywordAuthor | Estimation | - |
dc.subject.keywordAuthor | Interpolation | - |
dc.subject.keywordAuthor | Depth estimation | - |
dc.subject.keywordAuthor | multi-modal sensor fusion | - |
dc.subject.keywordAuthor | on-line calibration | - |
dc.subject.keywordAuthor | real-time system | - |
dc.subject.keywordAuthor | 3D reconstruction | - |
dc.subject.keywordPlus | NETWORK | - |
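The abstract's calibration step aligns the LiDAR and camera coordinate systems by refining an extrinsic parameter. The paper's networks are not reproduced here, but the underlying geometric operation they refine can be sketched: projecting LiDAR points into the image plane given extrinsics (R, t) and camera intrinsics K, assuming a standard pinhole model. All names and values below are illustrative, not from the paper.

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project 3D LiDAR points into the image plane (pinhole model).

    points: (N, 3) LiDAR points in the LiDAR frame.
    R, t:   (3, 3) rotation and (3,) translation -- the extrinsic
            parameters that a calibration step would refine.
    K:      (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and (N,) camera-frame depths.
    """
    cam = points @ R.T + t              # transform into the camera frame
    depths = cam[:, 2]                  # depth along the optical axis
    proj = cam @ K.T                    # apply pinhole intrinsics
    pixels = proj[:, :2] / depths[:, None]  # perspective divide
    return pixels, depths

# Illustrative values: identity extrinsics and a simple intrinsic matrix.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0]])      # a point 10 m straight ahead
px, d = project_lidar_to_image(pts, R, t, K)
# px -> [[320., 240.]] (the principal point), d -> [10.]
```

Such a projection is how sparse LiDAR depth is typically brought into the same pixel grid as stereo depth before fusion; a miscalibrated (R, t) shifts the projected points, which motivates correcting the extrinsics before fusing.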