Time-of-flight sensor and color camera calibration for multi-view acquisition

Cited 16 times in Web of Science; cited 0 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Shim, Hyunjung (ko)
dc.contributor.author: Adelsberger, Rolf (ko)
dc.contributor.author: Kim, James Dokyoon (ko)
dc.contributor.author: Rhee, Seon-Min (ko)
dc.contributor.author: Rhee, Taehyun (ko)
dc.contributor.author: Sim, Jae-Young (ko)
dc.contributor.author: Gross, Markus (ko)
dc.contributor.author: Kim, Changyeong (ko)
dc.date.accessioned: 2022-07-04T06:00:46Z
dc.date.available: 2022-07-04T06:00:46Z
dc.date.created: 2022-07-04
dc.date.issued: 2012-12
dc.identifier.citation: VISUAL COMPUTER, v.28, no.12, pp.1139-1151
dc.identifier.issn: 0178-2789
dc.identifier.uri: http://hdl.handle.net/10203/297180
dc.description.abstract: This paper presents a multi-view acquisition system using multi-modal sensors, composed of time-of-flight (ToF) range sensors and color cameras. Our system captures multiple pairs of color images and depth maps from multiple viewing directions. To ensure acceptable measurement accuracy, we compensate for errors in the sensor measurements and calibrate the multi-modal devices. Through extensive experiments and analysis, we identify the major sources of systematic error in sensor measurement and construct an error model for compensation. As a result, we provide a practical solution for real-time error compensation of depth measurements. Moreover, we implement a calibration scheme for the multi-modal devices, unifying the spatial coordinates of the multi-modal sensors. The main contribution of this work is a thorough analysis of systematic error in sensor measurement, which yields a reliable methodology for robust error compensation. The proposed system offers a real-time multi-modal sensor calibration method and is thereby applicable to the 3D reconstruction of dynamic scenes.
dc.language: English
dc.publisher: SPRINGER
dc.title: Time-of-flight sensor and color camera calibration for multi-view acquisition
dc.type: Article
dc.identifier.wosid: 000310538700001
dc.identifier.scopusid: 2-s2.0-84869090899
dc.type.rims: ART
dc.citation.volume: 28
dc.citation.issue: 12
dc.citation.beginningpage: 1139
dc.citation.endingpage: 1151
dc.citation.publicationname: VISUAL COMPUTER
dc.identifier.doi: 10.1007/s00371-011-0664-x
dc.contributor.localauthor: Shim, Hyunjung
dc.contributor.nonIdAuthor: Adelsberger, Rolf
dc.contributor.nonIdAuthor: Kim, James Dokyoon
dc.contributor.nonIdAuthor: Rhee, Seon-Min
dc.contributor.nonIdAuthor: Rhee, Taehyun
dc.contributor.nonIdAuthor: Sim, Jae-Young
dc.contributor.nonIdAuthor: Gross, Markus
dc.contributor.nonIdAuthor: Kim, Changyeong
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Depth sensing
dc.subject.keywordAuthor: Multi-modal sensor fusion
dc.subject.keywordAuthor: Multi-view acquisition
dc.subject.keywordAuthor: 3D video processing
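The abstract describes two steps: compensating systematic depth error with a fitted error model, and calibrating the ToF and color sensors into one spatial coordinate frame. A minimal sketch of such a pipeline is below; the polynomial error model and all intrinsic/extrinsic values are illustrative placeholders, not the paper's actual parameters.

```python
import numpy as np

# (1) Depth error compensation: the paper fits an error model to
# measurements; here we assume a simple depth-dependent polynomial bias.
err_coeffs = np.array([0.002, -0.01, 0.015])  # hypothetical fit, meters

def compensate_depth(d):
    """Subtract the modeled systematic bias from a raw depth (meters)."""
    return d - np.polyval(err_coeffs, d)

# (2) Unifying coordinates: back-project a ToF pixel to 3D, apply the
# ToF-to-color extrinsics (R, t), and project with the color intrinsics.
K_tof = np.array([[570.0, 0, 320.0], [0, 570.0, 240.0], [0, 0, 1]])   # hypothetical
K_rgb = np.array([[1050.0, 0, 640.0], [0, 1050.0, 480.0], [0, 0, 1]])  # hypothetical
R = np.eye(3)                    # hypothetical rotation (aligned sensors)
t = np.array([0.05, 0.0, 0.0])   # hypothetical 5 cm horizontal baseline

def tof_pixel_to_rgb(u, v, d_raw):
    """Map a ToF pixel (u, v) with raw depth d_raw to color-image coordinates."""
    d = compensate_depth(d_raw)
    p_tof = d * (np.linalg.inv(K_tof) @ np.array([u, v, 1.0]))  # 3D in ToF frame
    p_rgb = R @ p_tof + t                                       # 3D in color frame
    uvw = K_rgb @ p_rgb
    return uvw[:2] / uvw[2]                                     # pixel in color image

uv = tof_pixel_to_rgb(320, 240, 1.5)
```

With these placeholder values, a point at the ToF principal point lands near the color principal point, shifted horizontally by the projected baseline, which is the qualitative behavior such a calibration unifies.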
Appears in Collection
AI-Journal Papers(저널논문)
Files in This Item
There are no files associated with this item.
