Robust Estimation of Absolute Camera Pose via Intersection Constraint and Flow Consensus

Cited 4 times in Web of Science; cited 3 times in Scopus
  • Hits: 361
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Li, Haoang | ko
dc.contributor.author | Zhao, Ji | ko
dc.contributor.author | Bazin, Jean-Charles | ko
dc.contributor.author | Liu, Yun-Hui | ko
dc.date.accessioned | 2020-07-23T02:55:03Z | -
dc.date.available | 2020-07-23T02:55:03Z | -
dc.date.created | 2020-07-20 | -
dc.date.issued | 2020-05 | -
dc.identifier.citation | IEEE TRANSACTIONS ON IMAGE PROCESSING, v.29, pp.6615 - 6629 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10203/275617 | -
dc.description.abstract | Estimating the absolute camera pose requires 3D-to-2D correspondences of points and/or lines. In practice, however, these correspondences are inevitably corrupted by outliers, which degrades the accuracy of pose estimation. Existing outlier removal strategies for robust pose estimation have limitations: they are only applicable to points, rely on prior pose information, or fail to handle high outlier ratios. By contrast, we propose a general and accurate outlier removal strategy. It can be integrated with various existing pose estimation methods that are originally vulnerable to outliers, is applicable to points, lines, and the combination of both, and does not rely on any prior pose information. Our strategy has a nested structure composed of an outer and an inner module. First, the outer module leverages our intersection constraint, i.e., the projection rays or planes defined by inliers intersect at the camera center. It alternately computes the inlier probabilities of the correspondences and estimates the camera pose, and it runs reliably and efficiently under high outlier ratios. Second, the inner module exploits our flow consensus: the 2D displacement vectors or 3D directed arcs generated by inliers exhibit a common directional regularity, i.e., they follow a dominant trend of flow. The inner module refines the inlier probabilities obtained at each iteration of the outer module; this refinement improves accuracy and facilitates the convergence of the outer module. Experiments on both synthetic data and real-world images show that our method outperforms state-of-the-art approaches in terms of accuracy and robustness. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Robust Estimation of Absolute Camera Pose via Intersection Constraint and Flow Consensus | -
dc.type | Article | -
dc.identifier.wosid | 000545739000003 | -
dc.identifier.scopusid | 2-s2.0-85090152495 | -
dc.type.rims | ART | -
dc.citation.volume | 29 | -
dc.citation.beginningpage | 6615 | -
dc.citation.endingpage | 6629 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON IMAGE PROCESSING | -
dc.identifier.doi | 10.1109/TIP.2020.2992336 | -
dc.contributor.localauthor | Bazin, Jean-Charles | -
dc.contributor.nonIdAuthor | Li, Haoang | -
dc.contributor.nonIdAuthor | Zhao, Ji | -
dc.contributor.nonIdAuthor | Liu, Yun-Hui | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Cameras | -
dc.subject.keywordAuthor | Pose estimation | -
dc.subject.keywordAuthor | Three-dimensional displays | -
dc.subject.keywordAuthor | Two dimensional displays | -
dc.subject.keywordAuthor | Gravity | -
dc.subject.keywordAuthor | Reliability | -
dc.subject.keywordAuthor | Structure from motion | -
dc.subject.keywordAuthor | absolute camera pose | -
dc.subject.keywordAuthor | outliers | -
dc.subject.keywordAuthor | 3D-to-2D correspondences | -
dc.subject.keywordAuthor | points and/or lines | -
dc.subject.keywordPlus | LINE CORRESPONDENCES | -
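
The abstract above describes a nested scheme: an outer loop that alternates between estimating the camera pose and updating per-correspondence inlier probabilities, and an inner step that refines those probabilities using the flow consensus of 2D displacement vectors. The sketch below only illustrates that alternating structure for 3D-to-2D point correspondences; it is not the authors' implementation. The weighted-DLT pose step, the Gaussian residual weighting, the circular-mean "dominant flow direction" re-weighting, and all function names and parameters (weighted_dlt_pose, flow_consensus_weights, sigma, kappa) are assumptions introduced for illustration.

```python
# Illustrative sketch only -- NOT the paper's algorithm. It mimics the nested
# structure from the abstract for 3D-to-2D point correspondences: an outer loop
# alternating pose estimation and inlier-probability updates, plus an inner
# "flow consensus" re-weighting of 2D displacement vectors (assumed form).
import numpy as np


def project(P, X):
    """Project Nx3 points X with a 3x4 projection matrix P (pinhole model)."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]


def weighted_dlt_pose(X, x, w):
    """Weighted DLT estimate of the 3x4 projection matrix (stand-in pose step)."""
    n = X.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xi = np.append(X[i], 1.0)
        u, v = x[i]
        A[2 * i, 0:4] = w[i] * Xi
        A[2 * i, 8:12] = -w[i] * u * Xi
        A[2 * i + 1, 4:8] = w[i] * Xi
        A[2 * i + 1, 8:12] = -w[i] * v * Xi
    _, _, Vt = np.linalg.svd(A)          # null-space (least-squares) solution
    return Vt[-1].reshape(3, 4)


def flow_consensus_weights(x_obs, x_proj, w, kappa=4.0):
    """Inner step (assumed form): favour correspondences whose 2D displacement
    vectors follow the dominant flow direction, taken here as the weighted
    circular mean of the displacement angles."""
    d = x_obs - x_proj
    ang = np.arctan2(d[:, 1], d[:, 0])
    mean_ang = np.arctan2(np.sum(w * np.sin(ang)), np.sum(w * np.cos(ang)))
    return np.exp(kappa * (np.cos(ang - mean_ang) - 1.0))  # scores in (0, 1]


def robust_pose(X, x, n_iter=20, sigma=2.0):
    """Outer loop: alternate the pose step and the inlier-probability update,
    applying the flow-consensus refinement at every iteration."""
    w = np.ones(len(X))                                   # initial inlier probabilities
    P = None
    for _ in range(n_iter):
        P = weighted_dlt_pose(X, x, w)                    # pose step
        r = np.linalg.norm(x - project(P, X), axis=1)     # reprojection residuals
        w = np.exp(-(r / sigma) ** 2)                     # soft inlier probabilities
        w = w * flow_consensus_weights(x, project(P, X), w)  # inner refinement
        w = np.clip(w / (w.max() + 1e-12), 1e-6, 1.0)     # normalise, avoid zeros
    return P, w
```

As a quick sanity check, one could generate random 3D points, project them with a known 3x4 matrix, replace a fraction of the 2D observations with random positions to act as outliers, and verify that robust_pose drives the weights of the corrupted correspondences towards the lower clip value while recovering a projection matrix close to the ground truth up to scale.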
Appears in Collection
GCT-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
