Object Segmentation Ensuring Consistency across Multi-viewpoint Images

Cited 6 times in Web of Science / Cited 6 times in Scopus
  • Hit: 625
  • Download: 0
DC Field | Value | Language
dc.contributor.author | Jeong, Seunghwa | ko
dc.contributor.author | Lee, Jungjin | ko
dc.contributor.author | Kim, Bumki | ko
dc.contributor.author | Kim, Young Hui | ko
dc.contributor.author | Noh, Junyong | ko
dc.date.accessioned | 2019-04-15T14:34:49Z | -
dc.date.available | 2019-04-15T14:34:49Z | -
dc.date.created | 2017-11-03 | -
dc.date.issued | 2018-10 | -
dc.identifier.citation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.40, no.10, pp.2455 - 2468 | -
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://hdl.handle.net/10203/254187 | -
dc.description.abstract | We present a hybrid approach that segments an object using both color and depth information obtained from views captured by a low-cost RGBD camera and sparsely located color cameras. Our system begins by generating dense depth information for each target image using Structure from Motion and Joint Bilateral Upsampling. We formulate multi-view object segmentation as a Markov Random Field energy optimization on a graph constructed from superpixels. To ensure inter-view consistency of the segmentation results between color images that share too few color features, our local mapping method generates dense inter-view geometric correspondences from the dense depth images. Finally, a pixel-based optimization step refines the boundaries of the results obtained from the superpixel-based binary segmentation. We evaluate the validity of our method under various capture conditions, such as the number of views, rotations, and distances between cameras, and compare it with state-of-the-art methods on standard multi-view datasets. The comparison verifies that the proposed method works efficiently, especially in a sparse wide-baseline capture environment. | -
dc.language | English | -
dc.publisher | IEEE COMPUTER SOC | -
dc.subject | GRAPH CUTS | -
dc.title | Object Segmentation Ensuring Consistency across Multi-viewpoint Images | -
dc.type | Article | -
dc.identifier.wosid | 000443875500013 | -
dc.identifier.scopusid | 2-s2.0-85030789557 | -
dc.type.rims | ART | -
dc.citation.volume | 40 | -
dc.citation.issue | 10 | -
dc.citation.beginningpage | 2455 | -
dc.citation.endingpage | 2468 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | -
dc.identifier.doi | 10.1109/TPAMI.2017.2757928 | -
dc.contributor.localauthor | Noh, Junyong | -
dc.contributor.nonIdAuthor | Lee, Jungjin | -
dc.contributor.nonIdAuthor | Kim, Young Hui | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Multi-view segmentation | -
dc.subject.keywordAuthor | wide-baseline capture environment | -
dc.subject.keywordAuthor | inter-view consistency | -
dc.subject.keywordAuthor | depth projection | -
dc.subject.keywordPlus | GRAPH CUTS | -
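
The abstract above formulates multi-view object segmentation as a binary Markov Random Field energy on a superpixel graph, in line with the GRAPH CUTS keyword. The snippet below is a minimal, generic sketch of that kind of superpixel-level binary labeling solved with a single s-t min-cut, using Python and networkx. It is not the authors' implementation; the unary costs, adjacency list, and smoothness weights are hypothetical placeholders standing in for the color, depth, and inter-view consistency terms described in the abstract.

```python
import networkx as nx

def segment_superpixels(unary_fg, unary_bg, adjacency, smoothness):
    """Binary MRF labeling of superpixels via a single s-t min-cut.

    unary_fg[i] / unary_bg[i] : cost of labeling superpixel i foreground / background.
    adjacency                 : iterable of (i, j) index pairs of neighboring superpixels.
    smoothness[(i, j)]        : pairwise penalty when i and j receive different labels.
    Returns the set of superpixel indices labeled foreground.
    """
    G = nx.DiGraph()
    S, T = "src", "snk"
    for i, (cost_fg, cost_bg) in enumerate(zip(unary_fg, unary_bg)):
        # Terminal links encode the data (unary) terms:
        # the edge i -> T is cut when i stays on the source side (foreground),
        # the edge S -> i is cut when i falls on the sink side (background).
        G.add_edge(S, i, capacity=cost_bg)
        G.add_edge(i, T, capacity=cost_fg)
    for i, j in adjacency:
        # Neighborhood links encode the smoothness (pairwise) terms,
        # paid only when the cut separates i from j (different labels).
        w = smoothness[(i, j)]
        G.add_edge(i, j, capacity=w)
        G.add_edge(j, i, capacity=w)
    _, (source_side, _) = nx.minimum_cut(G, S, T)
    return {n for n in source_side if n != S}

if __name__ == "__main__":
    # Toy example: 3 superpixels in a chain; superpixel 0 strongly prefers
    # foreground, 2 prefers background, 1 is ambiguous and follows its neighbor.
    fg = [0.1, 1.0, 5.0]
    bg = [5.0, 1.0, 0.1]
    adj = [(0, 1), (1, 2)]
    smooth = {(0, 1): 3.0, (1, 2): 0.5}
    print(segment_superpixels(fg, bg, adj, smooth))  # expected: {0, 1}
```

Under this construction, superpixels that remain on the source side of the minimum cut receive the foreground label; in a multi-view setting such as the one described above, the inter-view correspondence and depth cues would enter through these unary and pairwise costs.
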
Appears in Collection
GCT-Journal Papers(저널논문)
Files in This Item
There are no files associated with this item.
