DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jeong, Seunghwa | ko |
dc.contributor.author | Lee, Jungjin | ko |
dc.contributor.author | Kim, Bumki | ko |
dc.contributor.author | Kim, Young Hui | ko |
dc.contributor.author | Noh, Junyong | ko |
dc.date.accessioned | 2019-04-15T14:34:49Z | - |
dc.date.available | 2019-04-15T14:34:49Z | - |
dc.date.created | 2017-11-03 | - |
dc.date.issued | 2018-10 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.40, no.10, pp.2455 - 2468 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10203/254187 | - |
dc.description.abstract | We present a hybrid approach that segments an object by using both color and depth information obtained from views captured by a low-cost RGBD camera and sparsely-located color cameras. Our system begins by generating dense depth information for each target image using Structure from Motion and Joint Bilateral Upsampling. We formulate multi-view object segmentation as a Markov Random Field energy optimization on a graph constructed from superpixels. To ensure inter-view consistency of the segmentation results between color images that share too few color features, our local mapping method generates dense inter-view geometric correspondences from the dense depth images. Finally, a pixel-based optimization step refines the boundaries of the results obtained from the superpixel-based binary segmentation. We evaluate the validity of our method under various capture conditions, such as the number of views and the rotations and distances between cameras. We also compare our method with state-of-the-art methods on standard multi-view datasets. The comparison verifies that the proposed method works efficiently, especially in a sparse wide-baseline capture environment. | - |
dc.language | English | - |
dc.publisher | IEEE COMPUTER SOC | - |
dc.subject | GRAPH CUTS | - |
dc.title | Object Segmentation Ensuring Consistency across Multi-viewpoint Images | - |
dc.type | Article | - |
dc.identifier.wosid | 000443875500013 | - |
dc.identifier.scopusid | 2-s2.0-85030789557 | - |
dc.type.rims | ART | - |
dc.citation.volume | 40 | - |
dc.citation.issue | 10 | - |
dc.citation.beginningpage | 2455 | - |
dc.citation.endingpage | 2468 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | - |
dc.identifier.doi | 10.1109/TPAMI.2017.2757928 | - |
dc.contributor.localauthor | Noh, Junyong | - |
dc.contributor.nonIdAuthor | Lee, Jungjin | - |
dc.contributor.nonIdAuthor | Kim, Young Hui | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Multi-view segmentation | - |
dc.subject.keywordAuthor | wide-baseline capture environment | - |
dc.subject.keywordAuthor | inter-view consistency | - |
dc.subject.keywordAuthor | depth projection | - |
dc.subject.keywordPlus | GRAPH CUTS | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.