FCSS: Fully Convolutional Self-Similarity for Dense Semantic Correspondence

Cited 18 times in Web of Science · Cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Kim, Seungryong | ko
dc.contributor.author | Min, Dongbo | ko
dc.contributor.author | Ham, Bumsub | ko
dc.contributor.author | Lin, Stephen | ko
dc.contributor.author | Sohn, Kwanghoon | ko
dc.date.accessioned | 2024-08-16T03:00:08Z | -
dc.date.available | 2024-08-16T03:00:08Z | -
dc.date.created | 2024-08-16 | -
dc.date.issued | 2019-03 | -
dc.identifier.citation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.41, no.3, pp.581 - 595 | -
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://hdl.handle.net/10203/322323 | -
dc.description.abstract | We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. Unlike traditional dense correspondence approaches for estimating depth or optical flow, semantic correspondence estimation poses additional challenges due to intra-class appearance and shape variations among different instances within the same object or scene category. To robustly match points across semantically similar images, we formulate FCSS using local self-similarity (LSS), which is inherently insensitive to intra-class appearance variations. LSS is incorporated through a proposed convolutional self-similarity (CSS) layer, where the sampling patterns and the self-similarity measure are jointly learned in an end-to-end and multi-scale manner. Furthermore, to address shape variations among different object instances, we propose a convolutional affine transformer (CAT) layer that estimates explicit affine transformation fields at each pixel to transform the sampling patterns and corresponding receptive fields. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in most existing datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS significantly outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks. | -
dc.language | English | -
dc.publisher | IEEE COMPUTER SOC | -
dc.title | FCSS: Fully Convolutional Self-Similarity for Dense Semantic Correspondence | -
dc.type | Article | -
dc.identifier.wosid | 000458168800005 | -
dc.identifier.scopusid | 2-s2.0-85041496768 | -
dc.type.rims | ART | -
dc.citation.volume | 41 | -
dc.citation.issue | 3 | -
dc.citation.beginningpage | 581 | -
dc.citation.endingpage | 595 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | -
dc.identifier.doi | 10.1109/TPAMI.2018.2803169 | -
dc.contributor.localauthor | Kim, Seungryong | -
dc.contributor.nonIdAuthor | Min, Dongbo | -
dc.contributor.nonIdAuthor | Ham, Bumsub | -
dc.contributor.nonIdAuthor | Lin, Stephen | -
dc.contributor.nonIdAuthor | Sohn, Kwanghoon | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | convolutional neural networks | -
dc.subject.keywordAuthor | self-similarity | -
dc.subject.keywordAuthor | weakly-supervised learning | -
dc.subject.keywordAuthor | Dense semantic correspondence | -
dc.subject.keywordPlus | SCENES | -
dc.subject.keywordPlus | FLOW | -
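
As a rough illustration of the local self-similarity idea described in the abstract above, the sketch below computes an LSS-style descriptor from a convolutional feature map in PyTorch. It is only a minimal approximation under stated assumptions: the sampling offsets and the similarity bandwidth are fixed here, whereas the paper's CSS layer learns both end-to-end and in a multi-scale manner, and the CAT layer's per-pixel affine warping of the sampling pattern is omitted. The function name lss_descriptor and its parameters are illustrative and are not taken from the paper or any released code.

import torch
import torch.nn.functional as F

def lss_descriptor(feat, offsets, bandwidth=1.0):
    """Compute a local self-similarity (LSS) style descriptor (illustrative sketch).

    feat:      (B, C, H, W) convolutional feature map.
    offsets:   list of (dy, dx) integer offsets defining the sampling pattern
               (fixed here; the FCSS CSS layer learns the sampling pattern).
    bandwidth: softness of the exponential similarity (also learned in FCSS).

    Returns a (B, len(offsets), H, W) descriptor: each channel is the similarity
    between the feature at a pixel and the feature at one sampled offset.
    """
    channels = []
    for dy, dx in offsets:
        # Shift the feature map by (dy, dx); torch.roll wraps at the borders,
        # which is a simplification of proper padding.
        shifted = torch.roll(feat, shifts=(dy, dx), dims=(2, 3))
        # Squared L2 distance between the centre and offset features, per pixel.
        dist = ((feat - shifted) ** 2).sum(dim=1, keepdim=True)
        # Exponential similarity, in the spirit of LSS correlation surfaces.
        channels.append(torch.exp(-dist / (2 * bandwidth ** 2)))
    desc = torch.cat(channels, dim=1)
    # L2-normalise the descriptor at each pixel, a common choice for matching.
    return F.normalize(desc, p=2, dim=1)

# Usage: a 3x3 neighbourhood sampling pattern (excluding the centre offset).
offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
feat = torch.randn(1, 64, 32, 32)   # e.g. an intermediate CNN feature map
desc = lss_descriptor(feat, offsets)
print(desc.shape)                   # torch.Size([1, 8, 32, 32])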
Appears in Collection
AI-Journal Papers(저널논문)
Files in This Item
There are no files associated with this item.