Saliency Refinement: Towards a Uniformly Highlighted Salient Object

Cited 1 time in Web of Science · Cited 0 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Eun, Hyunjun (ko)
dc.contributor.author: Kim, Yoonhyung (ko)
dc.contributor.author: Jung, Chanho (ko)
dc.contributor.author: Kim, Chang-Ick (ko)
dc.date.accessioned: 2018-01-22T08:59:57Z
dc.date.available: 2018-01-22T08:59:57Z
dc.date.created: 2017-12-22
dc.date.issued: 2018-03
dc.identifier.citation: SIGNAL PROCESSING-IMAGE COMMUNICATION, v.62, pp.16 - 32
dc.identifier.issn: 0923-5965
dc.identifier.uri: http://hdl.handle.net/10203/237662
dc.description.abstract: Humans naturally tend to view a visually attractive (i.e., salient) object in its entirety. However, previous methods for salient object detection highlight only parts of the salient object. This problem severely limits the adoption of such methods in various computer vision and pattern recognition applications. To address it, in this paper, we present a novel framework for improving saliency maps obtained from recent state-of-the-art salient object detection approaches. Based on the fact that L0 optimization efficiently minimizes variation between values, we integrate a background saliency and an initial saliency through nonlocal L0 optimization. We first extract background samples, building upon the initial saliency and color information, to estimate the background saliency. We then integrate the background saliency into the initial saliency by solving an optimization problem formulated with the nonlocal L0 gradient, which efficiently minimizes saliency variation within the salient object. To confirm the effectiveness of the proposed method, we apply the framework to saliency maps generated by state-of-the-art methods. Experimental results on benchmark datasets demonstrate that the proposed framework significantly improves these saliency maps. Furthermore, we compare our framework with two existing refinement frameworks to demonstrate its superiority.
dc.language: English
dc.publisher: ELSEVIER SCIENCE BV
dc.title: Saliency Refinement: Towards a Uniformly Highlighted Salient Object
dc.type: Article
dc.identifier.wosid: 000427209600002
dc.identifier.scopusid: 2-s2.0-85042286149
dc.type.rims: ART
dc.citation.volume: 62
dc.citation.beginningpage: 16
dc.citation.endingpage: 32
dc.citation.publicationname: SIGNAL PROCESSING-IMAGE COMMUNICATION
dc.identifier.doi: 10.1016/j.image.2017.12.003
dc.contributor.localauthor: Kim, Chang-Ick
dc.contributor.nonIdAuthor: Jung, Chanho
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Salient object detection
dc.subject.keywordAuthor: Saliency refinement
dc.subject.keywordAuthor: A uniformly highlighted salient object
dc.subject.keywordAuthor: Nonlocal L-0 optimization
dc.subject.keywordPlus: REGION DETECTION
dc.subject.keywordPlus: QUALITY ASSESSMENT
dc.subject.keywordPlus: VISUAL-ATTENTION
dc.subject.keywordPlus: IMAGE
dc.subject.keywordPlus: MODEL
dc.subject.keywordPlus: SEGMENTATION
dc.subject.keywordPlus: VIDEO
dc.subject.keywordPlus: CONTRAST
dc.subject.keywordPlus: TRACKING
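The core idea in the abstract, that an L0 penalty on the saliency gradient flattens variation inside the salient object while preserving its boundary, can be illustrated with a minimal 1-D sketch. The function below is not the authors' code: it applies the standard half-quadratic splitting scheme for L0 gradient minimization (the classical technique the paper builds on) to a hypothetical noisy saliency profile; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def l0_smooth_1d(s0, lam=0.05, beta_max=1e5, kappa=2.0):
    """Approximately minimize ||s - s0||^2 + lam * ||D s||_0 on a 1-D signal
    via half-quadratic splitting, where D is a periodic forward-difference
    operator. This is the classical L0 gradient minimization scheme, not the
    paper's nonlocal 2-D formulation."""
    n = len(s0)
    s = np.asarray(s0, dtype=float).copy()
    D = np.eye(n, k=1) - np.eye(n)
    D[-1, 0] = 1.0                       # periodic boundary condition
    I = np.eye(n)
    beta = 2.0 * lam
    while beta < beta_max:
        g = D @ s
        # h-subproblem: hard threshold enforces L0 sparsity on the gradients
        h = np.where(g * g > lam / beta, g, 0.0)
        # s-subproblem: quadratic in s, solved via the normal equations
        s = np.linalg.solve(I + beta * (D.T @ D), s0 + beta * (D.T @ h))
        beta *= kappa
    return s

# Toy "saliency profile": a salient region (value 1) on background (0) + noise.
rng = np.random.default_rng(0)
s0 = np.concatenate([np.zeros(25), np.ones(25)]) + 0.05 * rng.standard_normal(50)
out = l0_smooth_1d(s0)
```

After smoothing, the noisy variation within each region shrinks while the object/background step survives, which is the "uniformly highlighted salient object" behavior the refinement targets.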
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
