Familiarity based unified visual attention model for fast and robust object recognition

Cited 30 times in Web of Science; cited 0 times in Scopus
  • Hits: 825
  • Downloads: 0
DC Field / Value / Language
dc.contributor.author: Lee, Seung-Jin (ko)
dc.contributor.author: Kim, Kwan-Ho (ko)
dc.contributor.author: Kim, Joo-Young (ko)
dc.contributor.author: Kim, Min-Su (ko)
dc.contributor.author: Yoo, Hoi-Jun (ko)
dc.date.accessioned: 2013-03-09T05:33:35Z
dc.date.available: 2013-03-09T05:33:35Z
dc.date.created: 2012-02-06
dc.date.issued: 2010-03
dc.identifier.citation: PATTERN RECOGNITION, v.43, no.3, pp.1116 - 1128
dc.identifier.issn: 0031-3203
dc.identifier.uri: http://hdl.handle.net/10203/95493
dc.description.abstract: Even though visual attention models using bottom-up saliency can speed up object recognition by predicting object locations, in the presence of multiple salient objects, saliency alone cannot discern target objects from the clutter in a scene. Using a metric named familiarity, we propose a top-down method for guiding attention towards target objects, in addition to bottom-up saliency. To demonstrate the effectiveness of familiarity, the unified visual attention model (UVAM), which combines top-down familiarity and bottom-up saliency, is applied to SIFT-based object recognition. The UVAM is tested on 3600 artificially generated images containing COIL-100 objects with varying amounts of clutter, and on 126 images of real scenes. The recognition times are reduced by 2.7x and 2x, respectively, with no reduction in recognition accuracy, demonstrating the effectiveness and robustness of the familiarity-based UVAM. (C) 2009 Elsevier Ltd. All rights reserved.
dc.language: English
dc.publisher: ELSEVIER SCI LTD
dc.title: Familiarity based unified visual attention model for fast and robust object recognition
dc.type: Article
dc.identifier.wosid: 000273094100051
dc.identifier.scopusid: 2-s2.0-70449720613
dc.type.rims: ART
dc.citation.volume: 43
dc.citation.issue: 3
dc.citation.beginningpage: 1116
dc.citation.endingpage: 1128
dc.citation.publicationname: PATTERN RECOGNITION
dc.identifier.doi: 10.1016/j.patcog.2009.07.014
dc.contributor.localauthor: Kim, Joo-Young
dc.contributor.localauthor: Yoo, Hoi-Jun
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Visual attention
dc.subject.keywordAuthor: Object recognition
dc.subject.keywordAuthor: Scene analysis
dc.subject.keywordPlus: MECHANISMS
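
The abstract describes combining a top-down familiarity map with a bottom-up saliency map to decide where to run SIFT-based recognition first. As a rough illustration only, the Python sketch below merges two such maps into one attention map and ranks image tiles by attended-ness; the combination weights, the tile-based scan order, and all function names are assumptions made for this sketch, not the UVAM formulation from the paper.

```python
# Illustrative sketch: fuse a bottom-up saliency map with a top-down
# "familiarity" map and visit the most attended regions first as
# candidate locations for SIFT matching. All weights and names here
# are assumptions, not the paper's actual model.
import numpy as np

def combine_attention(saliency, familiarity, w_bu=0.5, w_td=0.5):
    """Weighted sum of min-max normalized bottom-up and top-down maps."""
    def norm(m):
        m = m.astype(float)
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else np.zeros_like(m)
    return w_bu * norm(saliency) + w_td * norm(familiarity)

def attention_scan_order(attention_map, grid=8):
    """Split the map into grid x grid tiles and return tile centers,
    most attended first, as candidate regions for recognition."""
    h, w = attention_map.shape
    tiles = []
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            center = ((ys.start + ys.stop) // 2, (xs.start + xs.stop) // 2)
            tiles.append((attention_map[ys, xs].mean(), center))
    tiles.sort(key=lambda t: t[0], reverse=True)
    return [center for _, center in tiles]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    saliency = rng.random((240, 320))     # stand-in bottom-up saliency map
    familiarity = rng.random((240, 320))  # stand-in top-down familiarity map
    order = attention_scan_order(combine_attention(saliency, familiarity))
    print("First regions to examine:", order[:3])
```

In this kind of scheme, recognition time drops because the detector examines the few highest-priority regions before the rest of the scene, which is the effect the abstract reports for the UVAM.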
Appears in Collection
EE-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.
