DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Seung Ho | ko |
dc.contributor.author | Ro, Yong-Man | ko |
dc.date.accessioned | 2017-01-18T02:51:05Z | - |
dc.date.available | 2017-01-18T02:51:05Z | - |
dc.date.created | 2015-10-08 | - |
dc.date.issued | 2016-10 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, v.7, no.4, pp.389 - 408 | - |
dc.identifier.issn | 1949-3045 | - |
dc.identifier.uri | http://hdl.handle.net/10203/219660 | - |
dc.description.abstract | Facial dynamics contain useful information for facial expression recognition (FER). However, exploiting dynamics in FER is challenging, mainly due to the variety of expression transitions. For example, video sequences belonging to the same emotion class may differ in transition duration and/or transition type (e.g., onset versus offset). Such temporal mismatches between query and training video sequences can degrade FER performance. This paper proposes a new partial matching framework that aims to overcome the temporal mismatch of expression transitions. During the training stage, we construct an over-complete transition dictionary containing many possible partial expression transitions. During the test stage, we extract a number of partial expression transitions from a query video sequence. Each partial expression transition is analyzed individually. This increases the possibility of matching a partial expression transition in the query video sequence against the partial expression transitions in the over-complete transition dictionary. To make partial matching subject-independent and robust to temporal mismatch, each partial expression transition is defined as the facial shape displacement between a pair of face clusters. Experimental results show that the proposed method is robust to variations of transition duration and transition type in subject-independent recognition. | - |
dc.language | English | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.title | Partial Matching of Facial Expression Sequence Using Over-Complete Transition Dictionary for Emotion Recognition | - |
dc.type | Article | - |
dc.identifier.wosid | 000389328800008 | - |
dc.identifier.scopusid | 2-s2.0-85027465989 | - |
dc.type.rims | ART | - |
dc.citation.volume | 7 | - |
dc.citation.issue | 4 | - |
dc.citation.beginningpage | 389 | - |
dc.citation.endingpage | 408 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING | - |
dc.identifier.doi | 10.1109/TAFFC.2015.2496320 | - |
dc.contributor.localauthor | Ro, Yong-Man | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Facial expression recognition (FER) | - |
dc.subject.keywordAuthor | sparse representation based classifier (SRC) | - |
dc.subject.keywordAuthor | over-complete transition dictionary | - |
dc.subject.keywordAuthor | partial expression transition features | - |
dc.subject.keywordPlus | LOCAL BINARY PATTERNS | - |
dc.subject.keywordPlus | SPARSE REPRESENTATION | - |
dc.subject.keywordPlus | FACE RECOGNITION | - |
dc.subject.keywordPlus | IMAGE SEQUENCES | - |
dc.subject.keywordPlus | MINIMIZATION | - |
dc.subject.keywordPlus | REDUCTION | - |
dc.subject.keywordPlus | ALGORITHM | - |
dc.subject.keywordPlus | FEATURES | - |
dc.subject.keywordPlus | MANIFOLD | - |
dc.subject.keywordPlus | PCA | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
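The abstract and author keywords mention a sparse representation based classifier (SRC) over an over-complete transition dictionary. As an illustration only (not the paper's implementation), the sketch below shows the generic SRC decision rule: sparse-code a query feature over a dictionary whose columns are class-labeled training features, then assign the class with the smallest class-wise reconstruction residual. Greedy orthogonal matching pursuit stands in for the usual ℓ1 minimization, and the dictionary, label array, and sparsity level `k` are all hypothetical.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: a sparse code for y over dictionary D.

    D: (d, n) matrix whose columns are training feature atoms.
    y: (d,) query feature vector.
    k: number of greedy selection steps (sparsity budget).
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, k=5):
    """SRC decision rule: class with the smallest class-wise residual."""
    x = omp(D, y, k)
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # Reconstruct y using only this class's atoms and coefficients.
        res = np.linalg.norm(y - D[:, mask] @ x[mask])
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

In the paper's framework, each dictionary column would hold a partial-transition feature (facial shape displacement between a pair of face clusters), so a query sequence contributes several feature vectors that are each classified independently; the sketch above covers only the per-feature SRC step.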