Facial Dynamics Interpreter Network: What are the Important Relations between Local Dynamics for Facial Trait Estimation?

Cited 4 times in Web of Science · Cited 0 times in Scopus
  • Hit: 230
  • Download: 0
DC Field: Value (Language)
dc.contributor.author: Kim, Seong Tae (ko)
dc.contributor.author: Ro, Yong Man (ko)
dc.date.accessioned: 2018-09-18T06:06:16Z
dc.date.available: 2018-09-18T06:06:16Z
dc.date.created: 2018-07-31
dc.date.issued: 2018-09-10
dc.identifier.citation: European Conference on Computer Vision, ECCV 2018, pp. 475-491
dc.identifier.uri: http://hdl.handle.net/10203/245510
dc.description.abstract: Human face analysis is an important task in computer vision. According to cognitive-psychological studies, facial dynamics can provide crucial cues for face analysis: the motion of one facial local region during an expression is related to the motion of other facial local regions. In this paper, a novel deep learning approach, named the facial dynamics interpreter network, is proposed to interpret the important relations between local dynamics for estimating facial traits from an expression sequence. The facial dynamics interpreter network is designed to encode a relational importance, which is used both to interpret the relations between facial local dynamics and to estimate facial traits. Comparative experiments verify the effectiveness of the proposed method, and the important relations between facial local dynamics are investigated for gender classification and age estimation. Moreover, experimental results show that the proposed method outperforms state-of-the-art methods in gender classification and age estimation.
dc.language: English
dc.publisher: European Conference on Computer Vision Committee
dc.title: Facial Dynamics Interpreter Network: What are the Important Relations between Local Dynamics for Facial Trait Estimation?
dc.type: Conference
dc.identifier.wosid: 000604449400029
dc.identifier.scopusid: 2-s2.0-85055107263
dc.type.rims: CONF
dc.citation.beginningpage: 475
dc.citation.endingpage: 491
dc.citation.publicationname: European Conference on Computer Vision, ECCV 2018
dc.identifier.conferencecountry: GE
dc.identifier.conferencelocation: Gasteig, Munich
dc.identifier.doi: 10.1007/978-3-030-01258-8_29
dc.contributor.localauthor: Ro, Yong Man
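The abstract's core mechanism, weighting pairwise relations between facial local dynamics by a learned relational importance, can be sketched roughly as follows. This is a minimal NumPy sketch, not the paper's architecture: the region count, feature dimensions, random weight matrices, and the softmax scoring function are all illustrative assumptions standing in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N facial local regions, each described by a
# D-dimensional dynamic feature extracted from an expression sequence.
N, D = 6, 8
local_dyn = rng.standard_normal((N, D))

# Pairwise relation features: concatenate the features of every ordered
# region pair (i, j), i != j.
pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
rel_feats = np.stack(
    [np.concatenate([local_dyn[i], local_dyn[j]]) for i, j in pairs]
)  # shape (N*(N-1), 2*D)

# Relational importance: score each pair (here with a random linear map
# as a stand-in for a learned scorer) and normalize with a softmax, so
# the weights are interpretable as the importance of each relation.
w_score = rng.standard_normal((2 * D, 1))
scores = rel_feats @ w_score
alpha = np.exp(scores - scores.max()) / np.exp(scores - scores.max()).sum()

# Trait representation: an importance-weighted aggregation of per-pair
# relation embeddings, which would feed a trait classifier/regressor.
w_rel = rng.standard_normal((2 * D, 16))
rel_emb = np.maximum(rel_feats @ w_rel, 0.0)  # per-pair embedding, ReLU
aggregated = (alpha * rel_emb).sum(axis=0)    # shape (16,)
```

Inspecting `alpha` after training is what would make the model interpretable: large entries mark the region pairs whose joint dynamics matter most for the trait being estimated.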
Appears in Collection
EE-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
