Multimodal Emotion Recognition Using Classifier Reliability-based Aggregation

This paper addresses emotion recognition by individually processing and then aggregating different modes of human communication through a classification and aggregation framework. Specifically, the proposed framework processes speech acoustics, facial expressions, and body language using unimodal emotion classifiers. The speech emotion classifier is a deep neural network (DNN), while the facial and body language emotion classifiers are implemented using supervised fuzzy adaptive resonance theory. The speech emotion classifier uses acoustic features, the facial emotion classifier uses features based on facial animation parameters (FAP), and the body language emotion classifier uses head and hand features. To aggregate the unimodal evaluations, the paper also proposes classifier reliability-based aggregation preferences, which are derived from the per-emotion accuracies of the unimodal classifiers. The results show that the proposed framework outperforms existing techniques. Furthermore, because of late fusion, the proposed approach remains functional as long as at least one mode of communication is available.
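
A minimal sketch of the reliability-weighted late-fusion step described in the abstract is given below. The emotion label set, the numeric reliability values, and all function and variable names are illustrative assumptions, not the authors' implementation; it only shows how per-emotion classifier accuracies could serve as aggregation preferences and why missing modalities are tolerated.

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]  # assumed label set

def fuse(unimodal_scores, reliabilities):
    """Reliability-weighted late fusion (illustrative sketch).

    unimodal_scores: dict mapping modality name -> per-emotion score vector
        (e.g. class posteriors). Unavailable modalities are simply omitted,
        which is what makes late fusion robust to missing modes.
    reliabilities: dict mapping modality name -> per-emotion accuracy vector
        measured on validation data, used as aggregation preferences.
    """
    total = np.zeros(len(EMOTIONS))
    for modality, scores in unimodal_scores.items():
        # Weight each modality's evaluation by its per-emotion reliability.
        total += reliabilities[modality] * np.asarray(scores)
    return EMOTIONS[int(np.argmax(total))]

# Example: the body-language classifier is unavailable, so only the speech
# and face evaluations are aggregated (hypothetical numbers).
scores = {
    "speech": [0.6, 0.2, 0.1, 0.1],
    "face":   [0.3, 0.4, 0.2, 0.1],
}
reliability = {
    "speech": np.array([0.9, 0.5, 0.7, 0.6]),  # per-emotion accuracies
    "face":   np.array([0.6, 0.8, 0.7, 0.7]),
    "body":   np.array([0.5, 0.6, 0.8, 0.6]),
}
print(fuse(scores, reliability))  # -> "anger"
```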
Publisher
IEEE
Issue Date
2018-10
Language
English
Citation
IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 135-140
ISSN
1062-922X
DOI
10.1109/SMC.2018.00034
URI
http://hdl.handle.net/10203/274858
Appears in Collection
EE-Conference Papers (학술회의논문)