Improving multi-modal emotion recognition with counterpart data in dyadic conversations

Because our daily lives consist of countless interactions, many studies have attempted to predict human emotions, and it is desirable to predict them within those interactions. However, most studies have used only the speaker's data, not the counterpart's, to predict the speaker's emotions, because datasets that label human emotions in naturalistic conversation are rare. In this study, we propose a method for predicting the speaker's emotions in naturalistic conversation using a speaker encoder and a counterpart encoder, each composed of CNN-LSTM deep learning networks. We empirically evaluated our model on K-EmoCon, an emotion dataset collected during debates. The results showed that the counterpart's speech and physiological signals had a positive impact on predicting the speaker's emotions. We hope this work contributes to research on predicting emotions in naturalistic conversation.
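The abstract outlines a dual-encoder CNN-LSTM design: one encoder for the speaker's signals, one for the counterpart's, with the two representations fused to classify the speaker's emotion. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; the layer sizes, input dimensions, number of emotion classes, and concatenation-based fusion are illustrative assumptions, not the thesis' actual configuration.

# Minimal sketch of a speaker/counterpart dual-encoder CNN-LSTM classifier.
# All hyperparameters and the fusion scheme are assumptions for illustration.
import torch
import torch.nn as nn


class CNNLSTMEncoder(nn.Module):
    """Encodes a (batch, time, features) sequence with 1D convolutions followed by an LSTM."""

    def __init__(self, in_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_features, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Conv1d expects (batch, channels, time), so transpose before and after.
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h_n, _) = self.lstm(h)
        return h_n[-1]  # final hidden state as the sequence-level embedding


class DyadicEmotionClassifier(nn.Module):
    """Fuses speaker and counterpart embeddings to predict the speaker's emotion."""

    def __init__(self, speaker_dim: int, counterpart_dim: int, n_classes: int = 3):
        super().__init__()
        self.speaker_enc = CNNLSTMEncoder(speaker_dim)
        self.counterpart_enc = CNNLSTMEncoder(counterpart_dim)
        self.classifier = nn.Linear(64 * 2, n_classes)

    def forward(self, speaker_seq: torch.Tensor, counterpart_seq: torch.Tensor) -> torch.Tensor:
        s = self.speaker_enc(speaker_seq)
        c = self.counterpart_enc(counterpart_seq)
        # Concatenation fusion is an assumption; other fusion strategies are possible.
        return self.classifier(torch.cat([s, c], dim=-1))


if __name__ == "__main__":
    # Dummy shapes: 8 samples, 100 time steps, 40 speaker features (e.g., audio)
    # and 6 counterpart features (e.g., physiological signals); 3 emotion classes.
    model = DyadicEmotionClassifier(speaker_dim=40, counterpart_dim=6)
    logits = model(torch.randn(8, 100, 40), torch.randn(8, 100, 6))
    print(logits.shape)  # torch.Size([8, 3])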
Advisors
Lee, Uichin (이의진)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: Graduate School of Knowledge Service Engineering, 2022.8, [iii, 27 p.]

Keywords

Emotion recognition; Affective computing; Naturalistic conversation; Interpersonal features; Deep neural networks; Multimodal

URI
http://hdl.handle.net/10203/309655
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1008429&flag=dissertation
Appears in Collection
KSE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
