Improved Emotion Recognition With a Novel Speaker-Independent Feature

Cited 49 times in Web of Science; cited 0 times in Scopus
  • Hit: 929
  • Download: 0
DC Field | Value | Language
dc.contributor.author | Kim, Eun Ho | ko
dc.contributor.author | Hyun, Kyung Hak | ko
dc.contributor.author | Kim, Soohyun | ko
dc.contributor.author | Kwak, Yoon Keun | ko
dc.date.accessioned | 2013-03-09T16:39:25Z | -
dc.date.available | 2013-03-09T16:39:25Z | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 2009-06 | -
dc.identifier.citation | IEEE-ASME TRANSACTIONS ON MECHATRONICS, v.14, no.3, pp.317 - 325 | -
dc.identifier.issn | 1083-4435 | -
dc.identifier.uri | http://hdl.handle.net/10203/96882 | -
dc.description.abstract | Emotion recognition is one of the latest challenges in human-robot interaction. This paper describes the realization of emotional interaction for the Thinking Robot, focusing on speech emotion recognition. In general, speaker-independent systems show lower accuracy than speaker-dependent systems because emotional feature values depend on the speaker and their gender; however, speaker-independent systems are required for commercial applications. This paper proposes a novel speaker-independent feature, the ratio of the spectral flatness measure to the spectral center (RSS), which shows small variation across speakers and is therefore suited to constructing a speaker-independent system. Gender and emotion are classified hierarchically using the proposed RSS feature together with pitch, energy, and the mel-frequency cepstral coefficients. The proposed system achieves an average recognition rate of 57.2% (+/- 5.7% at a 90% confidence interval) in the speaker-independent mode. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.subject | HUMAN-COMPUTER INTERACTION | -
dc.subject | SPEECH RECOGNITION | -
dc.subject | SYSTEM | -
dc.subject | INTERFACE | -
dc.subject | STRESS | -
dc.subject | NOISE | -
dc.title | Improved Emotion Recognition With a Novel Speaker-Independent Feature | -
dc.type | Article | -
dc.identifier.wosid | 000267438400005 | -
dc.identifier.scopusid | 2-s2.0-67650159383 | -
dc.type.rims | ART | -
dc.citation.volume | 14 | -
dc.citation.issue | 3 | -
dc.citation.beginningpage | 317 | -
dc.citation.endingpage | 325 | -
dc.citation.publicationname | IEEE-ASME TRANSACTIONS ON MECHATRONICS | -
dc.identifier.doi | 10.1109/TMECH.2008.2008644 | -
dc.contributor.localauthor | Kim, Soohyun | -
dc.contributor.localauthor | Kwak, Yoon Keun | -
dc.contributor.nonIdAuthor | Kim, Eun Ho | -
dc.contributor.nonIdAuthor | Hyun, Kyung Hak | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Emotional interaction | -
dc.subject.keywordAuthor | intelligent robots | -
dc.subject.keywordAuthor | speaker-independent system | -
dc.subject.keywordAuthor | speech emotion recognition | -
dc.subject.keywordPlus | HUMAN-COMPUTER INTERACTION | -
dc.subject.keywordPlus | SPEECH RECOGNITION | -
dc.subject.keywordPlus | SYSTEM | -
dc.subject.keywordPlus | INTERFACE | -
dc.subject.keywordPlus | STRESS | -
dc.subject.keywordPlus | NOISE | -
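The RSS feature described in the abstract is a ratio of two standard spectral descriptors. The record does not give the paper's exact formulas, so the sketch below assumes the common definitions: spectral flatness as the geometric mean over the arithmetic mean of the power spectrum, and spectral center as the magnitude-weighted mean frequency (centroid).

```python
import numpy as np

def rss_feature(frame, sr, eps=1e-12):
    """Sketch of the RSS feature: ratio of the spectral flatness measure (SFM)
    to the spectral center. Standard definitions are assumed here; the paper's
    exact formulation is not given in this record."""
    # Windowed power spectrum of one speech frame.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    # Spectral flatness: geometric mean / arithmetic mean of the power spectrum.
    sfm = np.exp(np.mean(np.log(spectrum + eps))) / (np.mean(spectrum) + eps)
    # Spectral center (centroid) in Hz: power-weighted mean frequency.
    center = np.sum(freqs * spectrum) / (np.sum(spectrum) + eps)
    return sfm / (center + eps)
```

A noise-like frame (high flatness) yields a larger RSS value than a tonal frame (low flatness), which is the kind of spectral-shape contrast such a ratio captures.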
Appears in Collection
ME-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
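The abstract describes a hierarchical scheme: gender is classified first, then emotion is classified with a gender-specific model. The record does not name the classifiers used in the paper, so the sketch below stands in logistic regression purely for illustration; the class structure, not the learner, is the point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class HierarchicalEmotionClassifier:
    """Sketch of hierarchical gender-then-emotion classification.
    The paper's actual classifiers are not named in this record;
    logistic regression is an illustrative stand-in."""

    def fit(self, X, gender, emotion):
        # Stage 1: one model for gender over all samples.
        self.gender_clf = LogisticRegression(max_iter=1000).fit(X, gender)
        # Stage 2: one emotion model per gender group.
        self.emotion_clfs = {
            g: LogisticRegression(max_iter=1000).fit(
                X[gender == g], emotion[gender == g])
            for g in np.unique(gender)
        }
        return self

    def predict(self, X):
        # Route each sample to the emotion model of its predicted gender.
        g_pred = self.gender_clf.predict(X)
        return np.array([
            self.emotion_clfs[g].predict(x[None, :])[0]
            for g, x in zip(g_pred, X)
        ])
```

The feature matrix X would hold per-utterance features such as RSS, pitch, energy, and MFCC statistics; splitting by gender first reduces the speaker-dependent variation each emotion model must absorb.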
This item is cited by other documents in WoS
