Feature Vector Classification based Speech Emotion Recognition for Service Robots

Cited 71 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Park, JS | ko
dc.contributor.author | Kim, JH | ko
dc.contributor.author | Oh, Yung-Hwan | ko
dc.date.accessioned | 2013-03-12T02:04:49Z | -
dc.date.available | 2013-03-12T02:04:49Z | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 2009-08 | -
dc.identifier.citation | IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, v.55, pp.1590 - 1596 | -
dc.identifier.issn | 0098-3063 | -
dc.identifier.uri | http://hdl.handle.net/10203/101045 | -
dc.description.abstract | This paper proposes an efficient feature vector classification for Speech Emotion Recognition (SER) in service robots. Since service robots interact with diverse users who are in various emotional states, two important issues must be addressed: acoustically similar characteristics between emotions, and variable speaker characteristics due to different user speaking styles. Each of these issues may cause a substantial amount of overlap between emotion models in feature vector space, thus decreasing SER accuracy. In order to reduce the effects caused by such overlaps, this paper proposes an efficient feature vector classification for SER. The conventional feature vector classification applied to speaker identification categorizes feature vectors as overlapped and non-overlapped. Because this method discards all of the overlapped vectors during model reconstruction, it has limitations in constructing robust models when the number of overlapped vectors increases significantly, as in emotion recognition. The method proposed herein classifies overlapped vectors in a more sophisticated manner, selecting discriminative vectors among the overlapped vectors and adding those vectors during model reconstruction. In SER experiments using an emotional speech corpus, the proposed classification approach exhibited performance superior to conventional methods and displayed almost human-level performance. In particular, we achieved commercially applicable performance for two-class (negative vs. non-negative) emotion recognition. | -
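The abstract describes a three-way split of training vectors: non-overlapped vectors are always kept, and among overlapped vectors only the discriminative ones are retained for model reconstruction. The following is a toy sketch of that idea; the distance-based overlap test, the threshold values, and all function names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def classify_vectors(vectors, mean_a, mean_b, overlap_thresh=1.0, margin=0.3):
    """Toy sketch of overlap-aware feature-vector selection (illustrative only).

    A vector counts as 'overlapped' when its distances to the two emotion-model
    centroids are similar. Overlapped vectors that still favor one model by at
    least `margin` are treated as discriminative and kept; only the truly
    ambiguous vectors are excluded from model reconstruction.
    """
    kept, discarded = [], []
    for v in vectors:
        d_a = np.linalg.norm(v - mean_a)  # distance to emotion model A
        d_b = np.linalg.norm(v - mean_b)  # distance to emotion model B
        if abs(d_a - d_b) > overlap_thresh:
            kept.append(v)       # non-overlapped: always used
        elif abs(d_a - d_b) > margin:
            kept.append(v)       # overlapped but still discriminative: kept
        else:
            discarded.append(v)  # ambiguous: excluded from reconstruction
    return kept, discarded
```

Unlike the conventional scheme (which would discard every overlapped vector), this sketch keeps the overlapped-but-discriminative middle band, which is the key difference the abstract highlights.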
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Feature Vector Classification based Speech Emotion Recognition for Service Robots | -
dc.type | Article | -
dc.identifier.wosid | 000270358500088 | -
dc.identifier.scopusid | 2-s2.0-70350300961 | -
dc.type.rims | ART | -
dc.citation.volume | 55 | -
dc.citation.beginningpage | 1590 | -
dc.citation.endingpage | 1596 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON CONSUMER ELECTRONICS | -
dc.identifier.doi | 10.1109/TCE.2009.5278031 | -
dc.contributor.localauthor | Oh, Yung-Hwan | -
dc.contributor.nonIdAuthor | Kim, JH | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Speech emotion recognition | -
dc.subject.keywordAuthor | Feature vector classification | -
dc.subject.keywordAuthor | Service robot | -
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
