DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, JS | ko |
dc.contributor.author | Kim, JH | ko |
dc.contributor.author | Oh, Yung-Hwan | ko |
dc.date.accessioned | 2013-03-12T02:04:49Z | - |
dc.date.available | 2013-03-12T02:04:49Z | - |
dc.date.created | 2012-02-06 | - |
dc.date.issued | 2009-08 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, v.55, pp.1590 - 1596 | - |
dc.identifier.issn | 0098-3063 | - |
dc.identifier.uri | http://hdl.handle.net/10203/101045 | - |
dc.description.abstract | This paper proposes an efficient feature vector classification for Speech Emotion Recognition (SER) in service robots. Since service robots interact with diverse users who are in various emotional states, two important issues should be addressed: acoustically similar characteristics between emotions, and variable speaker characteristics due to different user speaking styles. Each of these issues may cause a substantial amount of overlap between emotion models in feature vector space, thus decreasing SER accuracy. In order to reduce the effects of such overlaps, this paper proposes an efficient feature vector classification for SER. The conventional feature vector classification applied to speaker identification categorizes feature vectors as overlapped or non-overlapped. Because this method discards all of the overlapped vectors during model reconstruction, it has limitations in constructing robust models when the number of overlapped vectors increases significantly, as in emotion recognition. The method proposed herein classifies overlapped vectors in a more sophisticated manner, selecting discriminative vectors from among the overlapped vectors and adding them to model reconstruction. In SER experiments on an emotional speech corpus, the proposed classification approach exhibited performance superior to conventional methods and displayed almost human-level performance. In particular, we achieved commercially applicable performance for two-class (negative vs. non-negative) emotion recognition. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Feature Vector Classification based Speech Emotion Recognition for Service Robots | - |
dc.type | Article | - |
dc.identifier.wosid | 000270358500088 | - |
dc.identifier.scopusid | 2-s2.0-70350300961 | - |
dc.type.rims | ART | - |
dc.citation.volume | 55 | - |
dc.citation.beginningpage | 1590 | - |
dc.citation.endingpage | 1596 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON CONSUMER ELECTRONICS | - |
dc.identifier.doi | 10.1109/TCE.2009.5278031 | - |
dc.contributor.localauthor | Oh, Yung-Hwan | - |
dc.contributor.nonIdAuthor | Kim, JH | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Speech emotion recognition | - |
dc.subject.keywordAuthor | Feature vector classification | - |
dc.subject.keywordAuthor | Service robot | - |