DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Eun Ho | ko |
dc.contributor.author | Hyun, Kyung Hak | ko |
dc.contributor.author | Kim, Soohyun | ko |
dc.contributor.author | Kwak, Yoon Keun | ko |
dc.date.accessioned | 2013-03-09T16:39:25Z | - |
dc.date.available | 2013-03-09T16:39:25Z | - |
dc.date.created | 2012-02-06 | - |
dc.date.issued | 2009-06 | - |
dc.identifier.citation | IEEE-ASME TRANSACTIONS ON MECHATRONICS, v.14, no.3, pp.317 - 325 | - |
dc.identifier.issn | 1083-4435 | - |
dc.identifier.uri | http://hdl.handle.net/10203/96882 | - |
dc.description.abstract | Emotion recognition is one of the latest challenges in human-robot interaction. This paper describes the realization of emotional interaction for a Thinking Robot, focusing on speech emotion recognition. In general, speaker-independent systems show lower accuracy than speaker-dependent systems because emotional feature values depend on the speaker and their gender; however, commercial applications require speaker-independent systems. This paper proposes a novel speaker-independent feature, the ratio of the spectral flatness measure to the spectral center (RSS), which varies little across speakers. Gender and emotion are classified hierarchically using the proposed feature (RSS) together with pitch, energy, and the mel-frequency cepstral coefficients. In the speaker-independent mode, the proposed system achieves an average recognition rate of 57.2% (+/- 5.7%) at a 90% confidence interval. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | HUMAN-COMPUTER INTERACTION | - |
dc.subject | SPEECH RECOGNITION | - |
dc.subject | SYSTEM | - |
dc.subject | INTERFACE | - |
dc.subject | STRESS | - |
dc.subject | NOISE | - |
dc.title | Improved Emotion Recognition With a Novel Speaker-Independent Feature | - |
dc.type | Article | - |
dc.identifier.wosid | 000267438400005 | - |
dc.identifier.scopusid | 2-s2.0-67650159383 | - |
dc.type.rims | ART | - |
dc.citation.volume | 14 | - |
dc.citation.issue | 3 | - |
dc.citation.beginningpage | 317 | - |
dc.citation.endingpage | 325 | - |
dc.citation.publicationname | IEEE-ASME TRANSACTIONS ON MECHATRONICS | - |
dc.identifier.doi | 10.1109/TMECH.2008.2008644 | - |
dc.contributor.localauthor | Kim, Soohyun | - |
dc.contributor.localauthor | Kwak, Yoon Keun | - |
dc.contributor.nonIdAuthor | Kim, Eun Ho | - |
dc.contributor.nonIdAuthor | Hyun, Kyung Hak | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Emotional interaction | - |
dc.subject.keywordAuthor | intelligent robots | - |
dc.subject.keywordAuthor | speaker-independent system | - |
dc.subject.keywordAuthor | speech emotion recognition | - |
dc.subject.keywordPlus | HUMAN-COMPUTER INTERACTION | - |
dc.subject.keywordPlus | SPEECH RECOGNITION | - |
dc.subject.keywordPlus | SYSTEM | - |
dc.subject.keywordPlus | INTERFACE | - |
dc.subject.keywordPlus | STRESS | - |
dc.subject.keywordPlus | NOISE | - |
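The abstract defines RSS as the ratio of a spectral flatness measure (SFM) to a spectral center. A minimal sketch of that ratio on a single audio frame is shown below, using the standard definitions of spectral flatness (geometric mean over arithmetic mean of the power spectrum) and spectral centroid. This is an illustrative reconstruction from the abstract only; the paper's exact frame length, windowing, and band limits are assumptions here.

```python
import numpy as np

def rss_feature(frame, sr):
    """RSS-style feature for one audio frame: spectral flatness
    measure divided by spectral center (centroid).
    Illustrative only; parameters are assumptions, not the paper's."""
    # Power spectrum of a Hann-windowed frame
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    spec = spec[1:]  # drop the DC bin so the geometric mean stays well-defined
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)[1:]

    # Spectral flatness: geometric mean / arithmetic mean of the power spectrum
    sfm = np.exp(np.mean(np.log(spec + 1e-12))) / (np.mean(spec) + 1e-12)

    # Spectral center (centroid): power-weighted mean frequency, in Hz
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)

    return sfm / centroid

sr = 16000
t = np.arange(1024) / sr
tone = np.sin(2 * np.pi * 440 * t)                       # tonal: low flatness
noise = np.random.default_rng(0).standard_normal(1024)   # noise: high flatness

print(rss_feature(tone, sr))   # small value for a tonal signal
print(rss_feature(noise, sr))  # larger value for spectrally flat noise
```

As a sanity check, a pure tone yields a much smaller RSS value than white noise, since noise is spectrally flat while a tone is not.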
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.