DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Yoo, Chang-Dong | - |
dc.contributor.advisor | 유창동 | - |
dc.contributor.author | Park, Youn-Sung | - |
dc.contributor.author | 박윤성 | - |
dc.date.accessioned | 2011-12-28T02:18:45Z | - |
dc.date.available | 2011-12-28T02:18:45Z | - |
dc.date.issued | 2010 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=455132&flag=dissertation | - |
dc.identifier.uri | http://hdl.handle.net/10203/54248 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST) : Interdisciplinary Program in Robotics, 2010.08, [ vi, 33 p. ] | - |
dc.description.abstract | In this thesis, a novel emotional singing voice synthesis system is presented. Various approaches have been explored for expressing emotion between a human and a machine or robot, such as varying a robot's facial expression, actions, and synthesized speech. Although singing is known to be an effective way of expressing emotion, no prior research has used singing for this purpose. To synthesize a singing voice with emotion, a statistical parametric synthesis system is used. The system builds its statistical information from a singing database, composed of various melodies sung neutrally with a restricted set of words, and from hidden semi-Markov models (HSMMs) of notes ranging from G3 to E5. The procedure consists of two main parts: training and synthesis. In the training part, spectrum and excitation parameters are extracted from the singing database, and statistical information on these parameters is constructed for each note. The synthesis part takes three steps: (1) pitch and duration are determined according to the notes indicated by the musical score; (2) features are sampled from the appropriate HSMMs with the duration set to the maximum probability; (3) the singing voice is synthesized by a mel-log spectrum approximation (MLSA) filter using the sampled features as its parameters. The emotion of a synthesized song is controlled by varying the duration, the vibrato parameters, and the timbre according to Thayer's mood model, which defines emotions along tension and energy axes. A perception test was performed to evaluate the synthesized songs. The results show that the algorithm can control the expressed emotion of a singing voice given a neutral singing database. | eng |
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | Vibrato model | - |
dc.subject | Emotion expression | - |
dc.subject | Statistical singing voice synthesis | - |
dc.subject | Timbre conversion filter | - |
dc.subject | 음색 변조 필터 | - |
dc.subject | 비브라토 모델 | - |
dc.subject | 감정 표현 | - |
dc.subject | 통계학적 노래합성 | - |
dc.title | Emotional singing voice synthesis by changing duration, vibrato and timbre | - |
dc.title.alternative | 음 길이, 비브라토 그리고 음색의 변화를 이용한 감정 노래 합성 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 455132/325007 | - |
dc.description.department | KAIST : Interdisciplinary Program in Robotics | - |
dc.identifier.uid | 020084053 | - |
dc.contributor.localauthor | Yoo, Chang-Dong | - |
dc.contributor.localauthor | 유창동 | - |
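The abstract describes controlling vibrato parameters on the synthesized pitch contour to convey emotion. As an illustration only (the thesis's actual vibrato model is not given in this record), a common way to model vibrato is a sinusoidal modulation of the F0 contour; the function name and the parameter values below are assumptions, not the author's implementation:

```python
import numpy as np

def apply_vibrato(f0, frame_rate=100.0, rate_hz=5.5, extent_cents=60.0):
    """Apply sinusoidal vibrato to an F0 contour given in Hz.

    f0           : per-frame fundamental frequency; 0 marks unvoiced frames
    frame_rate   : analysis frames per second
    rate_hz      : vibrato rate (oscillations per second)
    extent_cents : peak deviation from the nominal pitch, in cents
    """
    t = np.arange(len(f0)) / frame_rate
    # Convert the cent deviation to a multiplicative factor: 1200 cents = 1 octave.
    modulation = 2.0 ** ((extent_cents / 1200.0) * np.sin(2.0 * np.pi * rate_hz * t))
    # Leave unvoiced frames (f0 == 0) untouched.
    return np.where(f0 > 0.0, f0 * modulation, 0.0)

# Example: a flat 440 Hz contour of 2 seconds gains a 5.5 Hz vibrato.
contour = np.full(200, 440.0)
vibrato_contour = apply_vibrato(contour)
```

Under such a model, "varying the vibrato parameters" amounts to changing `rate_hz` and `extent_cents` per emotion, e.g. a wider, faster vibrato for high-energy emotions.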