DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 김대식 | - |
dc.contributor.author | Bae, Jaesung | - |
dc.contributor.author | 배재성 | - |
dc.date.accessioned | 2024-07-25T19:30:25Z | - |
dc.date.available | 2024-07-25T19:30:25Z | - |
dc.date.issued | 2019 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1044984&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/320439 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering, 2019.2, [iv, 27 p.] | - |
dc.description.abstract | In recent years, many automatic speech recognition (ASR) systems have adopted deep learning approaches, and ASR systems based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) achieve state-of-the-art results on various ASR benchmarks. In particular, owing to their strength in capturing local features, CNNs are widely used in tasks with relatively short time dependencies, such as phoneme-level recognition or command recognition. However, CNNs still have the limitation that they do not consider any spatial relationship between low-level features. We applied the capsule network to overcome this problem. We compared our proposed capsule-network-based speech recognition systems with CNN-based systems on a one-second speech command dataset, and achieved significantly better results than the baseline CNN models in both clean and noisy environments. We also analyze the results by label and noise type. | - |
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | 음성 인식; 컨볼루셔널 뉴럴네트워크; 캡슐 네트워크; 라우팅-바이-어그리먼트; 키워드 인식 | - |
dc.subject | Speech recognition; Convolutional neural network; Capsule network; Routing-by-agreement; Keyword recognition | - |
dc.title | Speech command recognition using capsule network | - |
dc.title.alternative | 캡슐 네트워크를 이용한 음성 단어 인식 시스템 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST) : School of Electrical Engineering | - |
dc.contributor.alternativeadvisor | Kim, Dae-Shik | - |