Learning Self-Informed Feature Contribution for Deep Learning-Based Acoustic Modeling

Cited 2 times in Web of Science; cited 0 times in Scopus
  • Hit: 550
  • Download: 0
DC Field | Value | Language
dc.contributor.author | Kim, Younggwan | ko
dc.contributor.author | Kim, Myungjong | ko
dc.contributor.author | Goo, Jahyun | ko
dc.contributor.author | Kim, Hoirin | ko
dc.date.accessioned | 2018-10-19T00:28:54Z | -
dc.date.available | 2018-10-19T00:28:54Z | -
dc.date.created | 2018-09-19 | -
dc.date.issued | 2018-11 | -
dc.identifier.citation | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, v.26, no.11, pp.2204 - 2214 | -
dc.identifier.issn | 2329-9290 | -
dc.identifier.uri | http://hdl.handle.net/10203/245868 | -
dc.description.abstract | In this paper, we introduce a new feature engineering approach for deep learning-based acoustic modeling that utilizes input feature contributions. For this purpose, we propose an auxiliary deep neural network (DNN), called a feature contribution network (FCN), whose output layer is composed of sigmoid-based contribution gates. In our framework, the FCN learns element-level discriminative contributions of input features, and an acoustic model network (AMN) is trained on gated features generated by element-wise multiplication between the contribution gate outputs and the input features. In addition, we propose a regularization method for the FCN that encourages it to activate the minimum number of gates. The proposed methods were evaluated on the TED-LIUM release 1 corpus. We applied the proposed methods to DNN- and long short-term memory-based AMNs. Experimental results showed that AMNs with FCNs consistently improved recognition performance compared with AMN-only frameworks. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.subject | SPEECH RECOGNITION | -
dc.subject | NEURAL-NETWORKS | -
dc.subject | FEATURE-SELECTION | -
dc.subject | CLASSIFICATION | -
dc.title | Learning Self-Informed Feature Contribution for Deep Learning-Based Acoustic Modeling | -
dc.type | Article | -
dc.identifier.wosid | 000443046300003 | -
dc.identifier.scopusid | 2-s2.0-85050603905 | -
dc.type.rims | ART | -
dc.citation.volume | 26 | -
dc.citation.issue | 11 | -
dc.citation.beginningpage | 2204 | -
dc.citation.endingpage | 2214 | -
dc.citation.publicationname | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING | -
dc.identifier.doi | 10.1109/TASLP.2018.2858923 | -
dc.contributor.localauthor | Kim, Hoirin | -
dc.contributor.nonIdAuthor | Kim, Myungjong | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Acoustic modeling | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordAuthor | feature contribution network | -
dc.subject.keywordAuthor | speech recognition | -
dc.subject.keywordPlus | SPEECH RECOGNITION | -
dc.subject.keywordPlus | NEURAL-NETWORKS | -
dc.subject.keywordPlus | FEATURE-SELECTION | -
dc.subject.keywordPlus | CLASSIFICATION | -
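
The abstract above describes the gating mechanism concretely enough to sketch: an auxiliary feature contribution network (FCN) maps each input frame to per-element sigmoid gates, the acoustic model network (AMN) is trained on the element-wise product of gates and features, and a regularizer pushes the FCN to open as few gates as possible. Below is a minimal PyTorch sketch of that structure; the layer sizes, the 440-dimensional spliced-feature input, the 4000 senone targets, and the L1 gate penalty are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class FeatureContributionNetwork(nn.Module):
        """Auxiliary DNN ending in sigmoid 'contribution gates', one gate per
        input feature element. Hidden width is an assumption, not the paper's."""
        def __init__(self, feat_dim, hidden_dim=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, feat_dim), nn.Sigmoid())

        def forward(self, x):
            return self.net(x)  # gate values in (0, 1), one per feature element

    class GatedAcousticModel(nn.Module):
        """AMN fed with gated features: element-wise product of gates and inputs."""
        def __init__(self, feat_dim, num_states, hidden_dim=1024):
            super().__init__()
            self.fcn = FeatureContributionNetwork(feat_dim)
            self.amn = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, num_states))

        def forward(self, x):
            gates = self.fcn(x)
            return self.amn(gates * x), gates  # element-wise gating of the input

    # One training step: frame-level cross-entropy plus a sparsity penalty on
    # the gates. The L1 term is a hypothetical stand-in for the paper's
    # regularizer that activates the minimum number of gates.
    model = GatedAcousticModel(feat_dim=440, num_states=4000)
    x = torch.randn(8, 440)                 # dummy batch of spliced frames
    targets = torch.randint(0, 4000, (8,))  # dummy senone labels
    logits, gates = model(x)
    loss = nn.functional.cross_entropy(logits, targets) + 1e-4 * gates.abs().mean()
    loss.backward()

Since the gates already lie in (0, 1), the L1 penalty drives unneeded gates toward zero while the cross-entropy term keeps the discriminative ones open; training the FCN and AMN jointly under one objective is what makes the feature contributions "self-informed".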
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.