Rescoring of N-Best Hypotheses Using Top-Down Selective Attention for Automatic Speech Recognition

Cited 3 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Kim, Ho-Gyeong | ko
dc.contributor.author | Lee, Hwaran | ko
dc.contributor.author | Kim, Geonmin | ko
dc.contributor.author | Oh, Sang-Hoon | ko
dc.contributor.author | Lee, Soo-Young | ko
dc.date.accessioned | 2018-01-30T05:48:18Z | -
dc.date.available | 2018-01-30T05:48:18Z | -
dc.date.created | 2018-01-15 | -
dc.date.issued | 2018-02 | -
dc.identifier.citation | IEEE SIGNAL PROCESSING LETTERS, v.25, no.2, pp.199-203 | -
dc.identifier.issn | 1070-9908 | -
dc.identifier.uri | http://hdl.handle.net/10203/239445 | -
dc.description.abstract | In this letter, we propose an N-best rescoring system that integrates attentional information for locally confusing words, extracted from alternative hypotheses, into a conventional speech recognition system. The attentional information is derived by adapting a test input feature for the word of interest, which is motivated by the top-down selective attention mechanism of the brain. To rescore the competing hypotheses, we define a new confidence measure that combines the conventional posterior probability with the attentional information for the confusing words. In addition, a neural network is designed to provide different weights within the confidence measure for each utterance, and is optimized to minimize the word error rate. Tests on the Wall Street Journal and Aurora4 speech recognition tasks were conducted; our best results achieve word error rates of 3.83% and 11.09%, yielding relative reductions of 5.20% and 2.55% over the baselines, respectively. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Rescoring of N-Best Hypotheses Using Top-Down Selective Attention for Automatic Speech Recognition | -
dc.type | Article | -
dc.identifier.wosid | 000418868200004 | -
dc.identifier.scopusid | 2-s2.0-85034243864 | -
dc.type.rims | ART | -
dc.citation.volume | 25 | -
dc.citation.issue | 2 | -
dc.citation.beginningpage | 199 | -
dc.citation.endingpage | 203 | -
dc.citation.publicationname | IEEE SIGNAL PROCESSING LETTERS | -
dc.identifier.doi | 10.1109/LSP.2017.2772828 | -
dc.contributor.localauthor | Lee, Soo-Young | -
dc.contributor.nonIdAuthor | Oh, Sang-Hoon | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Continuous speech recognition | -
dc.subject.keywordAuthor | N-best rescoring | -
dc.subject.keywordAuthor | parameter optimization | -
dc.subject.keywordAuthor | top-down selective attention | -
dc.subject.keywordPlus | DEEP NEURAL-NETWORKS | -
dc.subject.keywordPlus | BRAIN | -
dc.subject.keywordPlus | MODEL | -
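The rescoring idea summarized in the abstract can be sketched roughly as follows. All names, the data layout, and the simple linear combination with a single fixed weight are illustrative assumptions; the paper defines its own confidence measure and learns utterance-dependent weights with a neural network.

```python
# Minimal sketch of N-best rescoring with a combined confidence measure.
# The fixed scalar weight and per-word "attention" scores are assumptions
# for illustration, not the paper's exact formulation.

def rescore_nbest(hypotheses, weight=1.0):
    """Return the hypothesis maximizing posterior + weight * attention.

    hypotheses: list of dicts with keys
      'words'    : list of hypothesized words,
      'posterior': log-posterior score from the first-pass recognizer,
      'attention': per-word attentional scores for the confusing words.
    """
    def confidence(h):
        # Combine the first-pass score with the attentional evidence.
        return h['posterior'] + weight * sum(h['attention'])
    return max(hypotheses, key=confidence)

# Toy 2-best list: the second hypothesis has a slightly better first-pass
# score, but the attentional scores favor the first one.
nbest = [
    {'words': ['recognize', 'speech'],
     'posterior': -4.0, 'attention': [0.9, 0.8]},
    {'words': ['wreck', 'a', 'nice', 'beach'],
     'posterior': -3.8, 'attention': [0.2, 0.1, 0.3, 0.2]},
]
best = rescore_nbest(nbest, weight=1.0)
```

With these toy numbers the attentional term overturns the first-pass ranking, which is the behavior the letter's confidence measure is designed to enable.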
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
