Cross-Language Neural Dialog State Tracker for Large Ontologies Using Hierarchical Attention

Cited 7 times in Web of Science · Cited 0 times in Scopus
  • Hit : 734
  • Download : 0
DC Field | Value | Language
dc.contributor.author | Jang, Youngsoo | ko
dc.contributor.author | Ham, Jiyeon | ko
dc.contributor.author | Lee, Byung-Jun | ko
dc.contributor.author | Kim, Kee-Eung | ko
dc.date.accessioned | 2018-09-18T05:50:19Z | -
dc.date.available | 2018-09-18T05:50:19Z | -
dc.date.created | 2018-08-27 | -
dc.date.issued | 2018-11 | -
dc.identifier.citation | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, v.26, no.11, pp.2072 - 2082 | -
dc.identifier.issn | 2329-9290 | -
dc.identifier.uri | http://hdl.handle.net/10203/245381 | -
dc.description.abstract | Dialog state tracking, which refers to identifying the user intent from utterances, is one of the most important tasks in dialog management. In this paper, we present our dialog state tracker developed for the fifth dialog state tracking challenge, which focused on cross-language adaptation using machine-translated training data that is very scarce compared to the size of the ontology. Our dialog state tracker is based on a bi-directional long short-term memory network with a hierarchical attention mechanism that spots important words in user utterances. The user intent is predicted by finding the ontology keyword closest to the attention-weighted word vector. With the suggested methodology, our tracker overcomes difficulties caused by the scarce training data that existing machine-learning-based trackers suffer from, such as the inability to predict user intents not seen during training. We show that our tracker outperforms the other trackers submitted to the challenge on most of the performance measures. | -
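The matching step the abstract describes — softmax attention over the words of an utterance, then nearest-neighbor lookup against ontology keyword embeddings — can be sketched as follows. This is an illustrative reconstruction with toy one-hot embeddings and made-up slot names, not the authors' implementation; in the paper the attention scores come from a BiLSTM with hierarchical attention.

```python
import numpy as np

def attention_weighted_vector(word_vecs, scores):
    # Softmax the attention scores (here hypothetical; in the paper
    # they come from a BiLSTM), then take the weighted average of
    # the utterance's word vectors.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ word_vecs

def closest_keyword(query_vec, keyword_vecs, keywords):
    # Predict the intent as the ontology keyword whose embedding has
    # the highest cosine similarity with the attention-weighted vector.
    q = query_vec / np.linalg.norm(query_vec)
    K = keyword_vecs / np.linalg.norm(keyword_vecs, axis=1, keepdims=True)
    return keywords[int(np.argmax(K @ q))]

# Toy example: three one-hot "word vectors"; attention strongly favors
# the second word, so the prediction should be the keyword aligned
# with that word. The slot names are hypothetical.
word_vecs = np.eye(3)
scores = np.array([0.1, 5.0, 0.1])
keywords = ["price", "location"]
keyword_vecs = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])
utt_vec = attention_weighted_vector(word_vecs, scores)
print(closest_keyword(utt_vec, keyword_vecs, keywords))  # prints "location"
```

Because prediction is a similarity search over the ontology rather than a fixed-size classifier output, a tracker of this shape can emit keywords that never appeared in the training data — the property the abstract highlights for the scarce cross-language setting.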
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Cross-Language Neural Dialog State Tracker for Large Ontologies Using Hierarchical Attention | -
dc.type | Article | -
dc.identifier.wosid | 000441430600010 | -
dc.identifier.scopusid | 2-s2.0-85049346354 | -
dc.type.rims | ART | -
dc.citation.volume | 26 | -
dc.citation.issue | 11 | -
dc.citation.beginningpage | 2072 | -
dc.citation.endingpage | 2082 | -
dc.citation.publicationname | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING | -
dc.identifier.doi | 10.1109/TASLP.2018.2852492 | -
dc.contributor.localauthor | Kim, Kee-Eung | -
dc.contributor.nonIdAuthor | Ham, Jiyeon | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Dialog state tracking | -
dc.subject.keywordAuthor | attention mechanism | -
dc.subject.keywordAuthor | hierarchical attention mechanism | -
dc.subject.keywordAuthor | long short term memory | -
dc.subject.keywordAuthor | cross language | -
Appears in Collection
AI-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by 7 other documents in Web of Science.
