Latent Question Interpretation Through Parameter Adaptation Using Stochastic Neuron

Abstract
Many neural network-based question-answering models rely on complex attention mechanisms, but they remain limited in their ability to capture the variability of natural language and to generate diverse, reasonable answers. To address this limitation, we propose a module that learns the diversity of possible interpretations of a given question. To identify the answer span corresponding to each interpretation, the parameters of our question-answering model are adapted using the value of a discrete "interpretation neuron". We further formulate a semi-supervised variational inference framework and fine-tune the final policy with policy gradient optimization, using answer accuracy as the reward. We present sample answers together with their induced latent interpretations, suggesting that our model discovers multiple ways of understanding a given question. When evaluated on the Stanford Question Answering Dataset (SQuAD), our model outperforms the baseline, supporting the validity of the proposed approach. We open-source our implementation in PyTorch.
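
The abstract describes adapting the QA model's parameters with a discrete "interpretation neuron". The sketch below (not the authors' released code) illustrates one way such a neuron could work in PyTorch: a straight-through Gumbel-softmax sample selects one of K parameter adaptations applied to a question encoding before span prediction. All module names, dimensions, and the scale-and-shift form of the adaptation are illustrative assumptions.

# Minimal sketch, assuming a discrete interpretation variable sampled with the
# straight-through Gumbel-softmax estimator; names and sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterpretationAdapter(nn.Module):
    def __init__(self, hidden_dim: int = 128, num_interpretations: int = 4):
        super().__init__()
        # Posterior over the discrete interpretation, conditioned on the question encoding.
        self.posterior = nn.Linear(hidden_dim, num_interpretations)
        # One (scale, shift) parameter adaptation per interpretation value.
        self.scales = nn.Parameter(torch.ones(num_interpretations, hidden_dim))
        self.shifts = nn.Parameter(torch.zeros(num_interpretations, hidden_dim))

    def forward(self, question_enc: torch.Tensor, tau: float = 1.0):
        logits = self.posterior(question_enc)              # (B, K)
        # Discrete one-hot sample with a differentiable surrogate for backprop.
        z = F.gumbel_softmax(logits, tau=tau, hard=True)   # (B, K)
        scale = z @ self.scales                            # (B, H)
        shift = z @ self.shifts                            # (B, H)
        adapted = question_enc * scale + shift             # interpretation-specific encoding
        return adapted, z, logits

# Usage: feed the adapted encoding to a downstream span predictor; different
# samples of z yield different answer spans for the same question.
adapter = InterpretationAdapter()
q = torch.randn(2, 128)            # batch of question encodings
adapted, z, logits = adapter(q)

In the paper's setting, the log-likelihood of such a discrete latent would be optimized within the semi-supervised variational framework, and the sampling policy fine-tuned with policy gradients using answer accuracy as the reward.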
Publisher
CEUR-WS
Issue Date
2018-07-13
Language
English
Citation

10th International Workshop Modelling and Reasoning in Context, MRC 2018, pp.46 - 55

ISSN
1613-0073
URI
http://hdl.handle.net/10203/248889
Appears in Collection
EE-Conference Papers (Conference Papers)