Toward Robust Response Selection Model for Cross Negative Sampling Condition

Cited 1 time in Web of Science; cited 0 times in Scopus.
Open-domain dialogue can be formulated as a response selection task: choosing the most appropriate response to a given dialogue context. In this task, neural dialogue models typically predict responses from context-response content similarity, but models that over-rely on this similarity are not robust in real-world scenarios. Various methods have therefore been proposed that train selection models with adversarial negative responses so that they learn features beyond content similarity; however, selection models that rely on adversarial negatives in turn fail to distinguish random negative responses well. Robust selection models should be insensitive to the type of negative responses used. To highlight this problem, this paper demonstrates the need for a novel training method or model architecture by showing that a selection model's performance depends on the type of negative responses seen during training. Training with both random and adversarial negative responses may seem a straightforward remedy, but we observe that this still degrades performance on the random test set. This observation supports the need for selection models that are robust to distribution shift in negative responses.
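The training setup the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the word-overlap score stands in for a neural matching model, and `build_training_pairs` is a hypothetical helper that mixes random negatives (sampled uniformly from a pool) with adversarial negatives (pool responses most similar in content to the context, which a similarity-reliant model finds hard to reject).

```python
import random

def content_similarity(context, response):
    # Toy stand-in for a neural matching score: Jaccard word overlap.
    c, r = set(context.lower().split()), set(response.lower().split())
    return len(c & r) / len(c | r) if c | r else 0.0

def build_training_pairs(context, positive, candidate_pool,
                         n_random=1, n_adversarial=1, seed=0):
    """Build labeled (response, label) pairs with mixed negative types.

    Random negatives: drawn uniformly from the candidate pool.
    Adversarial negatives: pool responses with the highest content
    similarity to the context (hard to reject on similarity alone).
    """
    rng = random.Random(seed)
    pool = [r for r in candidate_pool if r != positive]
    random_negs = rng.sample(pool, n_random)
    adversarial_negs = sorted(
        pool, key=lambda r: content_similarity(context, r), reverse=True
    )[:n_adversarial]
    # Positive response labeled 1; all negatives labeled 0.
    return [(positive, 1)] + [(r, 0) for r in random_negs + adversarial_negs]

pairs = build_training_pairs(
    "do you like jazz music",
    "yes jazz is my favorite",
    ["i like jazz music a lot", "the weather is nice today", "cats sleep all day"],
)
```

The paper's observation is that even a model trained on such mixed batches can lose accuracy when evaluated on purely random negatives, which is what motivates seeking architectures robust across negative-sampling conditions.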
Publisher
IEEE
Issue Date
2022-01
Language
English
Citation

IEEE International Conference on Big Data and Smart Computing (BigComp), pp.395 - 397

ISSN
2375-933X
DOI
10.1109/BigComp54360.2022.00089
URI
http://hdl.handle.net/10203/298323
Appears in Collection
CS-Conference Papers (Conference Papers)