Evaluating Surprise Adequacy for Question Answering

With the wide and rapid adoption of Deep Neural Networks (DNNs) in various domains, an urgent need to validate their behaviour has arisen, resulting in various test adequacy metrics for DNNs. One of these metrics, Surprise Adequacy (SA), aims to measure how surprising a new input is based on its similarity to the data used for training. While SA has been shown to be effective for image classifiers based on Convolutional Neural Networks (CNNs), it has not been studied in the Natural Language Processing (NLP) domain. This paper applies SA to NLP, in particular to the question answering task: the aim is to investigate whether SA correlates well with the correctness of answers. An empirical evaluation using the widely used Stanford Question Answering Dataset (SQuAD) shows that SA can work well as a test adequacy metric for the question answering task.
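The core idea of Likelihood-based Surprise Adequacy can be illustrated with a minimal sketch: estimate the density of the training inputs' activation traces with a Gaussian kernel density estimate, and score a new input by the negative log density of its own trace. The function name, the fixed bandwidth, and the single-layer activation shapes below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def likelihood_sa(train_acts, new_act, bandwidth=1.0):
    """Sketch of Likelihood-based Surprise Adequacy (LSA).

    train_acts: (n, d) activation traces of training inputs at a chosen layer.
    new_act:    (d,) activation trace of the new input.
    Surprise is the negative log of a Gaussian KDE density estimate,
    so inputs far from the training activations get high surprise.
    Bandwidth is a simplifying assumption; practical KDEs select it from data.
    """
    diffs = train_acts - new_act                    # (n, d) offsets to each training trace
    sq_dist = np.sum(diffs ** 2, axis=1)            # squared Euclidean distances
    kernels = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    density = kernels.mean() + 1e-12                # small constant avoids log(0)
    return -np.log(density)

# A point near the training distribution scores lower surprise than an outlier.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 8))
near = likelihood_sa(train, np.zeros(8))
far = likelihood_sa(train, np.full(8, 6.0))
assert near < far
```

In the question answering setting evaluated here, the activation traces would come from the QA model's hidden layers, and the hypothesis under test is that higher surprise correlates with incorrect answers.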
Publisher
Association for Computing Machinery, Inc
Issue Date
2020-06-27
Language
English
Citation

42nd IEEE/ACM International Conference on Software Engineering Workshops, ICSEW 2020

DOI
10.1145/3387940.3391465
URI
http://hdl.handle.net/10203/277204
Appears in Collection
CS-Conference Papers (Conference Papers)
