Evaluating surprise adequacy on natural language processing

With the broad and rapid adoption of Deep Neural Networks (DNNs) in various domains, an urgent need to validate their behaviour has arisen, resulting in various test adequacy metrics for DNNs. One such metric, Surprise Adequacy (SA), aims to measure how surprising a new input is based on its similarity to the data used for training. While SA has been shown to be effective for image classifiers based on Convolutional Neural Networks (CNNs), it has not been studied in the Natural Language Processing (NLP) domain. This paper applies SA to NLP, in particular to three tasks: text classification, sequence labelling, and question answering. The aim is to investigate whether SA correlates well with the correctness of the outputs. SA also enables prioritisation of failing inputs, thus helping to reduce the high cost of labelling. An empirical evaluation shows that SA can generally work as a test adequacy metric in NLP, especially for classification tasks.
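The abstract summarises SA only at a high level. For concreteness, below is a minimal NumPy sketch of the Distance-based Surprise Adequacy (DSA) variant introduced in the original SA work (Kim, Feldt, and Yoo, ICSE 2019), which this thesis builds on. It is an illustrative sketch, not the thesis's implementation: all names (`dsa`, `train_ats`, etc.) are hypothetical, and extraction of activation traces from the DNN is assumed to have been done separately.

```python
import numpy as np

def dsa(train_ats, train_labels, new_at, new_label):
    """Distance-based Surprise Adequacy (DSA) for a single input.

    train_ats:    (N, D) activation traces of the training inputs
    train_labels: (N,)   classes predicted for the training inputs
    new_at:       (D,)   activation trace of the new input
    new_label:    class predicted for the new input
    """
    same = train_ats[train_labels == new_label]
    other = train_ats[train_labels != new_label]

    # Nearest training trace that shares the new input's predicted class.
    dists_same = np.linalg.norm(same - new_at, axis=1)
    x_a = same[np.argmin(dists_same)]
    dist_a = dists_same.min()

    # Distance from that neighbour to the closest trace of any other class.
    dist_b = np.linalg.norm(other - x_a, axis=1).min()

    # Higher DSA means the input lies closer to a class boundary
    # relative to the training data, i.e. it is more surprising.
    return dist_a / dist_b
```

DSA only applies to classification-style outputs, which is one reason the thesis's question-answering task is the harder case; the likelihood-based variant (LSA), which scores the negative log-density of an activation trace under a kernel density estimate fitted on the training traces, is the usual alternative when class labels are unavailable.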
Advisors
Yoo, Shin
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2020
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2020.8, [iv, 32 p.]

Keywords

Deep Learning; Natural Language Processing; Software Testing

URI
http://hdl.handle.net/10203/284994
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925155&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
