Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction

Cited 5 times in Web of Science; cited 0 times in Scopus.
Abstract
Many automated test generation techniques have been developed to aid developers in writing tests. To facilitate full automation, most existing techniques aim either to increase coverage or to generate exploratory inputs. However, existing test generation techniques largely fall short of achieving more semantic objectives, such as generating tests to reproduce a given bug report. Reproducing bugs is nonetheless important: our empirical study shows that the number of tests added to open-source repositories due to issues was about 28% of the size of the corresponding project test suites. Meanwhile, because it is difficult to transform the expected program semantics described in bug reports into test oracles, existing failure reproduction techniques tend to deal exclusively with program crashes, a small subset of all bug reports. To automate test generation from general bug reports, we propose Libro, a framework that uses Large Language Models (LLMs), which have been shown to be capable of performing code-related tasks. Since LLMs themselves cannot execute the target buggy code, we focus on post-processing steps that help us discern when LLMs are effective, and that rank the produced tests according to their validity. Our evaluation of Libro on the widely studied Defects4J benchmark shows that Libro can generate failure-reproducing test cases for 33% of all studied cases (251 out of 750), while suggesting a bug-reproducing test in first place for 149 bugs. To mitigate data contamination (i.e., the possibility of the LLM simply remembering the test code either partially or in whole), we also evaluate Libro against 31 bug reports submitted after the collection of the LLM training data terminated: Libro produces bug-reproducing tests for 32% of these bug reports. Overall, our results show that Libro has the potential to significantly enhance developer efficiency by automatically generating tests from bug reports.
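
To make the workflow described above concrete, the following is a minimal Python sketch of a Libro-style pipeline. It is not the authors' implementation: query_llm and run_test are hypothetical stand-ins for an LLM API call and a JUnit execution harness, and the ranking step is simplified to agreement among sampled tests (the paper's ranking uses richer heuristics, such as comparing test output against the bug report).

    from collections import Counter

    PROMPT_TEMPLATE = """\
    # Bug report
    Title: {title}
    Description: {description}

    # Provide a JUnit test method that reproduces the bug above.
    """

    def query_llm(prompt: str, n_samples: int = 10) -> list[str]:
        """Hypothetical LLM call: returns n_samples candidate test methods."""
        return ["public void testIssue() { /* ... */ }"] * n_samples  # stub

    def run_test(test_code: str) -> str:
        """Hypothetical harness: compiles the test into the buggy project and
        runs it. Returns 'fail' (bug reproduced), 'pass', or 'error' (the
        test does not compile)."""
        return "fail"  # stub

    def reproduce(title: str, description: str) -> list[str]:
        """Generate candidate tests from a bug report and rank the valid ones."""
        prompt = PROMPT_TEMPLATE.format(title=title, description=description)
        candidates = query_llm(prompt)
        # Keep only tests that compile and fail on the buggy version: since
        # the LLM cannot execute code itself, a failing run is the evidence
        # that the reported bug was actually reproduced.
        failing = [t for t in candidates if run_test(t) == "fail"]
        # Rank identical tests by how often they were sampled: agreement
        # among independent samples serves as a proxy for validity.
        return [t for t, _ in Counter(failing).most_common()]

    if __name__ == "__main__":
        for test in reproduce("NPE in Parser.parse()",
                              "Calling parse(null) throws NullPointerException"):
            print(test)

In this sketch, post-processing carries all the weight: generation is cheap, so many samples are drawn, and only executable evidence (a test that fails on the buggy code) is surfaced to the developer.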
Publisher
IEEE
Issue Date
2023-05-17
Language
English
Citation
2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)
ISSN
0270-5257
DOI
10.1109/ICSE48619.2023.00194
URI
http://hdl.handle.net/10203/314355
Appears in Collection
CS-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.