DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Yoo, Shin | - |
dc.contributor.advisor | 유신 | - |
dc.contributor.author | Kim, Junhwi | - |
dc.date.accessioned | 2019-09-04T02:47:09Z | - |
dc.date.available | 2019-09-04T02:47:09Z | - |
dc.date.issued | 2018 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734098&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/267066 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2018.2, [iv, 31 p.] | - |
dc.description.abstract | Test data generation is a tedious and laborious process. Search-Based Software Testing (SBST) automatically generates test data that optimises structural test criteria using metaheuristic algorithms. In essence, metaheuristic algorithms perform systematic trial and error guided by the feedback of a fitness function. This resembles a reinforcement learning agent, which iteratively chooses an action based on the current state so as to maximise the cumulative reward. Inspired by this analogy, this paper presents an approach that employs reinforcement learning in SBST to replace human-designed metaheuristic algorithms. We reformulate the software under test (SUT) as an environment for reinforcement learning, and we present Gunpowder, a novel framework for SBST that extends the SUT into such an environment. We trained a Double Deep Q-Network (DDQN) agent with a deep neural network and evaluated the effectiveness of our approach in an empirical study. We find that agents can learn metaheuristic algorithms for SBST, achieving 100% branch coverage on the trained functions. Our architecture sheds light on the future integration of deep neural networks and SBST. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Software testing; reinforcement learning; testing automation; search-based software testing; search-based software engineering | - |
dc.subject | Software testing; reinforcement learning; search-based software testing; testing automation | - |
dc.title | Generating test input with deep reinforcement learning | - |
dc.title.alternative | Test input generation using reinforcement learning | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST): School of Computing | - |
dc.contributor.alternativeauthor | 김준휘 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.