DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kim, Daeyoung | - |
dc.contributor.advisor | 김대영 | - |
dc.contributor.author | Cho, Seungju | - |
dc.date.accessioned | 2021-05-13T19:38:23Z | - |
dc.date.available | 2021-05-13T19:38:23Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=925163&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/285002 | - |
dc.description | Master's thesis - 한국과학기술원 : 전산학부, 2020.8, [iv, 31 p.] | - |
dc.description.abstract | In this paper, we focus on defense models against adversarial attacks on deep learning models for image classification and semantic segmentation. Deep learning is now used in many fields and shows excellent performance in computer vision tasks such as image classification and semantic segmentation. However, deep learning models have been found to be highly vulnerable to small perturbations designed to fool them, known as adversarial attacks. In this paper, we propose preprocessing methods that neutralize the effect of adversarial attacks: for image classification, a preprocessing method based on tensor decomposition, and for semantic segmentation, a denoise autoencoder. The model can be defended without modification, and our results show that relatively simple methods can defend against adversarial attacks. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | Computer Vision; Adversarial attack | - |
dc.subject | 컴퓨터 비전; 적대적 공격 | - |
dc.title | Denoise autoencoder and tensor decomposition for the robustness against adversarial attacks | - |
dc.title.alternative | 잡음제거 오토인코더와 텐서 분해를 통해 적대적 공격에 대응한 방어 모델에 관한 연구 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | 한국과학기술원 : 전산학부 | - |
dc.contributor.alternativeauthor | 조승주 | - |
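The abstract describes preprocessing by tensor decomposition to neutralize adversarial perturbations before classification. As an illustration only, not the thesis's exact method, here is a minimal sketch of the general idea using truncated SVD (the simplest matrix/tensor decomposition) with NumPy; the function name `low_rank_denoise` and the synthetic image are assumptions for this example:

```python
import numpy as np

def low_rank_denoise(image, rank):
    """Project a 2-D image onto its top-`rank` singular components.

    A low-rank reconstruction discards the small, high-frequency
    components where adversarial perturbations tend to concentrate,
    illustrating decomposition-based preprocessing.
    """
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# Synthetic example: a rank-1 "clean" image plus a small perturbation.
rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
noisy = clean + 0.01 * rng.standard_normal((32, 32))
denoised = low_rank_denoise(noisy, rank=1)
```

Because the classifier never needs retraining, such a preprocessing step can be placed in front of any existing model, which matches the abstract's claim that "the model can be defended without modification."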