Domain-adaptive and robust learning based on adversarial training = 도메인 변화 및 왜곡에 강건한 적대적 학습 기반 규제 알고리즘

Abstract
Although a large number of studies have demonstrated the ability of deep neural networks to solve challenging tasks, these networks behave abnormally when given target data generated from a shifted distribution. For example, a network trained on data from a specific domain is prone to over-fitting, which makes it difficult to transfer to other domains. Moreover, a trained network is vulnerable to human-imperceptible adversarial noise, which calls its stability into question. In this study, two adversarial regularization algorithms are proposed to solve the aforementioned problems. The main contributions of this thesis are as follows. First, we situate domain adaptation in the context of information theory, based on the mutual information between the domain label variable and the learned representations. Building on this analysis, we propose an adversarial training algorithm that can exploit domain-shared information with a single discriminator. Second, inspired by the noise distribution experimentally observed in the brain, an adversarial training algorithm is proposed to make networks robust to adversarial noise.
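As a rough illustration of the two ideas named in the abstract (domain-adversarial training with a single domain discriminator, and adversarial training against small input perturbations), below is a minimal PyTorch-style sketch. It is not the thesis's algorithm: the encoder/classifier/discriminator architectures, the FGSM-style perturbation, and all loss weights and hyperparameters are illustrative assumptions, and the mutual-information formulation described in the abstract is only approximated here by a standard domain-confusion term.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative modules; sizes are arbitrary (e.g., flattened 28x28 inputs, 10 classes).
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
classifier = nn.Linear(64, 10)       # task head
discriminator = nn.Linear(64, 1)     # single domain discriminator: source vs. target

opt_model = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def fgsm_perturb(x, y, eps=0.1):
    # One gradient-sign step toward higher task loss (human-imperceptible for small eps).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(classifier(encoder(x_adv)), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def train_step(x_src, y_src, x_tgt):
    # (1) Train the discriminator to separate source (label 1) from target (label 0) features.
    with torch.no_grad():
        z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(z_src), torch.ones(len(x_src), 1))
              + F.binary_cross_entropy_with_logits(discriminator(z_tgt), torch.zeros(len(x_tgt), 1)))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # (2) Train encoder + classifier on clean and adversarially perturbed source data,
    #     while pushing target features to fool the discriminator (domain confusion).
    x_adv = fgsm_perturb(x_src, y_src)
    task_loss = (F.cross_entropy(classifier(encoder(x_src)), y_src)
                 + F.cross_entropy(classifier(encoder(x_adv)), y_src))
    confusion = F.binary_cross_entropy_with_logits(
        discriminator(encoder(x_tgt)), torch.ones(len(x_tgt), 1))
    loss = task_loss + 0.1 * confusion   # 0.1 is an arbitrary trade-off weight
    opt_model.zero_grad(); loss.backward(); opt_model.step()
    return loss.item()

In this generic setup, the encoder is trained both on perturbed inputs (robustness) and to produce features the discriminator cannot attribute to a domain (domain-shared information); the thesis itself motivates the latter via the mutual information between domain labels and representations.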
Advisors
Lee, Sang Wan (이상완)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description
Master's thesis - Korea Advanced Institute of Science and Technology: Department of Bio and Brain Engineering, 2021.2, [iv, 53 p.]

Keywords

domain adaptation; robust learning; information regularization; adversarial training; artificial neural network algorithms

URI
http://hdl.handle.net/10203/295277
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=948582&flag=dissertation
Appears in Collection
BiS-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
