Regularization based on negative view for robust unsupervised domain adaptation

Abstract
In Unsupervised Domain Adaptation (UDA), numerous works have leveraged the attention mechanism of Vision Transformers (ViTs) in addition to Convolutional Neural Networks (CNNs). ViT-based approaches have notably outperformed CNN-based counterparts, yet the patch-based structure of ViT poses a challenge: ViT relies heavily on local features within image patches, which diminishes its robustness to out-of-distribution (OOD) samples. To address this, we introduce an unsupervised regularizer tailored for UDA. Our approach generates images with disrupted spatial context, termed negative views, by applying negative augmentation to target-domain samples. We then devise the Negative View-based Contrastive (NVC) regularizer, which separates the negative views from the original target samples in latent space. When integrated into existing UDA methods, the regularizer encourages ViT to attend to contextual relations between local patches, enhancing its robustness. The NVC regularizer is readily applicable to the target domain, which lacks labels, and it improves the performance of existing baseline UDA methods on a variety of established benchmarks. Furthermore, we introduce a novel dataset, Retail-71, comprising 71 classes of images of products commonly found in convenience stores. The domain gap between the source and target domains in Retail-71 stems from hand occlusion and motion blur in the samples, so higher accuracy of a tested model indicates better robustness to these corruptions. Our experiments demonstrate the effectiveness of the NVC regularizer on this new dataset as well as on established benchmarks. Collectively, the results showcase the effectiveness of the proposed regularizer in enhancing the robustness of transformers in the UDA setting.
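The abstract does not specify implementation details. A minimal sketch of the two ingredients it describes, assuming patch shuffling as the negative augmentation and a mean cosine-similarity penalty as the contrastive term (the function names and the specific loss form here are illustrative, not the thesis's exact formulation):

```python
import numpy as np

def negative_view(image, patch_size, rng=None):
    """Create a 'negative view' by shuffling image patches, destroying
    spatial context while keeping local patch content intact.
    image: (H, W, C) array; H and W must be multiples of patch_size."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    gh, gw = h // patch_size, w // patch_size
    # Split into a grid of patches, shuffle the grid order, reassemble.
    patches = (image
               .reshape(gh, patch_size, gw, patch_size, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, patch_size, patch_size, c))
    shuffled = patches[rng.permutation(gh * gw)]
    return (shuffled
            .reshape(gh, gw, patch_size, patch_size, c)
            .transpose(0, 2, 1, 3, 4)
            .reshape(h, w, c))

def nvc_regularizer(z_orig, z_neg, eps=1e-8):
    """Penalize similarity between embeddings of original target samples
    and their negative views, pushing them apart in latent space.
    z_orig, z_neg: (B, D) feature batches from the encoder."""
    z1 = z_orig / (np.linalg.norm(z_orig, axis=1, keepdims=True) + eps)
    z2 = z_neg / (np.linalg.norm(z_neg, axis=1, keepdims=True) + eps)
    cos = np.sum(z1 * z2, axis=1)   # per-sample cosine similarity
    return float(np.mean(cos))      # lower means better separation
```

In training, `nvc_regularizer` would be added (with some weight) to the baseline UDA objective on unlabeled target batches, with `z_orig` and `z_neg` produced by the ViT encoder from a target image and its shuffled counterpart.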
Advisors
Daeshik Kim (김대식)
Description
Korea Advanced Institute of Science and Technology : School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology : School of Electrical Engineering, 2024.2, [vi, 46 p.]

Keywords

Deep learning; Unsupervised domain adaptation; Negative augmentation; Vision transformer

URI
http://hdl.handle.net/10203/321583
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1096801&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
