Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision

Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation. However, these approaches rely heavily on annotated data, which is labor-intensive to produce. To cope with this limitation, automatically annotated data generated from graphics engines are used to train segmentation models. However, models trained on synthetic data transfer poorly to real images. To tackle this issue, previous works have considered directly adapting models from the source data to the unlabeled target data (to reduce the inter-domain gap). Nonetheless, these techniques do not consider the large distribution gap within the target data itself (the intra-domain gap). In this work, we propose a two-step self-supervised domain adaptation approach that minimizes the inter-domain and intra-domain gaps together. First, we conduct inter-domain adaptation of the model; based on this adaptation, we separate the target domain into an easy and a hard split using an entropy-based ranking function. Then, to decrease the intra-domain gap, we apply a self-supervised adaptation technique from the easy to the hard subdomain. Experimental results on numerous benchmark datasets highlight the effectiveness of our method against existing state-of-the-art approaches. The source code is available at https://github.com/feipan664/IntraDA.git.
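The entropy-based ranking step described in the abstract can be illustrated with a minimal sketch: score each target image by the mean normalized entropy of its softmax prediction map, then put the lowest-entropy (most confident) fraction into the easy split. Function and parameter names, and the default split ratio `lam`, are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def entropy_rank_split(prob_maps, lam=0.67):
    """Split target images into easy/hard subdomains by prediction entropy.

    prob_maps: list of (C, H, W) softmax probability maps, one per image.
    lam: fraction of images assigned to the easy split (illustrative default).
    Returns (easy_ids, hard_ids), indices ordered by ascending entropy.
    """
    scores = []
    for p in prob_maps:
        # Pixel-wise Shannon entropy, normalized by log(C) so that
        # scores are comparable across different numbers of classes.
        ent = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(p.shape[0])
        scores.append(ent.mean())
    order = np.argsort(scores)  # low entropy = confident = "easy"
    k = int(lam * len(order))
    return order[:k].tolist(), order[k:].tolist()
```

In the paper's pipeline, the easy split then provides pseudo-labels for a second, intra-domain adaptation stage targeting the hard split.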
Publisher
IEEE Conference on Computer Vision and Pattern Recognition
Issue Date
2020-06
Language
English
Citation

IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, pp.3763 - 3772

ISSN
1063-6919
DOI
10.1109/CVPR42600.2020.00382
URI
http://hdl.handle.net/10203/278669
Appears in Collection
EE-Conference Papers
Files in This Item
There are no files associated with this item.
