RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening

Cited 118 times in Web of Science; cited 0 times in Scopus
Enhancing the generalization capability of deep neural networks to unseen domains is crucial for safety-critical real-world applications such as autonomous driving. To address this issue, this paper proposes a novel instance selective whitening loss that improves the robustness of segmentation networks on unseen domains. Our approach disentangles the domain-specific style and domain-invariant content encoded in the higher-order statistics (i.e., the feature covariance) of the feature representations, and selectively removes only the style information that causes domain shift. As shown in Fig. 1, our method provides reasonable predictions for (a) low-illumination, (b) rainy, and (c) structurally unseen scenes. Such images are absent from the training dataset; on them the baseline shows a significant performance drop, whereas our method does not. Being simple yet effective, our approach improves the robustness of various backbone networks without additional computational cost. We conduct extensive experiments on urban-scene segmentation and show the superiority of our approach over existing work. Our code is available at this link(1).
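The core idea in the abstract can be sketched numerically: instance-standardize a feature map, compute its channel covariance, identify covariance entries that vary strongly between two differently styled (e.g., photometrically augmented) views of the same image, and penalize only those entries. The snippet below is a minimal NumPy illustration of that selection-and-suppression scheme, not the paper's actual implementation; the function names, the top-`k` selection rule, and the averaging of the two covariances are simplifying assumptions made here for clarity.

```python
import numpy as np

def instance_cov(feat, eps=1e-5):
    """Channel covariance of an instance-standardized feature map.

    feat: array of shape (C, HW) -- one image's features, flattened
    over the spatial dimensions. Returns a (C, C) covariance matrix
    whose diagonal is ~1 after standardization.
    """
    mu = feat.mean(axis=1, keepdims=True)
    sigma = feat.std(axis=1, keepdims=True) + eps
    x = (feat - mu) / sigma                      # instance standardization
    return (x @ x.T) / feat.shape[1]             # (C, C) covariance

def selective_whitening_loss(cov_a, cov_b, k):
    """Penalize the k off-diagonal covariance entries that differ most
    between two augmented views (treated here as style-sensitive).

    cov_a, cov_b: (C, C) covariances of the same image under two
    style perturbations. Returns a scalar loss >= 0.
    """
    diff = (cov_a - cov_b) ** 2                  # sensitivity to style change
    iu = np.triu_indices_from(diff, k=1)         # strict upper triangle
    thresh = np.sort(diff[iu])[-k]               # top-k most style-variant
    mask = np.zeros_like(diff)
    mask[iu] = diff[iu] >= thresh
    mask = mask + mask.T                         # symmetric selection
    selected = mask > 0
    # Drive the selected (style-carrying) covariances toward zero,
    # leaving the unselected (content-carrying) ones untouched.
    return np.abs((cov_a + cov_b) / 2)[selected].mean()

# Toy usage: same content, two simulated style perturbations.
rng = np.random.default_rng(0)
content = rng.normal(size=(8, 256))              # (C=8, HW=256)
view_a = content * 1.3 + 0.5                     # brightness/contrast shift
view_b = content * 0.8 - 0.2
loss = selective_whitening_loss(instance_cov(view_a),
                                instance_cov(view_b), k=5)
```

Only the selected entries contribute to the loss, which matches the abstract's claim of removing style information selectively rather than whitening the full covariance (which would also destroy domain-invariant content correlations).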
Publisher
IEEE Computer Vision and Pattern Recognition
Issue Date
2021-06-19
Language
English
Citation

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.11575 - 11585

ISSN
1063-6919
DOI
10.1109/CVPR46437.2021.01141
URI
http://hdl.handle.net/10203/290426
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
