Improving Transferability of Representations via Augmentation-Aware Self-Supervision

DC Field | Value | Language
dc.contributor.author | Lee, Hankook | ko
dc.contributor.author | Lee, Kibok | ko
dc.contributor.author | Lee, Kimin | ko
dc.contributor.author | Lee, Honglak | ko
dc.contributor.author | Shin, Jinwoo | ko
dc.date.accessioned | 2021-12-09T06:48:45Z | -
dc.date.available | 2021-12-09T06:48:45Z | -
dc.date.created | 2021-12-02 | -
dc.date.issued | 2021-12-07 | -
dc.identifier.citation | 35th Conference on Neural Information Processing Systems, NeurIPS 2021 | -
dc.identifier.uri | http://hdl.handle.net/10203/290298 | -
dc.description.abstract | Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering. However, such invariance can be harmful to downstream tasks if they rely on characteristics of the data augmentations, e.g., if they are location- or color-sensitive. This is not an issue only for unsupervised learning; we found that it occurs even in supervised learning, because it also learns to predict the same label for all augmented samples of an instance. To avoid such failures and obtain more generalizable representations, we suggest optimizing an auxiliary self-supervised loss, coined AugSelf, that learns the difference of augmentation parameters (e.g., cropping positions, color adjustment intensities) between two randomly augmented samples. Our intuition is that AugSelf encourages preserving augmentation-aware information in learned representations, which can be beneficial to their transferability. Furthermore, AugSelf can easily be incorporated into recent state-of-the-art representation learning methods at negligible additional training cost. Extensive experiments demonstrate that our simple idea consistently improves the transferability of representations learned by supervised and unsupervised methods in various transfer learning scenarios. The code is available at https://github.com/hankook/AugSelf. | -
dc.language | English | -
dc.publisher | Neural Information Processing Systems | -
dc.title | Improving Transferability of Representations via Augmentation-Aware Self-Supervision | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85128644297 | -
dc.type.rims | CONF | -
dc.citation.publicationname | 35th Conference on Neural Information Processing Systems, NeurIPS 2021 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Virtual | -
dc.contributor.localauthor | Lee, Kimin | -
dc.contributor.localauthor | Shin, Jinwoo | -
dc.contributor.nonIdAuthor | Lee, Kibok | -
dc.contributor.nonIdAuthor | Lee, Honglak | -
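The abstract describes AugSelf as an auxiliary self-supervised loss that predicts the difference of augmentation parameters between two randomly augmented views of the same instance. The following is a minimal, hypothetical sketch of that idea (it is not the authors' implementation, which is at the GitHub URL above): a stand-in encoder, a toy parameterized "augmentation", and a regression head that predicts the parameter difference from the two embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: input, embedding, and number of augmentation
# parameters (e.g., crop position, color-jitter intensity).
D_IN, D_EMB, D_AUG = 8, 16, 4

# Stand-in encoder: a fixed random linear map with a nonlinearity.
W_enc = rng.normal(size=(D_IN, D_EMB))

def encode(x):
    return np.tanh(x @ W_enc)

def augment(x, params):
    # Toy "augmentation": shift the input by the (zero-padded) parameters.
    shift = np.pad(params, (0, D_IN - D_AUG))
    return x + shift

def augself_loss(x, head, rng):
    # Sample two sets of augmentation parameters -> two random views.
    p1 = rng.uniform(-1, 1, D_AUG)
    p2 = rng.uniform(-1, 1, D_AUG)
    z1, z2 = encode(augment(x, p1)), encode(augment(x, p2))
    # Regress the parameter difference from the concatenated embeddings,
    # so the embeddings must retain augmentation-aware information.
    pred = np.concatenate([z1, z2]) @ head
    target = p1 - p2
    return np.mean((pred - target) ** 2)

head = rng.normal(size=(2 * D_EMB, D_AUG)) * 0.1  # auxiliary prediction head
x = rng.normal(size=D_IN)                          # one "image"
loss = augself_loss(x, head, rng)                  # scalar MSE
```

In practice this auxiliary loss would be added, with a weighting coefficient, to the main supervised or self-supervised objective (e.g., SimCLR or MoCo), and the encoder and head would be trained jointly by gradient descent; the sketch only illustrates the loss computation itself.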
Appears in Collection
AI-Conference Papers (conference papers)
Files in This Item
There are no files associated with this item.
