Self-supervised Label Augmentation via Input Transformations

Cited 89 times in Web of Science; cited 0 times in Scopus.
  • Hits: 233
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Lee, Hankook | ko
dc.contributor.author | Hwang, Sung Ju | ko
dc.contributor.author | Shin, Jinwoo | ko
dc.date.accessioned | 2020-12-11T05:10:22Z | -
dc.date.available | 2020-12-11T05:10:22Z | -
dc.date.created | 2020-12-02 | -
dc.date.issued | 2020-07-15 | -
dc.identifier.citation | 37th International Conference on Machine Learning, ICML 2020 | -
dc.identifier.issn | 2640-3498 | -
dc.identifier.uri | http://hdl.handle.net/10203/278214 | -
dc.description.abstract | Self-supervised learning, which learns by constructing artificial labels given only the input signals, has recently gained considerable attention for learning representations from unlabeled datasets, i.e., learning without any human-annotated supervision. In this paper, we show that such a technique can be used to significantly improve the model accuracy even on fully-labeled datasets. Our scheme trains the model to learn both the original and self-supervised tasks, but differs from conventional multi-task learning frameworks that optimize the summation of their corresponding losses. Our main idea is to learn a single unified task with respect to the joint distribution of the original and self-supervised labels, i.e., we augment the original labels via self-supervision of input transformations. This simple yet effective approach makes models easier to train by relaxing a certain invariance constraint while learning the original and self-supervised tasks simultaneously. It also enables aggregated inference, which combines the predictions from different augmentations to improve prediction accuracy. Furthermore, we propose a novel knowledge transfer technique, which we refer to as self-distillation, that achieves the effect of aggregated inference in a single (faster) inference. We demonstrate the large accuracy improvement and wide applicability of our framework in various fully-supervised settings, e.g., few-shot and imbalanced classification scenarios. | -
dc.language | English | -
dc.publisher | ICML 2020 committee | -
dc.title | Self-supervised Label Augmentation via Input Transformations | -
dc.type | Conference | -
dc.identifier.wosid | 000683178505078 | -
dc.identifier.scopusid | 2-s2.0-85092595793 | -
dc.type.rims | CONF | -
dc.citation.publicationname | 37th International Conference on Machine Learning, ICML 2020 | -
dc.identifier.conferencecountry | AU | -
dc.identifier.conferencelocation | Virtual | -
dc.contributor.localauthor | Hwang, Sung Ju | -
dc.contributor.localauthor | Shin, Jinwoo | -
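
To make the method described in dc.description.abstract above concrete, below is a minimal PyTorch-style sketch of label augmentation over the joint (class, transformation) space and of aggregated inference. It is not the authors' implementation: the use of 90-degree rotations as the input transformations, the names (JointClassifier, sla_loss, aggregated_predict), and the choice to average softmax scores during aggregation are illustrative assumptions based only on the abstract.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_CLASSES = 10     # original label space (assumption, e.g. a 10-class dataset)
    NUM_TRANSFORMS = 4   # 0/90/180/270-degree rotations (assumed transformations)

    class JointClassifier(nn.Module):
        """Backbone plus a single head over the joint label space (classes x transforms)."""
        def __init__(self, backbone, feat_dim):
            super().__init__()
            self.backbone = backbone
            self.head = nn.Linear(feat_dim, NUM_CLASSES * NUM_TRANSFORMS)

        def forward(self, x):
            return self.head(self.backbone(x))  # joint logits, shape (batch, classes * transforms)

    def rotate_batch(x):
        """Stack the batch rotated by 0/90/180/270 degrees along the batch dimension."""
        return torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(NUM_TRANSFORMS)], dim=0)

    def sla_loss(model, x, y):
        """Single unified loss over the joint distribution of (class, transform) labels."""
        xs = rotate_batch(x)  # shape (transforms * batch, channels, H, W)
        ts = torch.arange(NUM_TRANSFORMS, device=x.device).repeat_interleave(x.size(0))
        joint_y = y.repeat(NUM_TRANSFORMS) * NUM_TRANSFORMS + ts  # joint label index
        return F.cross_entropy(model(xs), joint_y)

    @torch.no_grad()
    def aggregated_predict(model, x):
        """Aggregated inference: average class scores over all transformed copies."""
        B = x.size(0)
        logits = model(rotate_batch(x)).view(NUM_TRANSFORMS, B, NUM_CLASSES, NUM_TRANSFORMS)
        # For the copy rotated by t, read the joint-head entries pairing each class with t.
        per_t = torch.stack([logits[t, :, :, t] for t in range(NUM_TRANSFORMS)], dim=0)
        return F.softmax(per_t, dim=2).mean(dim=0).argmax(dim=1)  # predicted original class

The self-distillation step mentioned in the abstract would, under the same reading, add a separate NUM_CLASSES-way head trained to match the aggregated scores, so that a single forward pass approximates aggregated_predict; it is omitted from this sketch.
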
Appears in Collections: RIMS Conference Papers
Files in This Item: There are no files associated with this item.