DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Hankook | ko |
dc.contributor.author | Hwang, Sung Ju | ko |
dc.contributor.author | Shin, Jinwoo | ko |
dc.date.accessioned | 2020-12-11T05:10:22Z | - |
dc.date.available | 2020-12-11T05:10:22Z | - |
dc.date.created | 2020-12-02 | - |
dc.date.issued | 2020-07-15 | - |
dc.identifier.citation | 37th International Conference on Machine Learning, ICML 2020 | - |
dc.identifier.issn | 2640-3498 | - |
dc.identifier.uri | http://hdl.handle.net/10203/278214 | - |
dc.description.abstract | Self-supervised learning, which learns by constructing artificial labels given only the input signals, has recently gained considerable attention for learning representations with unlabeled datasets, i.e., learning without any human-annotated supervision. In this paper, we show that such a technique can be used to significantly improve model accuracy even on fully-labeled datasets. Our scheme trains the model to learn both the original and self-supervised tasks, but differs from conventional multi-task learning frameworks that optimize the summation of their corresponding losses. Our main idea is to learn a single unified task with respect to the joint distribution of the original and self-supervised labels, i.e., we augment the original labels via self-supervision of input transformation. This simple yet effective approach makes models easier to train by relaxing a certain invariance constraint while learning the original and self-supervised tasks simultaneously. It also enables an aggregated inference, which combines the predictions from different augmentations to improve prediction accuracy. Furthermore, we propose a novel knowledge transfer technique, which we refer to as self-distillation, that attains the effect of the aggregated inference in a single (faster) inference. We demonstrate the large accuracy improvement and wide applicability of our framework in various fully-supervised settings, e.g., few-shot and imbalanced classification scenarios. | - |
dc.language | English | - |
dc.publisher | ICML 2020 committee | - |
dc.title | Self-supervised Label Augmentation via Input Transformations | - |
dc.type | Conference | - |
dc.identifier.wosid | 000683178505078 | - |
dc.identifier.scopusid | 2-s2.0-85092595793 | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | 37th International Conference on Machine Learning, ICML 2020 | - |
dc.identifier.conferencecountry | AU | - |
dc.identifier.conferencelocation | Virtual | - |
dc.contributor.localauthor | Hwang, Sung Ju | - |
dc.contributor.localauthor | Shin, Jinwoo | - |
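
The abstract above describes the paper's core recipe: replace separate original and self-supervised losses with a single softmax over the joint (class, transformation) label space, then aggregate per-transformation predictions at test time. Below is a minimal PyTorch sketch of that recipe under assumed settings (rotation as the input transformation, a 10-class problem, and illustrative names such as `JointClassifier`, `joint_loss`, and `aggregated_inference` that are not from the authors' released code).

```python
# A minimal sketch of self-supervised label augmentation, assuming rotation
# (0/90/180/270 degrees) as the transformation and a generic feature backbone.
# Not the authors' implementation; names and settings are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10   # assumed: a CIFAR-10-like, 10-class problem
ROTATIONS = 4      # 0, 90, 180, 270 degrees

class JointClassifier(nn.Module):
    """Backbone plus a linear head over the joint (class, rotation) labels."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, NUM_CLASSES * ROTATIONS)

    def forward(self, x):
        return self.head(self.backbone(x))

def rotate_batch(x: torch.Tensor, k: int) -> torch.Tensor:
    # Rotate a batch of images (N x C x H x W) by k * 90 degrees.
    return torch.rot90(x, k, dims=(2, 3))

def joint_loss(model: JointClassifier, x: torch.Tensor, y: torch.Tensor):
    # Single unified task: cross-entropy over the N*M joint label space,
    # rather than a summation of separate per-task losses.
    losses = []
    for k in range(ROTATIONS):
        logits = model(rotate_batch(x, k))
        joint_y = y * ROTATIONS + k  # augmented label: (class, rotation) pair
        losses.append(F.cross_entropy(logits, joint_y))
    return sum(losses) / ROTATIONS

@torch.no_grad()
def aggregated_inference(model: JointClassifier, x: torch.Tensor):
    # Aggregated inference: average the class probabilities obtained from
    # each rotated copy, reading off the logit slice matching that rotation.
    probs = 0.0
    for k in range(ROTATIONS):
        logits = model(rotate_batch(x, k))                    # N x (C*M)
        logits = logits.view(x.size(0), NUM_CLASSES, ROTATIONS)
        probs = probs + F.softmax(logits[..., k], dim=1)      # rotation k's slice
    return probs / ROTATIONS
```

The self-distillation step mentioned in the abstract would add a second, single-label head trained to match the aggregated probabilities, recovering their accuracy benefit in one forward pass; it is omitted here for brevity.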