Learning from Better Supervision: Self-distillation for Learning with Noisy Labels

Cited 3 times in Web of Science; cited 0 times in Scopus
  • Hits: 107
  • Downloads: 0
DC Field: Value (Language)
dc.contributor.author: Baek, Kyungjune (ko)
dc.contributor.author: Lee, Seungho (ko)
dc.contributor.author: Shim, Hyunjung (ko)
dc.date.accessioned: 2023-03-14T03:15:33Z
dc.date.available: 2023-03-14T03:15:33Z
dc.date.created: 2023-03-08
dc.date.issued: 2022-08-23
dc.identifier.citation: 26th International Conference on Pattern Recognition / 8th International Workshop on Image Mining - Theory and Applications (IMTA), pp. 1829-1835
dc.identifier.issn: 1051-4651
dc.identifier.uri: http://hdl.handle.net/10203/305605
dc.description.abstract: The remarkable performance of deep neural networks relies heavily on large-scale datasets with high-quality annotations. Since data collection processes such as web crawling naturally involve unreliable supervision (i.e., noisy labels), handling samples with noisy labels has been actively studied. Existing methods for learning with noisy labels (LNL) 1) develop sampling strategies to filter out noisy labels or 2) devise loss functions that are robust to noisy labels. As a result of these efforts, existing LNL models achieve impressive performance, recording a higher accuracy than the ratio of clean samples in the dataset. Based on this observation, we propose a self-distillation framework that utilizes the predictions of existing LNL models and further improves performance via rectified distillation: hard pseudo-labels and feature distillation. Our rectified distillation can easily be applied to existing LNL models, so we can build on their state-of-the-art performance. Extensive evaluations confirm that our model is effective on both synthetic and real noisy datasets, achieving state-of-the-art performance on four benchmark datasets.
dc.language: English
dc.publisher: IEEE
dc.title: Learning from Better Supervision: Self-distillation for Learning with Noisy Labels
dc.type: Conference
dc.identifier.wosid: 000897707601117
dc.identifier.scopusid: 2-s2.0-85143638816
dc.type.rims: CONF
dc.citation.beginningpage: 1829
dc.citation.endingpage: 1835
dc.citation.publicationname: 26th International Conference on Pattern Recognition / 8th International Workshop on Image Mining - Theory and Applications (IMTA)
dc.identifier.conferencecountry: CA
dc.identifier.conferencelocation: Montreal
dc.identifier.doi: 10.1109/ICPR56361.2022.9956388
dc.contributor.localauthor: Shim, Hyunjung
dc.contributor.nonIdAuthor: Baek, Kyungjune
dc.contributor.nonIdAuthor: Lee, Seungho
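The abstract describes rectified distillation as two parts: hard pseudo-labels taken from an existing LNL model (the teacher) and feature distillation. A minimal sketch of how such a combined loss could look is below; this is an illustrative reconstruction, not the authors' implementation, and the function name, the L2 feature term, and the `alpha` weighting are all assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def rectified_distillation_loss(student_logits, student_feat,
                                teacher_logits, teacher_feat, alpha=0.5):
    """Illustrative sketch (not the paper's code): cross-entropy on the
    teacher's hard pseudo-labels plus an assumed L2 feature-distillation
    term, weighted by a hypothetical coefficient alpha."""
    # Hard pseudo-labels: argmax over the teacher's (LNL model's) predictions
    pseudo = teacher_logits.argmax(axis=1)
    probs = softmax(student_logits)
    n = student_logits.shape[0]
    # Cross-entropy of the student against the hard pseudo-labels
    ce = -np.log(probs[np.arange(n), pseudo] + 1e-12).mean()
    # Feature distillation: pull student features toward teacher features
    feat = ((student_feat - teacher_feat) ** 2).mean()
    return ce + alpha * feat
```

In this sketch, a student that already matches the teacher's predictions and features incurs only a small residual cross-entropy, while diverging features increase the loss through the second term.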
Appears in Collection
AI-Conference Papers (학술대회논문: Conference Papers)
Files in This Item
There are no files associated with this item.
