Joint Negative and Positive Learning for Noisy Labels

Cited 17 times in Web of Science; cited 0 times in Scopus
DC Field / Value / Language
dc.contributor.author: Kim, Youngdong (ko)
dc.contributor.author: Yun, Juseung (ko)
dc.contributor.author: Shon, Hyounguk (ko)
dc.contributor.author: Kim, Junmo (ko)
dc.date.accessioned: 2022-11-21T03:00:31Z
dc.date.available: 2022-11-21T03:00:31Z
dc.date.created: 2022-11-19
dc.date.issued: 2021-06
dc.identifier.citation: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021, pp. 9437-9446
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10203/300213
dc.description.abstract: Training Convolutional Neural Networks (CNNs) on data with noisy labels is known to be challenging. Because directly providing the label to the data (Positive Learning; PL) risks allowing CNNs to memorize contaminated labels when the data are noisy, the indirect learning approach that uses complementary labels (Negative Learning for Noisy Labels; NLNL) has proven highly effective in preventing overfitting to noisy data, as it reduces the risk of providing a faulty target. NLNL, however, relies on a three-stage pipeline to improve convergence; as a result, filtering noisy data through the NLNL pipeline is cumbersome and increases the training cost. In this study, we propose a novel improvement of NLNL, named Joint Negative and Positive Learning (JNPL), which unifies the filtering pipeline into a single stage. JNPL trains a CNN via two losses, NL+ and PL+, which improve upon the NL and PL loss functions, respectively. We analyze a fundamental issue of the NL loss function and develop a new NL+ loss function whose gradient enhances convergence on noisy data. Furthermore, the PL+ loss function is designed to enable faster convergence on expected-to-be-clean data. We show that NL+ and PL+ train the CNN simultaneously, significantly simplifying the pipeline and allowing greater ease of practical use compared to NLNL. With a simple semi-supervised training technique, our method achieves state-of-the-art accuracy for noisy-data classification based on its superior filtering ability.
dc.language: English
dc.publisher: IEEE Computer Society
dc.title: Joint Negative and Positive Learning for Noisy Labels
dc.type: Conference
dc.identifier.wosid: 000742075007049
dc.identifier.scopusid: 2-s2.0-85123167151
dc.type.rims: CONF
dc.citation.beginningpage: 9437
dc.citation.endingpage: 9446
dc.citation.publicationname: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Virtual
dc.identifier.doi: 10.1109/CVPR46437.2021.00932
dc.contributor.localauthor: Kim, Junmo
dc.contributor.nonIdAuthor: Kim, Youngdong
dc.contributor.nonIdAuthor: Yun, Juseung
dc.contributor.nonIdAuthor: Shon, Hyounguk
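The abstract contrasts Positive Learning (training with "this input belongs to class y") against Negative Learning (training with a complementary label, "this input does not belong to class ȳ"). A minimal NumPy sketch of those two basic loss functions, assuming the standard formulations from the NLNL line of work (it does not implement the NL+/PL+ losses proposed in this paper; the function names and toy probability vector are illustrative):

```python
import numpy as np

def softmax(logits):
    """Convert raw scores to a probability distribution."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def pl_loss(probs, label):
    """Positive Learning: cross-entropy for 'input belongs to class `label`'."""
    return -np.log(probs[label])

def nl_loss(probs, comp_label):
    """Negative Learning: loss for 'input does NOT belong to `comp_label`'.
    Minimized by pushing the complementary class's probability toward
    zero, so a randomly chosen (and likely correct) complementary label
    is a far safer target than a possibly noisy positive label."""
    return -np.log(1.0 - probs[comp_label])

# Toy example: the model is confident in class 0.
probs = softmax(np.array([2.0, 0.5, -1.0]))
confident_pl = pl_loss(probs, 0)      # small: prediction matches the label
negative_ok = nl_loss(probs, 2)       # small: class 2 already has low probability
negative_bad = nl_loss(probs, 0)      # large: complementary label hits the confident class
```

The asymmetry visible here is the point the abstract makes: a wrong positive label forces the network toward a faulty target, whereas a wrong complementary label merely suppresses one (usually already unlikely) class.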
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
