Extending Contrastive Learning to Unsupervised Redundancy Identification

dc.contributor.author: Ju, Jeongwoo (ko)
dc.contributor.author: Jung, Heechul (ko)
dc.contributor.author: Kim, Junmo (ko)
dc.date.accessioned: 2022-04-15T06:43:46Z
dc.date.available: 2022-04-15T06:43:46Z
dc.date.created: 2022-03-14
dc.date.issued: 2022-02
dc.identifier.citation: APPLIED SCIENCES-BASEL, v.12, no.4
dc.identifier.issn: 2076-3417
dc.identifier.uri: http://hdl.handle.net/10203/294764
dc.description.abstract: Modern deep neural network (DNN)-based approaches deliver strong performance on computer vision tasks; however, they incur a massive annotation cost due to their data-hungry nature. Hence, given a fixed budget and a pool of unlabeled examples, improving the quality of the examples to be annotated is a sensible step toward good DNN generalization. One key issue that can hurt example quality is redundancy, in which most examples exhibit similar visual context (e.g., the same background). Redundant examples barely contribute to performance yet still incur annotation cost. Hence, identifying redundancy prior to the annotation process is a key step to avoiding unnecessary cost. In this work, we show that a coreset score based on cosine similarity (cossim) is effective for identifying redundant examples. This is because the collective gradient magnitude over redundant examples is large compared to that of the other examples, so contrastive learning first reduces the loss on the redundant set; consequently, cossim over the redundant set is high (i.e., its coreset score is low). We thus cast redundancy identification in terms of gradient magnitude. In this way, we effectively removed redundant examples from two datasets (KITTI, BDD10K), yielding better performance on object detection and semantic segmentation.
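The abstract's cosine-similarity coreset score can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding input, the 1 − max-cossim scoring rule, and the function name `coreset_scores` are assumptions for illustration only; the idea shown is simply that examples with a near-duplicate (high cossim to some other example) receive a low score and can be dropped before annotation.

```python
import numpy as np

def coreset_scores(embeddings: np.ndarray) -> np.ndarray:
    """Score each example by 1 minus its max cosine similarity to any
    other example; low scores flag redundancy (a near-duplicate exists).
    This scoring rule is an illustrative assumption, not the paper's code."""
    # L2-normalize rows so dot products equal cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # ignore self-similarity
    return 1.0 - sim.max(axis=1)

# Toy example: two near-duplicate feature vectors and one distinct vector.
feats = np.array([[1.0, 0.0],
                  [0.999, 0.01],
                  [0.0, 1.0]])
scores = coreset_scores(feats)
# The two redundant examples score low; the distinct one scores high,
# so sorting by descending score prioritizes diverse examples for annotation.
keep_order = np.argsort(scores)[::-1]
```

In practice the embeddings would come from a contrastively trained encoder, so that visually redundant examples (e.g., frames sharing a background) map to nearby points.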
dc.language: English
dc.publisher: MDPI
dc.title: Extending Contrastive Learning to Unsupervised Redundancy Identification
dc.type: Article
dc.identifier.wosid: 000762708900001
dc.identifier.scopusid: 2-s2.0-85125135685
dc.type.rims: ART
dc.citation.volume: 12
dc.citation.issue: 4
dc.citation.publicationname: APPLIED SCIENCES-BASEL
dc.identifier.doi: 10.3390/app12042201
dc.contributor.localauthor: Kim, Junmo
dc.contributor.nonIdAuthor: Jung, Heechul
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: redundancy identification
dc.subject.keywordAuthor: convolutional neural network (CNN)
dc.subject.keywordAuthor: semantic segmentation
dc.subject.keywordAuthor: object detection
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
