Joint Learning of Generative Translator and Classifier for Visually Similar Classes

Cited 1 time in Web of Science. Cited 1 time in Scopus.
  • Hit : 325
  • Download : 281
DC Field                        Value                                     Language
dc.contributor.author           Yoo, Byungin                              ko
dc.contributor.author           Sylvain, Tristan                          ko
dc.contributor.author           Bengio, Yoshua                            ko
dc.contributor.author           Kim, Junmo                                ko
dc.date.accessioned             2021-01-04T08:50:12Z
dc.date.available               2021-01-04T08:50:12Z
dc.date.created                 2021-01-04
dc.date.issued                  2020
dc.identifier.citation          IEEE ACCESS, v.8, pp.219160 - 219173
dc.identifier.issn              2169-3536
dc.identifier.uri               http://hdl.handle.net/10203/279458
dc.description.abstract         In this paper, we propose a Generative Translation Classification Network (GTCN) for improving visual classification accuracy in settings where classes are visually similar and data is scarce. For this purpose, we propose joint learning from scratch to train a classifier and a generative stochastic translation network end-to-end. The translation network is used to perform online data augmentation across classes, whereas previous works have mostly involved domain adaptation. To help the model further benefit from this data augmentation, we introduce an adaptive fade-in loss and a quadruplet loss. We perform experiments on multiple datasets to demonstrate the proposed method's performance in varied settings. Of particular interest, training on 40% of the dataset is enough for our model to surpass the performance of baselines trained on the full dataset. When our architecture is trained on the full dataset, we achieve comparable performance with state-of-the-art methods despite using a lightweight architecture.
dc.language                     English
dc.publisher                    IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title                        Joint Learning of Generative Translator and Classifier for Visually Similar Classes
dc.type                         Article
dc.identifier.wosid             000597792200001
dc.identifier.scopusid          2-s2.0-85097774811
dc.type.rims                    ART
dc.citation.volume              8
dc.citation.beginningpage       219160
dc.citation.endingpage          219173
dc.citation.publicationname     IEEE ACCESS
dc.identifier.doi               10.1109/ACCESS.2020.3042302
dc.contributor.localauthor      Kim, Junmo
dc.contributor.nonIdAuthor      Sylvain, Tristan
dc.contributor.nonIdAuthor      Bengio, Yoshua
dc.description.isOpenAccess     Y
dc.type.journalArticle          Article
dc.subject.keywordAuthor        Training
dc.subject.keywordAuthor        Data models
dc.subject.keywordAuthor        Training data
dc.subject.keywordAuthor        Visualization
dc.subject.keywordAuthor        Adaptation models
dc.subject.keywordAuthor        Semisupervised learning
dc.subject.keywordAuthor        Faces
dc.subject.keywordAuthor        Artificial neural networks
dc.subject.keywordAuthor        feature extraction
dc.subject.keywordAuthor        image classification
dc.subject.keywordAuthor        image generation
dc.subject.keywordAuthor        pattern analysis
dc.subject.keywordAuthor        semisupervised learning
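
The abstract above names two training-time components, an adaptive fade-in loss and a quadruplet loss, without giving their formulas. Below is a minimal PyTorch sketch of how such components are commonly defined: a generic quadruplet loss (a triplet loss with an added negative-pair term) and a linear fade-in weight that gradually ramps up losses computed on generated (translated) samples. All function names, margins, and the schedule are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    # Generic quadruplet loss over embedding batches of shape (B, D).
    # term1 is the standard triplet hinge; term2 additionally pushes a
    # pair of negatives apart relative to the anchor-positive distance.
    d_ap = F.pairwise_distance(anchor, positive)   # anchor-positive
    d_an = F.pairwise_distance(anchor, neg1)       # anchor-negative
    d_nn = F.pairwise_distance(neg1, neg2)         # negative-negative
    term1 = F.relu(d_ap - d_an + margin1)
    term2 = F.relu(d_ap - d_nn + margin2)
    return (term1 + term2).mean()

def fade_in_weight(step, ramp_steps=10000):
    # Linearly ramp a loss weight from 0 to 1 over ramp_steps updates, so
    # that losses on generated samples only take full effect once the
    # translation network produces usable images.
    return min(1.0, step / ramp_steps)

A plausible way to combine these during joint training, with hypothetical tensor names for the real and generated classification losses and the quadruplet embeddings:

    loss = ce_real + fade_in_weight(step) * (ce_generated + quadruplet_loss(a, p, n1, n2))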