Incremental learning with unlabeled data in the wild

We propose to leverage a continuous, large stream of unlabeled data in the wild to alleviate catastrophic forgetting in class-incremental learning. Our experimental results on the CIFAR and ImageNet datasets demonstrate that the proposed methods outperform prior approaches: compared to the state-of-the-art method, ours achieves up to 14.9% higher accuracy and 45.9% less forgetting.
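The abstract does not spell out how the unlabeled stream is used; a common mechanism in this line of work is knowledge distillation, where the model trained on previous classes acts as a teacher and its predictions on unlabeled samples constrain the new model. The sketch below is illustrative only (the function names are hypothetical and not the paper's code): it shows why unlabeled inputs suffice, since the distillation loss compares two model outputs and needs no ground-truth labels.

```python
import numpy as np

def softmax(logits, T=2.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # which is standard practice in distillation.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's and student's softened outputs,
    # computed on unlabeled inputs: no ground-truth labels are needed,
    # so any in-the-wild data the teacher can score is usable.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)
    return float(np.mean(kl))

# Toy example: logits for 4 unlabeled samples over 3 previously learned classes.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 3))
student_same = teacher.copy()           # identical predictions -> loss ~ 0
student_diff = rng.normal(size=(4, 3))  # diverging predictions -> loss > 0

print(distillation_loss(student_same, teacher))
print(distillation_loss(student_diff, teacher))
```

Minimizing this term while training on the new classes' labeled data penalizes drift away from the old model's behavior, which is one way such methods reduce forgetting.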
Publisher
IEEE Computer Society
Issue Date
2019-06
Language
English
Citation

32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2019), pp. 29-32

ISSN
2160-7508
URI
http://hdl.handle.net/10203/311140
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
