DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning

Cited 2 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Shon, Hyounguk | ko
dc.contributor.author | Lee, Janghyeon | ko
dc.contributor.author | Kim, Seung Hwan | ko
dc.contributor.author | Kim, Junmo | ko
dc.date.accessioned | 2022-11-21T02:00:18Z | -
dc.date.available | 2022-11-21T02:00:18Z | -
dc.date.created | 2022-11-19 | -
dc.date.issued | 2022-10 | -
dc.identifier.citation | European Conference on Computer Vision, ECCV 2022, pp. 513-529 | -
dc.identifier.issn | 0302-9743 | -
dc.identifier.uri | http://hdl.handle.net/10203/300176 | -
dc.description.abstract | Pre-trained representations are one of the key elements in the success of modern deep learning. However, existing work on continual learning has mostly focused on learning models incrementally from scratch. In this paper, we explore an alternative framework for incremental learning in which we continually fine-tune the model from a pre-trained representation. Our method takes advantage of the linearization of a pre-trained neural network for simple and effective continual learning. We show that this yields a linear model for which quadratic parameter regularization is the optimal continual learning policy, while still enjoying the high performance of neural networks. We also show that the proposed algorithm enables parameter regularization methods to be applied to class-incremental problems. Additionally, we provide a theoretical reason why existing parameter-space regularization algorithms such as EWC underperform on neural networks trained with cross-entropy loss. We show that the proposed method prevents forgetting while achieving high continual fine-tuning performance on image classification tasks. To demonstrate that our method applies to general continual learning settings, we evaluate it on data-incremental, task-incremental, and class-incremental learning problems. | -
dc.language | English | -
dc.publisher | Springer Verlag | -
dc.title | DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning | -
dc.type | Conference | -
dc.identifier.wosid | 000903572500030 | -
dc.identifier.scopusid | 2-s2.0-85142688160 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 513 | -
dc.citation.endingpage | 529 | -
dc.citation.publicationname | European Conference on Computer Vision, ECCV 2022 | -
dc.identifier.conferencecountry | IS | -
dc.identifier.conferencelocation | Tel Aviv | -
dc.identifier.doi | 10.1007/978-3-031-19827-4_30 | -
dc.contributor.localauthor | Kim, Junmo | -
dc.contributor.nonIdAuthor | Lee, Janghyeon | -
dc.contributor.nonIdAuthor | Kim, Seung Hwan | -
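
The abstract above hinges on two technical ingredients: a first-order linearization of the pre-trained network, which makes the model linear in its parameters, and a quadratic parameter regularizer as the continual learning policy. The sketch below is an illustrative reading of that idea only, not the authors' code; `apply_fn`, `params0`, `anchor`, `fisher`, and `lam` are all assumed names.

```python
import jax
import jax.numpy as jnp

def make_linearized(apply_fn, params0):
    """First-order Taylor expansion of apply_fn around the pre-trained
    parameters params0:  f_lin(x; p) = f(x; p0) + J_p f(x; p0) @ (p - p0).
    The output is linear in p, which is what lets a quadratic parameter
    penalty act as the (per the abstract, optimal) continual learning policy."""
    def f_lin(params, x):
        delta = jax.tree_util.tree_map(jnp.subtract, params, params0)
        y0, jvp_out = jax.jvp(lambda p: apply_fn(p, x), (params0,), (delta,))
        return y0 + jvp_out
    return f_lin

def quadratic_penalty(params, anchor, fisher, lam):
    """EWC-style quadratic regularizer: (lam / 2) * sum_i F_i * (p_i - a_i)^2."""
    terms = jax.tree_util.tree_map(
        lambda p, a, f: jnp.sum(f * (p - a) ** 2), params, anchor, fisher)
    return 0.5 * lam * sum(jax.tree_util.tree_leaves(terms))

# Toy usage with a hypothetical one-layer model.
params0 = {"w": jnp.ones((4, 3)), "b": jnp.zeros((3,))}
apply_fn = lambda p, x: x @ p["w"] + p["b"]
f_lin = make_linearized(apply_fn, params0)
x = jnp.ones((2, 4))
assert jnp.allclose(f_lin(params0, x), apply_fn(params0, x))  # agree at p = p0
```

Here `jax.jvp` evaluates the Jacobian-vector product without materializing the full Jacobian, which is what keeps linearized fine-tuning tractable for large networks.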
Appears in Collection
EE - Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.