Continual learning with linearized deep neural networks

We propose a continual learning algorithm that effectively mitigates the catastrophic forgetting that occurs when a deep neural network is trained on multiple tasks sequentially. Our method leverages pre-trained neural networks for effective continual learning. Based on the observation that quadratic parameter regularization achieves the optimal continual learning policy for linear models, our algorithm $\textit{linearizes}$ the neural network and applies a quadratic penalty to its parameters, weighted by an estimate of the Fisher information matrix. We show that the proposed method prevents forgetting while achieving high performance on image classification tasks, and that it applies to both data-incremental and task-incremental learning problems.
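As a rough illustration of the idea described in the abstract (a minimal sketch, not the thesis code; the PyTorch setup, the diagonal Fisher approximation, and helper names such as estimate_diag_fisher and lam are assumptions): linearizing the network around its pre-trained weights $\theta_0$ gives $f_{\text{lin}}(x; \theta) = f(x; \theta_0) + \nabla_\theta f(x; \theta_0)^\top (\theta - \theta_0)$, under which an EWC-style quadratic penalty $\frac{\lambda}{2} \sum_i F_i (\theta_i - \theta_i^*)^2$ with a Fisher estimate $F$ could be computed as follows:

```python
import torch
import torch.nn.functional as F


def estimate_diag_fisher(model, loader, device="cpu"):
    # Diagonal Fisher estimate: mean squared gradient of the per-batch
    # log-likelihood over the current task's data (an assumption; the
    # thesis may use a different Fisher estimator).
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.nll_loss(F.log_softmax(model(x), dim=1), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(loader) for n, f in fisher.items()}


def quadratic_penalty(model, anchor_params, fisher, lam=1.0):
    # lam/2 * sum_i F_i * (theta_i - theta*_i)^2, where theta* are the
    # parameters snapshotted at the linearization point (e.g. after
    # pre-training or after the previous task).
    penalty = sum((fisher[n] * (p - anchor_params[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return 0.5 * lam * penalty
```

In such a setup, training on a new task would minimize task_loss + quadratic_penalty(model, anchor_params, fisher), with anchor_params = {n: p.detach().clone() for n, p in model.named_parameters()} saved before the task switch.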
Advisors
Kim, Junmo (김준모)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2022.2, [iii, 21 p.]

URI
http://hdl.handle.net/10203/309865
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=997205&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
