Dual network based complementary learning system for continual learning

Catastrophic forgetting is a well-known problem when training neural networks in the continual learning setting. Most research addresses it by training a single network; in our work, we explore a dual-network approach. We propose a brain-inspired complementary dual-network model for continual learning that comprises a fast learner and a slow consolidator. The fast learner first adapts to a new task seen only once, and the slow consolidator then absorbs the new task's information from the fast learner via knowledge distillation. The two networks are trained in an alternating manner. To consolidate the learning of a new task with the learning of past tasks, we maintain a small memory of each task for replay during the training of the slow consolidator. In addition, we apply a context-based gating mechanism to the slow consolidator and empirically demonstrate its positive impact on the performance of the proposed model. We show improved results of the proposed model on several classification datasets. © 2021 IEEE.
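The training loop described in the abstract (fast learner adapts to the new task, then the slow consolidator distills from it while replaying a small per-task memory, with a per-task gate applied on the consolidator) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear softmax models, the input-level binary gate, the buffer size, and all hyperparameters are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class LinearNet:
    """Toy linear softmax classifier standing in for either network."""
    def __init__(self, d_in, d_out):
        self.W = np.zeros((d_in, d_out))

    def logits(self, X):
        return X @ self.W

    def sgd_step(self, X, target_probs, lr=0.1):
        # cross-entropy gradient toward (possibly soft) target distributions
        grad = X.T @ (softmax(self.logits(X)) - target_probs) / len(X)
        self.W -= lr * grad

# small replay memory: a few (inputs, targets, gate) exemplars per past task
memory = []

def train_task(fast, slow, X, Y_onehot, gate, epochs=50):
    # 1) fast learner adapts to the new task on its own
    for _ in range(epochs):
        fast.sgd_step(X, Y_onehot)
    # 2) slow consolidator distills the fast learner's soft predictions,
    #    alternating with replay of stored past-task exemplars
    teacher = softmax(fast.logits(X))
    for _ in range(epochs):
        slow.sgd_step(X * gate, teacher)        # distillation, gated input
        if memory:
            Xm, Ym, gm = memory[rng.integers(len(memory))]
            slow.sgd_step(Xm * gm, Ym)          # replay a past-task batch
    memory.append((X[:8], Y_onehot[:8], gate))  # keep a small exemplar set
```

Here the "context-based gating" is simplified to a fixed binary mask on the consolidator's input per task; in a deep network the gate would instead select a task-specific subset of hidden units.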
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2021-07
Language
English
Citation

2021 IEEE/CIC International Conference on Communications in China, ICCC Workshops 2021, pp.112 - 117

ISSN
2474-9133
DOI
10.1109/ICCCWorkshops52231.2021.9538861
URI
http://hdl.handle.net/10203/288645
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
