Continual learning using image-conditional prompt for vision-language pre-training

Deep learning has achieved remarkable performance in several areas, but fine-tuning pre-trained models on new data can lead to catastrophic forgetting. To address this, we propose a novel image-conditional prompt learning approach to continual learning, inspired by the human working memory system. Our approach eliminates the need for data storage buffers and prompt pools: only a lightweight MLP is trained to generate prompts, without updating the entire model. Leveraging CLIP-based models allows us to align vision and text, enabling comprehensive multi-modal learning. In addition, our approach uses regularization and knowledge distillation to retain prior knowledge while adapting to new tasks.
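The abstract's two core components can be sketched in a few lines: a lightweight MLP maps a frozen image feature to prompt token embeddings, and a knowledge-distillation loss keeps the adapted model close to the previous-task model. This is a minimal NumPy illustration, not the thesis code; all dimensions, function names, and the two-layer MLP shape are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the thesis): a CLIP-like image
# feature of size 512 mapped to 4 prompt tokens of embedding size 512.
FEAT_DIM, N_PROMPTS, EMB_DIM = 512, 4, 512

# Lightweight MLP: the only trainable part; the CLIP backbone stays frozen.
W1 = rng.normal(0.0, 0.02, (FEAT_DIM, 256))
W2 = rng.normal(0.0, 0.02, (256, N_PROMPTS * EMB_DIM))

def generate_prompts(image_feat):
    """Map a frozen image feature to image-conditional prompt embeddings."""
    h = np.maximum(image_feat @ W1, 0.0)           # ReLU hidden layer
    return (h @ W2).reshape(N_PROMPTS, EMB_DIM)    # one prompt token per row

def distillation_loss(new_logits, old_logits, temperature=2.0):
    """KL divergence between temperature-softened outputs of the current
    model and the frozen previous-task model, to retain old knowledge."""
    def softmax(x):
        e = np.exp((x - x.max()) / temperature)
        return e / e.sum()
    p_old, p_new = softmax(old_logits), softmax(new_logits)
    return float(np.sum(p_old * (np.log(p_old) - np.log(p_new))))

image_feat = rng.normal(size=FEAT_DIM)
prompts = generate_prompts(image_feat)             # shape (4, 512)
loss = distillation_loss(rng.normal(size=10), rng.normal(size=10))
```

In a full pipeline, the generated `prompts` would be prepended to the text token embeddings before the CLIP text encoder, and the distillation term would be added to the task loss when training on each new task.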
Advisors
예종철 (Jong Chul Ye)
Description
Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - KAIST: Kim Jaechul Graduate School of AI, 2023.8, [iii, 29 p.]

Keywords

Continual learning; Vision-language pre-training; Regularization; Knowledge distillation; Prompt learning

URI
http://hdl.handle.net/10203/320535
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045723&flag=dissertation
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
