DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 예종철 | - |
dc.contributor.author | Hwang, Hyunmin | - |
dc.contributor.author | 황현민 | - |
dc.date.accessioned | 2024-07-25T19:30:45Z | - |
dc.date.available | 2024-07-25T19:30:45Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045723&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/320535 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology: Kim Jaechul Graduate School of AI, 2023.8, [iii, 29 p.] | - |
dc.description.abstract | Deep learning has achieved remarkable performance in many areas, but fine-tuning pre-trained models on new data can lead to catastrophic forgetting. To address this, we propose a novel image-conditional prompt learning approach to continual learning, inspired by the human working memory system. Our approach eliminates the need for data-storage buffers and prompt pools. Instead, we train only a lightweight MLP to generate prompts, leaving the rest of the model untouched. Leveraging CLIP-based models lets us align vision and text, enabling comprehensive multi-modal learning. Our approach also uses regularization and knowledge distillation to retain prior knowledge while adapting to new tasks. | - |
dc.language | eng | - |
dc.publisher | 한국과학기술원 | - |
dc.subject | 지속 가능한 학습; 비전-언어 사전 훈련; 정규화; 지식 증류 기법; 프롬프트 학습 | - |
dc.subject | Continual learning; Vision-language pre-training; Regularization; Knowledge distillation; Prompt learning | - |
dc.title | Continual learning using image-conditional prompt for vision-language pre-training | - |
dc.title.alternative | 이미지 조건부 프롬프트를 사용한 비전-텍스트 교육의 지속적인 학습 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology: Kim Jaechul Graduate School of AI | - |
dc.contributor.alternativeadvisor | Ye, Jong Chul | - |
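
The abstract above outlines the method at a high level: a frozen CLIP-like backbone, a lightweight MLP that generates prompts conditioned on each image, and regularization plus knowledge distillation against the previous task's model to limit forgetting. Below is a minimal PyTorch sketch of that setup under stated assumptions; the encoders are random-weight placeholders, and all names (`PromptMLP`, `class_logits`, `train_step`) and loss weights are hypothetical illustrations, not the thesis's actual implementation.

```python
# Minimal sketch of image-conditional prompt learning for continual learning,
# assuming a frozen CLIP-like backbone. Everything here (PromptMLP,
# class_logits, train_step, the toy encoders, and the loss weights) is a
# hypothetical stand-in, not the thesis's actual code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB = 512       # shared image/text embedding dim (assumed)
N_PROMPT = 4    # prompt tokens generated per image (assumed)

class PromptMLP(nn.Module):
    """Lightweight MLP: image feature -> N_PROMPT prompt embeddings.
    This is the only trainable component; both encoders stay frozen."""
    def __init__(self, dim=EMB, n_prompt=N_PROMPT):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, n_prompt * dim))

    def forward(self, img_feat):                  # (B, dim)
        return self.net(img_feat)                 # (B, n_prompt * dim)

# Frozen random-weight placeholders for the CLIP image/text towers.
image_encoder = nn.Linear(3 * 32 * 32, EMB).requires_grad_(False)
text_encoder = nn.Linear(N_PROMPT * EMB + EMB, EMB).requires_grad_(False)

def class_logits(mlp, images, class_embs, temp=0.07):
    """Score each image against each class via image-conditioned prompts."""
    img_feat = F.normalize(image_encoder(images.flatten(1)), dim=-1)  # (B, E)
    prompts = mlp(img_feat)                                           # (B, P*E)
    B, C = img_feat.shape[0], class_embs.shape[0]
    # Pair every image's prompts with every class-name embedding.
    txt_in = torch.cat([prompts.unsqueeze(1).expand(B, C, -1),
                        class_embs.unsqueeze(0).expand(B, C, -1)], dim=-1)
    txt_feat = F.normalize(text_encoder(txt_in), dim=-1)              # (B, C, E)
    return (txt_feat @ img_feat.unsqueeze(-1)).squeeze(-1) / temp     # (B, C)

def train_step(mlp, old_mlp, images, labels, class_embs, opt,
               lam_reg=1.0, lam_kd=1.0, T=2.0):
    logits = class_logits(mlp, images, class_embs)
    loss = F.cross_entropy(logits, labels)
    if old_mlp is not None:  # after the first task
        # Regularization: keep the new MLP weights near the previous snapshot.
        reg = sum((p - q).pow(2).sum()
                  for p, q in zip(mlp.parameters(), old_mlp.parameters()))
        # Knowledge distillation: match the previous model's soft predictions.
        with torch.no_grad():
            old_logits = class_logits(old_mlp, images, class_embs)
        kd = F.kl_div(F.log_softmax(logits / T, dim=-1),
                      F.softmax(old_logits / T, dim=-1),
                      reduction="batchmean") * T * T
        loss = loss + lam_reg * reg + lam_kd * kd
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy task sequence: snapshot the MLP after each task for the reg/KD terms.
mlp, old_mlp = PromptMLP(), None
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
class_embs = torch.randn(10, EMB)  # placeholder class-name embeddings
for task in range(3):
    for _ in range(2):
        images = torch.randn(8, 3, 32, 32)
        labels = torch.randint(0, 10, (8,))
        train_step(mlp, old_mlp, images, labels, class_embs, opt)
    old_mlp = copy.deepcopy(mlp).requires_grad_(False)
```

Only the MLP's parameters receive gradients; snapshotting `old_mlp` after each task is what lets the regularizer and the distillation term pull the new prompts toward behavior learned on earlier tasks, with no data buffer or prompt pool involved.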