Enhancing flexibility and adaptability of bayesian prompt learning in vision-language pretrained model

DC Field: Value
dc.contributor.advisor: 문일철
dc.contributor.author: Cho, Youngjae
dc.contributor.author: 조영재
dc.date.accessioned: 2024-07-30T19:31:03Z
dc.date.available: 2024-07-30T19:31:03Z
dc.date.issued: 2024
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1096690&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/321473
dc.description: Master's thesis - KAIST, Department of Industrial and Systems Engineering, 2024.2, [iv, 23 p.]
dc.description.abstract: Recent vision-language pre-trained (VLP) models have become the backbone for many downstream tasks, but they are typically used as frozen models without further learning. Prompt learning improves a pre-trained VLP model by adding a learnable context vector to the inputs of the text encoder. In a few-shot learning scenario for a downstream task, MLE training can lead the context vector to over-fit dominant image features in the training data. This overfitting can harm generalization, especially in the presence of a distribution shift between the training and test datasets. This paper presents a Bayesian framework for prompt learning that alleviates over-fitting in few-shot learning applications and increases the adaptability of prompts to unseen instances. Specifically, modeling a data-dependent prior enhances the adaptability of text features to both seen and unseen image features without a performance trade-off between them. Within the Bayesian framework, we utilize Wasserstein Gradient Flow to estimate our target posterior distribution, which enables our prompt to flexibly capture the complex modes of image features. We demonstrate the effectiveness of our method in several experiments on benchmark datasets, showing statistically significant performance improvements over existing methods.
dc.language: eng
dc.publisher: 한국과학기술원
dc.subject: Prompt; Bayesian; Posterior distribution; Multi-modal
dc.subject: Prompt; Bayesian inference; Wasserstein gradient flow; Multi-modal
dc.title: Enhancing flexibility and adaptability of bayesian prompt learning in vision-language pretrained model
dc.title.alternative: 비전-언어 사전 훈련 모델에서 베이지안 프롬프트 학습의 유연성 및 적응성 향상
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: KAIST, Department of Industrial and Systems Engineering
dc.contributor.alternativeauthor: Moon, Il-Chul
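The prompt-learning setup described in the abstract, where a learnable context vector is prepended to class-token embeddings before a frozen text encoder, can be sketched as follows. All shapes, the toy encoder, and the variable names (`context`, `frozen_text_encoder`, etc.) are illustrative assumptions, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, n_ctx, n_classes = 8, 4, 3

# Learnable context tokens shared across classes (the only trainable part).
context = rng.normal(size=(n_ctx, embed_dim))
# Frozen class-name token embeddings; one token per class for brevity.
class_tokens = rng.normal(size=(n_classes, 1, embed_dim))

def frozen_text_encoder(tokens):
    """Stand-in for the frozen VLP text encoder: mean-pool, then L2-normalize."""
    feat = tokens.mean(axis=0)
    return feat / np.linalg.norm(feat)

# Build one prompt per class, [context tokens; class token], and encode it.
text_features = np.stack([
    frozen_text_encoder(np.concatenate([context, class_tokens[c]], axis=0))
    for c in range(n_classes)
])

# A normalized image feature from the frozen image encoder (random stand-in).
image_feature = rng.normal(size=embed_dim)
image_feature /= np.linalg.norm(image_feature)

# Class logits are cosine similarities; MLE training would update `context`
# to maximize the likelihood of correct labels, which is where the abstract's
# over-fitting concern (and the Bayesian treatment of `context`) enters.
logits = text_features @ image_feature
```

The Bayesian variant summarized in the abstract would replace the point estimate of `context` with a posterior distribution over context vectors, approximated via Wasserstein Gradient Flow; that machinery is beyond this sketch.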
Appears in Collection: IE-Theses_Master (Master's theses)
Files in This Item: There are no files associated with this item.
