Bayesian multi-task transfer learning for soft prompt tuning

DC metadata record (Field: Value; language code in brackets where recorded):
dc.contributor.advisor: 김기응 (Kim, Kee-Eung)
dc.contributor.author: Lee, Haeju
dc.contributor.author: 이해주 (Lee, Haeju)
dc.date.accessioned: 2024-07-25T19:30:42Z
dc.date.available: 2024-07-25T19:30:42Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045709&flag=dissertation [en_US]
dc.identifier.uri: http://hdl.handle.net/10203/320521
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST), Kim Jaechul Graduate School of AI, 2023.8, [iv, 23 p.]
dc.description.abstract: Prompt tuning, in which prompts are optimized to adapt large-scale pre-trained language models to downstream tasks instead of fine-tuning the full model parameters, has been shown to be particularly effective when the prompts are trained in the multi-task transfer learning setting. These methods generally involve individually training prompts for each source task and then aggregating them to provide the initialization of the prompt for the target task. However, this approach critically ignores the fact that some of the source tasks could interfere with each other, negatively or positively. We argue that when we extract knowledge from source tasks by training source prompts, we need to account for this correlation among source tasks for better transfer to target tasks. To this end, we propose a Bayesian approach in which we work with the posterior distribution of prompts across source tasks. Using Stein Variational Gradient Descent (SVGD), we obtain representative source prompts corresponding to samples from the posterior, which are then aggregated to constitute the initial target prompt. We present extensive experimental results on standard benchmark NLP tasks, where our Bayesian multi-task transfer learning approach outperforms state-of-the-art methods in many settings. Furthermore, our approach requires no auxiliary models other than the prompt itself, achieving a high degree of parameter efficiency. (A minimal illustrative sketch of the SVGD update appears after this record.)
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: 프롬프트 튜닝; 전이학습; 베이지안 (Korean keywords: prompt tuning; transfer learning; Bayesian)
dc.subject: Prompt tuning; Transfer learning; Bayesian method
dc.title: Bayesian multi-task transfer learning for soft prompt tuning
dc.title.alternative: 소프트 프롬프트 튜닝을 위한 베이지안 멀티태스크 전이학습 (Korean translation of the title)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST), Kim Jaechul Graduate School of AI
dc.contributor.alternativeauthor: Kim, Kee-Eung
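The abstract above describes sampling source-task prompts from a posterior with Stein Variational Gradient Descent (SVGD) and aggregating the resulting particles to initialize the target prompt. Below is a minimal sketch of that idea, not the thesis implementation: the particle count, the `log_posterior_grad` stand-in, and the mean aggregation are assumptions made for the example, and prompts are treated as flattened embedding vectors.

import numpy as np

def rbf_kernel(X, h=None):
    """RBF kernel matrix K and sum_j grad_{x_j} k(x_j, x_i) for SVGD.
    X: (n, d) array of flattened soft-prompt particles."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    if h is None:
        # Median heuristic for the bandwidth, as in Liu & Wang (2016).
        h = np.median(sq_dists) / np.log(X.shape[0] + 1) + 1e-8
    K = np.exp(-sq_dists / h)
    # For k(x, y) = exp(-||x - y||^2 / h):
    # sum_j grad_{x_j} k(x_j, x_i) = (2/h) * (x_i * sum_j K[i, j] - sum_j K[i, j] * x_j)
    grad_K = (2.0 / h) * (K.sum(axis=1, keepdims=True) * X - K @ X)
    return K, grad_K

def svgd_step(X, log_posterior_grad, step_size=1e-2):
    """One SVGD update:
    phi(x_i) = (1/n) sum_j [k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i)]."""
    K, grad_K = rbf_kernel(X)
    phi = (K @ log_posterior_grad(X) + grad_K) / X.shape[0]
    return X + step_size * phi

# Toy usage with a standard-normal "posterior" standing in for the
# source-task prompt posterior (a placeholder, not the thesis objective).
rng = np.random.default_rng(0)
particles = rng.normal(size=(8, 16))        # 8 prompt particles of dimension 16
grad_log_p = lambda X: -X                   # grad log N(0, I)
for _ in range(200):
    particles = svgd_step(particles, grad_log_p, step_size=0.1)
target_prompt_init = particles.mean(axis=0) # aggregate particles -> target prompt init

In the thesis's setting, the gradient of the log posterior would come from backpropagating the source-task losses through the frozen language model with respect to the prompt parameters; the Gaussian used here is only a self-contained stand-in.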
Appears in Collection
AI-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
