Towards continual knowledge learning of language models (언어 모델의 지속적인 지식 학습)

DC Field | Value | Language
dc.contributor.advisor | 서민준 (Seo, Minjoon) | -
dc.contributor.author | Jang, Joel | -
dc.contributor.author | 장요엘 | -
dc.date.accessioned | 2024-07-25T19:30:49Z | -
dc.date.available | 2024-07-25T19:30:49Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045744&flag=dissertation | en_US
dc.identifier.uri | http://hdl.handle.net/10203/320556 | -
dc.description | Master's thesis - KAIST (한국과학기술원) : Kim Jaechul Graduate School of AI (김재철AI대학원), 2023.8, [ii, 31 p.] | -
dc.description.abstract | Large Language Models (LMs) are known to encode world knowledge in their parameters as they pretrain on vast web corpora, and this knowledge is often utilized for knowledge-dependent downstream tasks such as question answering, fact-checking, and open dialogue. In real-world scenarios, the world knowledge stored in LMs can quickly become outdated as the world changes, yet it is non-trivial to avoid catastrophic forgetting and reliably acquire new knowledge while preserving invariant knowledge. To push the community towards better maintenance of ever-changing LMs, we formulate a new continual learning (CL) problem called Continual Knowledge Learning (CKL). We construct a new benchmark and metric to quantify the retention of time-invariant world knowledge, the update of outdated knowledge, and the acquisition of new knowledge. We adopt applicable recent methods from the literature to create several strong baselines. Through extensive experiments, we find that CKL exhibits unique challenges not addressed in previous CL setups, where parameter expansion is necessary to reliably retain and learn knowledge simultaneously. By highlighting the critical causes of knowledge forgetting, we show that CKL is a challenging and important problem that helps us better understand and train ever-changing LMs. | -
dc.language | eng | -
dc.publisher | 한국과학기술원 (KAIST) | -
dc.subject | 지속적인 학습; 대규모 언어 모델; 지식 습득; 치명적인 망각; 자연어처리 | -
dc.subject | Continual learning; Large language models; Knowledge acquisition; Catastrophic forgetting; Natural language processing | -
dc.title | Towards continual knowledge learning of language models | -
dc.title.alternative | 언어 모델의 지속적인 지식 학습 | -
dc.type | Thesis (Master) | -
dc.identifier.CNRN | 325007 | -
dc.description.department | 한국과학기술원 : 김재철AI대학원 (KAIST : Kim Jaechul Graduate School of AI) | -
dc.contributor.alternativeauthor | Seo, Minjoon | -
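The abstract above describes a benchmark and metric that quantify three things after an LM continues pretraining on newer text: retention of time-invariant knowledge, update of outdated knowledge, and acquisition of new knowledge. The Python sketch below is purely illustrative of that kind of evaluation and is not the thesis' actual benchmark code; the probe format, the exact-match scoring, and the forgotten-versus-gained ratio at the end are assumptions made for this example.

```python
# Illustrative only: a toy evaluation loop in the spirit of the CKL setup the
# abstract describes. All names (Probe, accuracy, ckl_report, forgotten_vs_gained)
# and the exact-match scoring are assumptions for this sketch, not the thesis' code.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Probe:
    question: str  # cloze-style query, e.g. "The president of the US is ___."
    answer: str    # gold answer string


def accuracy(model: Callable[[str], str], probes: List[Probe]) -> float:
    """Fraction of probes answered correctly under exact string match."""
    if not probes:
        return 0.0
    return sum(model(p.question).strip() == p.answer for p in probes) / len(probes)


def ckl_report(model: Callable[[str], str],
               invariant: List[Probe],
               updated: List[Probe],
               new: List[Probe]) -> Dict[str, float]:
    """Score the three quantities the abstract names: retention of time-invariant
    knowledge, update of outdated knowledge, and acquisition of new knowledge."""
    return {
        "invariant_retention": accuracy(model, invariant),
        "updated_knowledge": accuracy(model, updated),
        "new_knowledge": accuracy(model, new),
    }


def forgotten_vs_gained(before: Dict[str, float], after: Dict[str, float]) -> float:
    """Toy trade-off ratio: invariant accuracy lost per point of updated + new
    accuracy gained (lower is better). Loosely inspired by, but not identical to,
    the metric proposed in the thesis."""
    forgotten = max(before["invariant_retention"] - after["invariant_retention"], 0.0)
    gained = ((after["updated_knowledge"] - before["updated_knowledge"])
              + (after["new_knowledge"] - before["new_knowledge"]))
    return forgotten / gained if gained > 0 else float("inf")
```

In practice, `model` would wrap an actual LM (for example, a seq2seq model queried with masked spans), and `before`/`after` would be reports computed before and after continued pretraining on the new corpus.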
Appears in Collection
AI-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
