Distillation of chain-of-thought reasoning using large language models

DC metadata (Field: Value; language shown in parentheses where recorded)
dc.contributor.advisor: 윤세영 (Yun, Se-Young)
dc.contributor.author: Ho, Namgyu
dc.contributor.author: 허남규
dc.date.accessioned: 2024-07-25T19:30:49Z
dc.date.available: 2024-07-25T19:30:49Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045741&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/320553
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST), Kim Jaechul Graduate School of AI, 2023.8, [vi, 44 p.]
dc.description.abstract: Recent works have shown that chain-of-thought (CoT) prompting can elicit language models to solve complex reasoning tasks step by step. However, prompt-based CoT methods depend on very large models such as GPT-3 175B, which are prohibitive to deploy at scale. In this thesis, we use these large models as reasoning teachers to enable complex reasoning in smaller models and reduce model size requirements by several orders of magnitude. We propose Fine-tune-CoT, a method that generates reasoning samples from very large teacher models to fine-tune smaller models. We evaluate our method on a wide range of public models and complex tasks. We find that Fine-tune-CoT enables substantial reasoning capability in small models, far outperforming prompt-based baselines and even the teacher model on many tasks. Additionally, we extend our method by leveraging the teacher model's ability to generate multiple distinct rationales for each original sample. Enriching the fine-tuning data with such diverse reasoning yields a substantial performance boost across datasets, even for very small models. We conduct ablations and sample studies to understand the emergence of reasoning capabilities in student models. Our code implementation and data are available at https://github.com/itsnamgyu/reasoning-teacher. (See the illustrative sketch following this metadata listing.)
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: 대형 언어 모델; 사고 사슬 추론; 지식 증류; 자연어 처리
dc.subject: Large language models; Chain-of-thought reasoning; Knowledge distillation; Natural language processing
dc.title: Distillation of chain-of-thought reasoning using large language models
dc.title.alternative: 대형 언어 모델을 활용한 사고 사슬 추론 증류
dc.type: Thesis (Master's)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST), Kim Jaechul Graduate School of AI
dc.contributor.alternativeauthor: Yun, Se-Young
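
The abstract above outlines a concrete data-curation procedure: sample step-by-step rationales from a very large teacher model, keep only those that reach the correct answer, and use them to fine-tune a smaller student, optionally with multiple diverse rationales per question. The Python sketch below illustrates that loop under stated assumptions; `teacher_generate`, the prompt template, the final-line answer-matching heuristic, and the `###`/`END` delimiters are placeholders, not the authors' implementation (see https://github.com/itsnamgyu/reasoning-teacher for the official code).

```python
# Illustrative sketch of Fine-tune-CoT data curation, assuming an abstract
# `teacher_generate` callable in place of a real large-model API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Sample:
    question: str
    answer: str  # gold answer, used to filter out incorrect teacher rationales


def curate_finetune_cot_data(
    samples: List[Sample],
    teacher_generate: Callable[[str, float], str],  # hypothetical teacher API
    n_rationales: int = 1,   # > 1 enables the "diverse reasoning" variant
    temperature: float = 0.7,
) -> List[Dict[str, str]]:
    """Sample step-by-step rationales from a large teacher model, keep only
    those that reach the gold answer, and emit prompt/completion pairs for
    fine-tuning a smaller student model."""
    data: List[Dict[str, str]] = []
    for s in samples:
        # Zero-shot-CoT-style prompt to elicit a step-by-step rationale.
        prompt = f"Q: {s.question}\nA: Let's think step by step."
        for _ in range(n_rationales):
            rationale = teacher_generate(prompt, temperature)
            lines = rationale.strip().splitlines()
            # Crude correctness filter: keep the rationale only if the gold
            # answer appears in its final line (an assumed heuristic; the
            # thesis uses task-specific answer extraction).
            if lines and s.answer in lines[-1]:
                # "###" and "END" are placeholder delimiters, not the exact
                # formatting used in the thesis.
                data.append({
                    "prompt": f"{s.question} ###",
                    "completion": f" {rationale.strip()} END",
                })
    return data
```

With `n_rationales=1` this reflects the base Fine-tune-CoT setup; raising it and sampling at a nonzero temperature corresponds to the diverse-reasoning extension, which enriches the fine-tuning data with multiple distinct rationales per question.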
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
