NASH: on structured pruning for encoder-decoder language models

dc.contributor.advisor: 윤세영
dc.contributor.author: Park, Seungjoon
dc.contributor.author: 박승준
dc.date.accessioned: 2024-07-25T19:30:48Z
dc.date.available: 2024-07-25T19:30:48Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045737&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/320549
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI, 2023.8, [v, 30 p.]
dc.description.abstract: Although generative language models are increasingly popular, previous pruning studies have focused on encoder-only models rather than generative ones. This paper investigates the considerations for structured pruning of encoder-decoder models, one family of generative language models. First, we demonstrate that a straightforward application of existing structured pruning methods to encoder-decoder models is ineffective at accelerating inference. We then suggest two design philosophies for applying structured pruning to encoder-decoder models: 1) decoder depth and encoder width are the essential factors for accelerating inference and enhancing output quality, respectively; 2) mitigating training instability is important. Based on these philosophies, we propose NASH (NArrow encoder SHallow decoder), a novel framework for accelerating inference of encoder-decoder models. Extensive experiments on diverse generation and inference tasks validate the effectiveness of our method in both speedup and output quality. NASH offers a practical and efficient solution for accelerating encoder-decoder language models, enhancing their deployability in resource-constrained environments.
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: 자연어처리; 언어 모델; 경량화; 가지치기
dc.subject: Natural language processing; Language model; Model compression; Pruning
dc.title: NASH: on structured pruning for encoder-decoder language models
dc.title.alternative: 인코더 디코더 언어모델 경량화를 위한 구조적 가지치기 (Structured pruning for lightweight encoder-decoder language models)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST): 김재철AI대학원 (Kim Jaechul Graduate School of AI)
dc.contributor.alternativeauthor: Yun, Seyoung
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
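The abstract's first design philosophy (decoder depth governs inference latency, while encoder width governs output quality) can be illustrated with a toy latency model. Everything below is an illustrative assumption, not the thesis' implementation: the function `latency`, its parameters, and the constant per-layer cost are hypothetical. The key observation is that the encoder runs once over the source in parallel, while the decoder runs sequentially once per generated token, so decoder depth is multiplied by the output length.

```python
# Toy wall-clock model of encoder-decoder inference latency.
# Assumptions (illustrative only, not measurements from the thesis):
#   * each transformer layer's forward pass takes a constant time t_layer
#     (small-batch autoregressive decoding is typically memory-bound,
#     so width-level FLOPs differences are ignored here);
#   * the encoder runs once, in parallel over the whole source sequence;
#   * the decoder runs sequentially, one full pass per generated token.
def latency(enc_layers, dec_layers, tgt_len, t_layer=1.0):
    encoder = enc_layers * t_layer             # single parallel pass
    decoder = dec_layers * tgt_len * t_layer   # tgt_len sequential passes
    return encoder + decoder

base        = latency(enc_layers=12, dec_layers=12, tgt_len=64)   # 780.0
shallow_dec = latency(enc_layers=12, dec_layers=3,  tgt_len=64)   # 204.0
shallow_enc = latency(enc_layers=3,  dec_layers=12, tgt_len=64)   # 771.0

print(f"shallow decoder speedup: {base / shallow_dec:.2f}x")  # ~3.82x
print(f"shallow encoder speedup: {base / shallow_enc:.2f}x")  # ~1.01x
```

Under this model, removing decoder layers cuts work out of every autoregressive step, while removing encoder layers saves only a single pass, which is why NASH keeps the decoder shallow and instead narrows the encoder to preserve output quality.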
