NASH: On structured pruning for encoder-decoder language models (Korean title: Structured pruning for lightweight encoder-decoder language models)

Although generative language models are increasingly popular, previous pruning studies have focused on encoder-only models rather than generative ones. This paper investigates the considerations for structured pruning of encoder-decoder models, one family of generative language models. First, we demonstrate that straightforwardly applying existing structured pruning methods to encoder-decoder models is ineffective for accelerating inference. We then suggest two design philosophies for structured pruning of encoder-decoder models: 1) decoder depth and encoder width are the essential factors for accelerating inference and enhancing output quality, respectively; 2) mitigating training instability is important. Based on these philosophies, we propose NASH (NArrow encoder SHallow decoder), a novel framework for accelerating inference of encoder-decoder models. Extensive experiments on diverse generation and inference tasks validate the effectiveness of our method in both speedup and output quality. NASH offers a practical and efficient solution for accelerating encoder-decoder language models, enhancing their deployability in resource-constrained environments.
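The first design philosophy can be made concrete with a back-of-the-envelope count of layer invocations during autoregressive generation: the encoder processes the source sequence once, while the decoder runs once per generated token, so decoder depth dominates generation latency. A minimal sketch (layer counts and token budget are illustrative, not taken from the thesis):

```python
def layer_calls(enc_layers: int, dec_layers: int, gen_tokens: int = 128):
    """Count layer invocations for one generation request.

    The encoder runs a single forward pass over the source; the decoder
    runs one forward pass per generated token (autoregressive decoding).
    """
    return enc_layers, dec_layers * gen_tokens


# Baseline T5-style model: 12 encoder / 12 decoder layers.
base_enc, base_dec = layer_calls(12, 12)   # (12, 1536)

# NASH-style: keep the encoder deep (prune its width instead),
# make the decoder shallow.
nash_enc, nash_dec = layer_calls(12, 3)    # (12, 384)

print(base_enc, base_dec)
print(nash_enc, nash_dec)
```

Under this count, shrinking the decoder from 12 to 3 layers cuts decoder-layer calls fourfold, whereas narrowing the encoder leaves the call count unchanged, which is why encoder width is instead the lever for output quality.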
Advisors
윤세영 (Se-Young Yun)
Description
Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI, 2023.8, [v, 30 p.]

Keywords

Natural language processing; Language model; Model compression; Pruning

URI
http://hdl.handle.net/10203/320549
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045737&flag=dissertation
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
