Distilling Linguistic Context for Language Model Compression

Cited 6 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Park, GeonDo | ko
dc.contributor.author | Kim, Gyeongman | ko
dc.contributor.author | Yang, Eunho | ko
dc.date.accessioned | 2021-12-14T06:51:04Z | -
dc.date.available | 2021-12-14T06:51:04Z | -
dc.date.created | 2021-11-30 | -
dc.date.issued | 2021-11-07 | -
dc.identifier.citation | Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 364-378 | -
dc.identifier.uri | http://hdl.handle.net/10203/290611 | -
dc.description.abstract | A computationally expensive and memory-intensive neural network lies behind the recent success of language representation learning. Knowledge distillation, a major technique for deploying such a vast language model in resource-scarce environments, transfers knowledge of individual word representations learned without restrictions. In this paper, inspired by the recent observations that language representations are relatively positioned and carry more semantic knowledge as a whole, we present a new knowledge distillation objective for language representation learning that transfers contextual knowledge via two types of relationships across representations: Word Relation and Layer Transforming Relation. Unlike other recent distillation techniques for language models, our contextual distillation does not place any restrictions on architectural changes between teacher and student. We validate the effectiveness of our method on challenging benchmarks of language understanding tasks, not only for architectures of various sizes, but also in combination with DynaBERT, the recently proposed adaptive size pruning method. | -
dc.language | English | -
dc.publisher | Association for Computational Linguistics | -
dc.title | Distilling Linguistic Context for Language Model Compression | -
dc.type | Conference | -
dc.identifier.wosid | 000855966300030 | -
dc.identifier.scopusid | 2-s2.0-85123474186 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 364 | -
dc.citation.endingpage | 378 | -
dc.citation.publicationname | Conference on Empirical Methods in Natural Language Processing (EMNLP) | -
dc.identifier.conferencecountry | DR | -
dc.identifier.conferencelocation | Online & Barcelo Bavaro Convention Centre, Punta Cana | -
dc.contributor.localauthor | Yang, Eunho | -
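
The abstract above names two relation types used for contextual distillation: Word Relation (pairwise relations among word representations within a sentence) and Layer Transforming Relation (how each token's representation changes across layers). The following is a minimal illustrative sketch, assuming a PyTorch setup; the function names, the cosine-similarity formulation, and the layer-pair selection are assumptions made for illustration and are not the paper's exact objective.

import torch
import torch.nn.functional as F

def word_relation_loss(teacher_h, student_h):
    """Match pairwise word-to-word relations within one sentence.

    teacher_h, student_h: [seq_len, dim] hidden states from one layer of the
    teacher and the student for the same input (illustrative assumption; the
    paper's relation function may differ).
    """
    # Pairwise cosine-similarity matrices over tokens, shape [seq_len, seq_len].
    t_rel = F.cosine_similarity(teacher_h.unsqueeze(1), teacher_h.unsqueeze(0), dim=-1)
    s_rel = F.cosine_similarity(student_h.unsqueeze(1), student_h.unsqueeze(0), dim=-1)
    return F.mse_loss(s_rel, t_rel)

def layer_transforming_relation_loss(teacher_layers, student_layers):
    """Match how each token's representation changes between pairs of layers.

    teacher_layers, student_layers: lists of [seq_len, dim] tensors, one per
    selected layer (student layers assumed already mapped to teacher layers,
    e.g. uniformly).
    """
    loss = 0.0
    n = len(teacher_layers)
    for i in range(n):
        for j in range(i + 1, n):
            # Per-token similarity between the layer-i and layer-j representations.
            t_rel = F.cosine_similarity(teacher_layers[i], teacher_layers[j], dim=-1)
            s_rel = F.cosine_similarity(student_layers[i], student_layers[j], dim=-1)
            loss = loss + F.mse_loss(s_rel, t_rel)
    return loss / max(1, n * (n - 1) // 2)

# Toy usage with random tensors standing in for model hidden states.
seq_len, t_dim, s_dim, n_layers = 8, 768, 384, 3
teacher_layers = [torch.randn(seq_len, t_dim) for _ in range(n_layers)]
student_layers = [torch.randn(seq_len, s_dim) for _ in range(n_layers)]

loss = word_relation_loss(teacher_layers[-1], student_layers[-1]) \
       + layer_transforming_relation_loss(teacher_layers, student_layers)
print(loss.item())

Because both losses compare relation matrices rather than raw hidden states, the teacher and student may use different hidden dimensions, which is consistent with the abstract's claim that the method imposes no architectural restrictions between teacher and student.
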
Appears in Collection
AI - Conference Papers (학술대회논문)
Files in This Item
There are no files associated with this item.
