Generating diverse successive sentences via context learning

A number of neural models have been proposed in the literature to generate meaningful and grammatically correct sentences. However, those models have drawbacks such as a lack of diversity or an inability to generate successive sentences. To apply neural NLP models to real-world tasks such as dialogue, simply predicting the next sentence with an autoregressive model or sampling sentences from a trained generative model is not enough. In this thesis, as an extension of existing sentence-generation research, we propose a new approach for generating diverse successive sentences. We base our model on a variational autoencoder (VAE) and combine it with an additional recurrent network for successive context learning. We evaluate the generated sentences with the type-token ratio (TTR) measure and show that our model is superior to the base model in diversity. Furthermore, by presenting a method of conditioning our model on additional latent codes, we show that the flow of the generated sentences can be changed depending on the given latent code.
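
For context, the type-token ratio (TTR) used as the diversity measure in the abstract is simply the number of distinct tokens (types) divided by the total number of tokens in a sample. The sketch below is a minimal illustration of that measure; the function name and the naive whitespace tokenization are assumptions, not taken from the thesis.

```python
# Minimal TTR sketch. Whitespace tokenization is an assumption for
# illustration; the thesis may tokenize differently.
from typing import Iterable

def type_token_ratio(sentences: Iterable[str]) -> float:
    """Ratio of distinct tokens (types) to total tokens; higher means more diverse."""
    tokens = [tok for sent in sentences for tok in sent.lower().split()]
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# A more diverse sample yields a higher TTR.
print(type_token_ratio(["the cat sat quietly", "a dog barked loudly"]))  # 1.0 (8 types / 8 tokens)
print(type_token_ratio(["the cat sat", "the cat sat"]))                  # 0.5 (3 types / 6 tokens)
```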
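
The architecture the abstract describes, a sentence-level VAE whose decoder is additionally conditioned on a context vector produced by a recurrent network run over the latent codes of preceding sentences, could look roughly like the following. Every module name, dimension, and wiring choice here is a speculative assumption made for illustration; the record does not specify the thesis's exact design.

```python
# Speculative sketch: sentence VAE + recurrent successive-context network.
# All names and hyperparameters are illustrative assumptions, not the thesis's.
import torch
import torch.nn as nn

class ContextualSentenceVAE(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Recurrent network over the latent codes of preceding sentences.
        self.context_rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        # Decoder consumes token embeddings concatenated with [z ; context].
        self.decoder = nn.GRU(embed_dim + latent_dim + hidden_dim, hidden_dim,
                              batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, tokens):
        # tokens: (B, T) -> latent code z: (B, latent_dim) via reparameterization
        _, h = self.encoder(self.embed(tokens))
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar

    def forward(self, tokens, prev_zs):
        # tokens: (B, T) current sentence; prev_zs: (B, S, latent_dim) history
        z, mu, logvar = self.encode(tokens)
        _, ctx = self.context_rnn(prev_zs)            # context from earlier latents
        cond = torch.cat([z, ctx.squeeze(0)], dim=-1)
        cond_seq = cond.unsqueeze(1).expand(-1, tokens.size(1), -1)
        dec_in = torch.cat([self.embed(tokens), cond_seq], dim=-1)
        logits = self.out(self.decoder(dec_in)[0])    # (B, T, vocab_size)
        return logits, mu, logvar

model = ContextualSentenceVAE()
tokens = torch.randint(0, 1000, (2, 7))       # batch of 2 sentences, 7 tokens each
prev_zs = torch.randn(2, 3, 32)               # latent codes of 3 preceding sentences
logits, mu, logvar = model(tokens, prev_zs)   # logits: (2, 7, 1000)
```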
Advisors
Chung, Sae-Young (정세영)
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2018.2, [iii, 17 p.]

Keywords

sentence generation; generative model; successive context learning

URI
http://hdl.handle.net/10203/266916
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734043&flag=dissertation
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
