Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization

Sentence summarization shortens a given text while preserving its core content. Unsupervised approaches have been studied to summarize texts without human-written summaries. However, recent unsupervised models are extractive: they remove words from the input text and are therefore less flexible than abstractive summarization. In this work, we devise an abstractive model based on reinforcement learning that requires no ground-truth summaries. We formulate unsupervised summarization as a Markov decision process with rewards representing summary quality. To further enhance summary quality, we develop a multi-summary learning mechanism that generates multiple summaries of varying lengths for a given text, letting the summaries mutually enhance one another. Experimental results show that the proposed model substantially outperforms both abstractive and extractive baselines, while frequently generating new words not contained in the input texts.
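As a loose illustration of the policy-gradient formulation sketched in the abstract, the toy code below trains a per-word keep/drop policy with REINFORCE against a made-up reward combining word overlap and brevity. This is a simplified extractive stand-in used only to show the MDP-with-reward mechanics; the paper's actual model is abstractive and its reward, architecture, and multi-summary mechanism are not reproduced here, so every name and formula in this sketch is an assumption.

```python
import math
import random

random.seed(0)

# Hypothetical reward: content overlap with the source plus a brevity bonus.
# The paper's actual quality rewards are richer; this is illustrative only.
def reward(src, summ):
    if not summ:
        return 0.0
    overlap = len(set(summ) & set(src)) / len(summ)
    brevity = 1.0 - len(summ) / len(src)
    return overlap + brevity

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Minimal REINFORCE: theta[i] is the keep-logit for word i. Sampling a
# summary is one episode of the MDP, and the episode-level reward updates
# every keep/drop decision -- no ground-truth summary is ever used.
def train(src, steps=500, lr=0.5):
    theta = [0.0] * len(src)
    for _ in range(steps):
        probs = [sigmoid(t) for t in theta]
        keep = [random.random() < p for p in probs]
        summ = [w for w, k in zip(src, keep) if k]
        r = reward(src, summ)
        # Policy-gradient step: grad of log pi for a Bernoulli action
        # is (action - probability).
        for i in range(len(src)):
            theta[i] += lr * r * ((1.0 if keep[i] else 0.0) - probs[i])
    return theta

src = "the quick brown fox jumps over the lazy dog".split()
theta = train(src)
summary = [w for w, t in zip(src, theta) if sigmoid(t) > 0.5]
```

The multi-summary mechanism described above would, under this framing, run several such policies targeting different summary lengths and share reward signal across them; that extension is omitted for brevity.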
Publisher
Association for Computational Linguistics (ACL)
Issue Date
2022-12
Language
English
Citation

Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 2939-2951

URI
http://hdl.handle.net/10203/305999
Appears in Collection
IE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
