One-Shot Exemplification Modeling via Latent Sense Representations

DC Field | Value | Language
dc.contributor.author | Harvill, John | ko
dc.contributor.author | Yoon, Hee Suk | ko
dc.contributor.author | Yoon, Eunseop | ko
dc.contributor.author | Hasegawa-Johnson, Mark | ko
dc.contributor.author | Yoo, Chang-Dong | ko
dc.date.accessioned | 2023-11-21T09:04:06Z | -
dc.date.available | 2023-11-21T09:04:06Z | -
dc.date.created | 2023-11-21 | -
dc.date.issued | 2023-07-13 | -
dc.identifier.citation | 8th Workshop on Representation Learning for NLP, RepL4NLP 2023, co-located with ACL 2023, pp. 303-314 | -
dc.identifier.uri | http://hdl.handle.net/10203/314975 | -
dc.description.abstract | Exemplification modeling is a recently proposed task that aims to produce a viable sentence using a target word that takes on a specific meaning. This task can be particularly challenging for polysemous words since they can have multiple meanings. In this paper, we propose a one-shot variant of the exemplification modeling task such that labeled data is not needed during training, making it possible to train our system on a raw text corpus. Given one example at test time, our proposed approach can generate diverse and fluent examples in which the target word accurately matches its intended meaning. We compare our approach to a fully supervised baseline trained with different amounts of data and focus our evaluation on polysemous words. We use both automatic and human evaluations to demonstrate how each model performs on both seen and unseen words. Our proposed approach performs similarly to the fully supervised baseline despite not using labeled data during training. | -
dc.language | English | -
dc.publisher | Association for Computational Linguistics (ACL) | -
dc.title | One-Shot Exemplification Modeling via Latent Sense Representations | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85174541839 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 303 | -
dc.citation.endingpage | 314 | -
dc.citation.publicationname | 8th Workshop on Representation Learning for NLP, RepL4NLP 2023, co-located with ACL 2023 | -
dc.identifier.conferencecountry | CA | -
dc.identifier.conferencelocation | Toronto | -
dc.identifier.doi | 10.18653/v1/2023.repl4nlp-1.25 | -
dc.contributor.localauthor | Yoo, Chang-Dong | -
dc.contributor.nonIdAuthor | Harvill, John | -
dc.contributor.nonIdAuthor | Hasegawa-Johnson, Mark | -
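
For readers unfamiliar with the task described in the abstract, the following is a minimal sketch of what one-shot exemplification modeling looks like at inference time: given a target word and a single example sentence that pins down the intended sense, the system generates new sentences using that word in the same sense. The sketch uses a generic off-the-shelf instruction-tuned seq2seq model; the model name ("google/flan-t5-base"), prompt template, and decoding settings are illustrative assumptions, not the paper's latent-sense-representation method.

# Minimal sketch of one-shot exemplification modeling at test time.
# NOTE: this is NOT the paper's model or prompt format; the model choice,
# prompt wording, and decoding settings are assumptions for illustration.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumed: any instruction-tuned seq2seq model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

target_word = "bank"
# The single test-time example that disambiguates the intended sense
# (here, "bank" as the side of a river, not a financial institution).
one_shot_example = "We had a picnic on the grassy bank of the river."

prompt = (
    f"Write a new sentence using the word '{target_word}' "
    f"with the same meaning as in: \"{one_shot_example}\""
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,      # sampling encourages diverse generated examples
    top_p=0.9,
    num_return_sequences=3,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))

A fully supervised baseline would instead be trained on (word, sense, example sentence) triples; the paper's contribution is that its one-shot system needs only raw text during training yet, per the abstract, performs comparably.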
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
