Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker

DC Field: Value (Language)
dc.contributor.author: Cho, Sukmin (ko)
dc.contributor.author: Jeong, Soyeong (ko)
dc.contributor.author: Seo, Jeongyeon (ko)
dc.contributor.author: Park, Jong-Cheol (ko)
dc.date.accessioned: 2023-11-14T10:03:25Z
dc.date.available: 2023-11-14T10:03:25Z
dc.date.created: 2023-11-13
dc.date.issued: 2023-07-10
dc.identifier.citation: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, pp. 960-971
dc.identifier.uri: http://hdl.handle.net/10203/314659
dc.description.abstract: Re-rankers, which order retrieved documents by their relevance scores for a given query, have gained attention in information retrieval (IR). Rather than fine-tuning a pre-trained language model (PLM), a large-scale language model (LLM) can be used as a zero-shot re-ranker with excellent results. Although LLMs are highly dependent on their prompts, the impact and optimization of prompts for zero-shot re-rankers have not yet been explored. Along with highlighting the impact of prompt optimization on the zero-shot re-ranker, we propose a novel discrete prompt optimization method, Constrained Prompt generation (Co-Prompt), together with a metric that estimates the optimum for re-ranking. Co-Prompt guides text generated by a PLM toward optimal prompts based on this metric, without any parameter updates. Experimental results demonstrate that Co-Prompt achieves outstanding re-ranking performance compared to the baselines. Moreover, Co-Prompt generates prompts that are more interpretable to humans than those of other prompt optimization methods.
dc.language: English
dc.publisher: Association for Computational Linguistics (ACL)
dc.title: Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85175229467
dc.type.rims: CONF
dc.citation.beginningpage: 960
dc.citation.endingpage: 971
dc.citation.publicationname: 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
dc.identifier.conferencecountry: CA
dc.identifier.conferencelocation: Toronto
dc.contributor.localauthor: Park, Jong-Cheol
dc.contributor.nonIdAuthor: Seo, Jeongyeon
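
The abstract above describes using an LLM as a zero-shot re-ranker whose behavior hinges on a discrete prompt. As a rough illustration only, the sketch below scores each retrieved document by the log-likelihood of the query conditioned on the document and a candidate prompt, using a Hugging Face causal LM. The model name ("gpt2"), the prompt string, and the query-likelihood scoring rule are assumptions made for this sketch, not the paper's exact formulation; the prompt string itself is the object Co-Prompt would search for.

```python
# Illustrative sketch (not the paper's implementation): zero-shot re-ranking
# with a causal LM by scoring the query's log-likelihood conditioned on each
# document plus a discrete prompt. Model name and prompt are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder small model; the paper targets larger PLMs/LLMs
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def relevance_score(query: str, document: str, prompt: str) -> float:
    """Mean log-likelihood of the query tokens, conditioned on document + prompt."""
    context_ids = tokenizer(f"{document} {prompt}", return_tensors="pt").input_ids
    query_ids = tokenizer(f" {query}", return_tensors="pt").input_ids
    input_ids = torch.cat([context_ids, query_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : context_ids.shape[1]] = -100  # score only the query tokens
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean negative log-likelihood
    return -loss.item()


def rerank(query: str, documents: list[str], prompt: str) -> list[str]:
    """Order retrieved documents by descending prompt-conditioned relevance."""
    return sorted(documents, key=lambda d: relevance_score(query, d, prompt), reverse=True)


# The discrete prompt below is a hypothetical example; finding a better one
# automatically is exactly the optimization problem Co-Prompt addresses.
docs = [
    "ACL 2023, the 61st Annual Meeting of the ACL, took place in Toronto.",
    "Re-rankers reorder retrieved documents by estimated relevance to a query.",
]
print(rerank("Where was ACL 2023 held?", docs, "Please write a question based on this passage."))
```

Under this framing, prompt optimization amounts to searching over discrete prompt strings that maximize re-ranking quality, which is the role the metric proposed in the paper plays.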
Appears in Collection
CS-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
