Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions

Cited 0 times in Web of Science · Cited 0 times in Scopus
  • Hits: 7
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Taehyeon Kim | ko
dc.contributor.author | Joonkee Kim | ko
dc.contributor.author | Gihun Lee | ko
dc.contributor.author | Yun, Seyoung | ko
dc.date.accessioned | 2024-07-16T08:00:12Z | -
dc.date.available | 2024-07-16T08:00:12Z | -
dc.date.created | 2024-07-16 | -
dc.date.issued | 2024-05-08 | -
dc.identifier.citation | The Twelfth International Conference on Learning Representations | -
dc.identifier.uri | http://hdl.handle.net/10203/320264 | -
dc.publisher | ICLR | -
dc.title | Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.publicationname | The Twelfth International Conference on Learning Representations | -
dc.identifier.conferencecountry | EI | -
dc.contributor.localauthor | Yun, Seyoung | -
dc.contributor.nonIdAuthor | Taehyeon Kim | -
dc.contributor.nonIdAuthor | Joonkee Kim | -
dc.contributor.nonIdAuthor | Gihun Lee | -
Appears in Collection
AI-Conference Papers (학술대회논문; Conference Papers)
Files in This Item
There are no files associated with this item.
