KILT: a Benchmark for Knowledge Intensive Language Tasks

DC Field | Value | Language
dc.contributor.author | Petroni, Fabio | ko
dc.contributor.author | Piktus, Aleksandra | ko
dc.contributor.author | Fan, Angela | ko
dc.contributor.author | Lewis, Patrick | ko
dc.contributor.author | Yazdani, Majid | ko
dc.contributor.author | De Cao, Nicola | ko
dc.contributor.author | Thorne, James | ko
dc.contributor.author | Jernite, Yacine | ko
dc.contributor.author | Karpukhin, Vladimir | ko
dc.contributor.author | Maillard, Jean | ko
dc.contributor.author | Plachouras, Vassilis | ko
dc.contributor.author | Rocktäschel, Tim | ko
dc.contributor.author | Riedel, Sebastian | ko
dc.date.accessioned | 2022-12-26T08:04:18Z | -
dc.date.available | 2022-12-26T08:04:18Z | -
dc.date.created | 2022-12-23 | -
dc.date.issued | 2021-06-10 | -
dc.identifier.citation | 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, pp. 2523-2544 | -
dc.identifier.uri | http://hdl.handle.net/10203/303723 | -
dc.description.abstract | Challenging problems such as open-domain question answering, fact checking, slot filling and entity linking require access to large, external knowledge sources. While some models do well on individual tasks, developing general models is difficult as each task might require computationally expensive indexing of custom knowledge sources, in addition to dedicated infrastructure. To catalyze research on models that condition on specific information in large textual resources, we present a benchmark for knowledge-intensive language tasks (KILT). All tasks in KILT are grounded in the same snapshot of Wikipedia, reducing engineering turnaround through the reuse of components, as well as accelerating research into task-agnostic memory architectures. We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance. We find that a shared dense vector index coupled with a seq2seq model is a strong baseline, outperforming more tailor-made approaches for fact checking, open-domain question answering and dialogue, and yielding competitive results on entity linking and slot filling, by generating disambiguated text. KILT data and code are available at https://github.com/facebookresearch/KILT. | -
dc.language | English | -
dc.publisher | Association for Computational Linguistics (ACL) | -
dc.title | KILT: a Benchmark for Knowledge Intensive Language Tasks | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85137685347 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 2523 | -
dc.citation.endingpage | 2544 | -
dc.citation.publicationname | 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Virtual | -
dc.contributor.localauthor | Thorne, James | -
dc.contributor.nonIdAuthor | Petroni, Fabio | -
dc.contributor.nonIdAuthor | Piktus, Aleksandra | -
dc.contributor.nonIdAuthor | Fan, Angela | -
dc.contributor.nonIdAuthor | Lewis, Patrick | -
dc.contributor.nonIdAuthor | Yazdani, Majid | -
dc.contributor.nonIdAuthor | De Cao, Nicola | -
dc.contributor.nonIdAuthor | Jernite, Yacine | -
dc.contributor.nonIdAuthor | Karpukhin, Vladimir | -
dc.contributor.nonIdAuthor | Maillard, Jean | -
dc.contributor.nonIdAuthor | Plachouras, Vassilis | -
dc.contributor.nonIdAuthor | Rocktäschel, Tim | -
dc.contributor.nonIdAuthor | Riedel, Sebastian | -
Appears in Collection
AI-Conference Papers (학술대회논문)
Files in This Item
There are no files associated with this item.
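The abstract points to the KILT data and code at https://github.com/facebookresearch/KILT. As a minimal sketch only, assuming the unified JSON-lines record layout used by the benchmark (fields such as `input`, `output`, `answer`, and `provenance` with `wikipedia_id`) and a hypothetical local file name, a single record could be inspected like this:

```python
import json

# Minimal sketch for inspecting a KILT task file in the unified JSON-lines
# format (one JSON record per line). The file name below is an assumption:
# point it at any task split downloaded from the KILT repository.
KILT_FILE = "nq-dev-kilt.jsonl"

with open(KILT_FILE, encoding="utf-8") as f:
    first_line = f.readline()

record = json.loads(first_line)
print("input:", record["input"])            # task query, claim, or dialogue context

for output in record.get("output", []):
    print("answer:", output.get("answer"))  # gold answer text, when provided
    # Provenance entries point into the shared Wikipedia snapshot,
    # which KILT evaluates alongside the downstream answer.
    for prov in output.get("provenance", []):
        print("  wikipedia_id:", prov.get("wikipedia_id"),
              "| title:", prov.get("title"))
```

Because every task's provenance points into the same Wikipedia snapshot, the same retrieval components and provenance checks can be reused across fact checking, open-domain question answering, slot filling, entity linking, and dialogue.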
