Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images

DC Field | Value | Language
dc.contributor.author | Lee, Nyoungwoo | ko
dc.contributor.author | Shin, Suwon | ko
dc.contributor.author | Choo, Jaegul | ko
dc.contributor.author | Choi, Ho-Jin | ko
dc.contributor.author | Myaeng, Sung-Hyun | ko
dc.date.accessioned | 2021-11-01T06:41:54Z | -
dc.date.available | 2021-11-01T06:41:54Z | -
dc.date.created | 2021-10-27 | -
dc.date.issued | 2021-08 | -
dc.identifier.citation | Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) / 11th International Joint Conference on Natural Language Processing (IJCNLP) / 6th Workshop on Representation Learning for NLP (RepL4NLP), pp. 897 - 906 | -
dc.identifier.uri | http://hdl.handle.net/10203/288485 | -
dc.description.abstract | In multi-modal dialogue systems, it is important to allow the use of images as part of a multi-turn conversation. Training such dialogue systems generally requires a large-scale dataset consisting of multi-turn dialogues that involve images, but such datasets rarely exist. In response, this paper proposes a 45k multi-modal dialogue dataset created with minimal human intervention. Our method to create such a dataset consists of (1) preparing and pre-processing text dialogue datasets, (2) creating image-mixed dialogues by using a text-to-image replacement technique, and (3) employing a contextual-similarity-based filtering step to ensure the contextual coherence of the dataset. To evaluate the validity of our dataset, we devise a simple retrieval model for dialogue sentence prediction tasks. Automatic metrics and human evaluation results on such tasks show that our dataset can be effectively used as training data for multi-modal dialogue systems which require an understanding of images and text in a context-aware manner. Our dataset and generation code are available at https://github.com/shh1574/multi-modal-dialogue-dataset. | -
dc.language | English | -
dc.publisher | ASSOC COMPUTATIONAL LINGUISTICS-ACL | -
dc.title | Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images | -
dc.type | Conference | -
dc.identifier.wosid | 000694699200113 | -
dc.identifier.scopusid | 2-s2.0-85122195136 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 897 | -
dc.citation.endingpage | 906 | -
dc.citation.publicationname | Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) / 11th International Joint Conference on Natural Language Processing (IJCNLP) / 6th Workshop on Representation Learning for NLP (RepL4NLP) | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | ELECTR NETWORK | -
dc.contributor.localauthor | Choi, Ho-Jin | -
dc.contributor.nonIdAuthor | Lee, Nyoungwoo | -
dc.contributor.nonIdAuthor | Shin, Suwon | -
dc.contributor.nonIdAuthor | Choo, Jaegul | -
dc.contributor.nonIdAuthor | Myaeng, Sung-Hyun | -
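
The abstract above describes a three-step construction pipeline. As an illustration only, the following is a minimal sketch of step (2), text-to-image replacement, and step (3), contextual-similarity-based filtering. The sentence encoder (all-MiniLM-L6-v2 via the sentence-transformers library), the similarity threshold, and the data layout are assumptions made for this sketch, not details taken from the paper; the authors' actual generation code is at the GitHub URL given in the abstract.

```python
# Illustrative sketch of text-to-image replacement (step 2) and
# contextual-similarity-based filtering (step 3). Model name, threshold,
# and data layout are assumptions, not the paper's exact setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed sentence encoder

def build_image_mixed_dialogue(dialogue, image_captions, sim_threshold=0.6):
    """For each utterance, find the most similar image caption (replacement
    candidate), then keep it only if the image stays close to the rest of
    the dialogue context (filtering)."""
    utt_embs = model.encode(dialogue, convert_to_tensor=True)
    cap_embs = model.encode(image_captions, convert_to_tensor=True)

    kept = []
    for i, utt_emb in enumerate(utt_embs):
        # Step (2): pick the image whose caption best matches this utterance.
        sims = util.cos_sim(utt_emb, cap_embs)[0]
        best = int(sims.argmax())

        # Step (3): require the chosen caption to remain similar to the
        # surrounding dialogue context, to preserve contextual coherence.
        context = dialogue[:i] + dialogue[i + 1:]
        if not context:
            continue
        ctx_emb = model.encode(" ".join(context), convert_to_tensor=True)
        if float(util.cos_sim(ctx_emb, cap_embs[best])) >= sim_threshold:
            kept.append({"turn": i,
                         "image_index": best,
                         "caption_similarity": float(sims[best])})
    return kept

if __name__ == "__main__":
    dialogue = ["I adopted a puppy last weekend!",
                "Aww, what breed is it?",
                "A golden retriever, he loves the park."]
    captions = ["a golden retriever puppy playing in a park",
                "a bowl of ramen on a table"]
    print(build_image_mixed_dialogue(dialogue, captions))
```

In a sketch like this, lowering the threshold keeps more image-mixed dialogues at the cost of contextual coherence; the paper's filtering step tunes this trade-off to keep the 45k dataset coherent.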
Appears in Collection
CS - Conference Papers (학술회의논문, Conference Papers)
Files in This Item
There are no files associated with this item.