DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Nyoungwoo | ko |
dc.contributor.author | Shin, Suwon | ko |
dc.contributor.author | Choo, Jaegul | ko |
dc.contributor.author | Choi, Ho-Jin | ko |
dc.contributor.author | Myaeng, Sung-Hyun | ko |
dc.date.accessioned | 2021-11-01T06:41:54Z | - |
dc.date.available | 2021-11-01T06:41:54Z | - |
dc.date.created | 2021-10-27 | - |
dc.date.issued | 2021-08 | - |
dc.identifier.citation | Joint Conference of 59th Annual Meeting of the Association-for-Computational-Linguistics (ACL) / 11th International Joint Conference on Natural Language Processing (IJCNLP) / 6th Workshop on Representation Learning for NLP (RepL4NLP), pp.897 - 906 | - |
dc.identifier.uri | http://hdl.handle.net/10203/288485 | - |
dc.description.abstract | In multi-modal dialogue systems, it is important to allow the use of images as part of a multi-turn conversation. Training such dialogue systems generally requires a large-scale dataset consisting of multi-turn dialogues that involve images, but such datasets rarely exist. In response, this paper proposes a 45k multi-modal dialogue dataset created with minimal human intervention. Our method for creating such a dataset consists of (1) preparing and pre-processing text dialogue datasets, (2) creating image-mixed dialogues by using a text-to-image replacement technique, and (3) employing a contextual-similarity-based filtering step to ensure the contextual coherence of the dataset. To evaluate the validity of our dataset, we devise a simple retrieval model for dialogue sentence prediction tasks. Automatic metrics and human evaluation results on such tasks show that our dataset can be effectively used as training data for multi-modal dialogue systems, which require an understanding of images and text in a context-aware manner. Our dataset and generation code are available at https://github.com/shh1574/multi-modal-dialogue-dataset. | - |
dc.language | English | - |
dc.publisher | ASSOC COMPUTATIONAL LINGUISTICS-ACL | - |
dc.title | Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images | - |
dc.type | Conference | - |
dc.identifier.wosid | 000694699200113 | - |
dc.identifier.scopusid | 2-s2.0-85122195136 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 897 | - |
dc.citation.endingpage | 906 | - |
dc.citation.publicationname | Joint Conference of 59th Annual Meeting of the Association-for-Computational-Linguistics (ACL) / 11th International Joint Conference on Natural Language Processing (IJCNLP) / 6th Workshop on Representation Learning for NLP (RepL4NLP) | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | ELECTR NETWORK | - |
dc.contributor.localauthor | Choi, Ho-Jin | - |
dc.contributor.nonIdAuthor | Lee, Nyoungwoo | - |
dc.contributor.nonIdAuthor | Shin, Suwon | - |
dc.contributor.nonIdAuthor | Choo, Jaegul | - |
dc.contributor.nonIdAuthor | Myaeng, Sung-Hyun | - |