Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks

Abstract
Translation has played a crucial role in improving performance on multilingual tasks: (1) generating target-language data from source-language data for training, and (2) generating source-language data from target-language data for inference. However, prior work has not considered using both translations simultaneously. This paper shows that combining them produces synergistic results on various multilingual sentence classification tasks. We empirically find that translation artifacts stylized by translators are the main factor behind the performance gain. Based on this analysis, we adopt two training methods, SupCon and MixUp, that take translation artifacts into account. Furthermore, we propose a cross-lingual fine-tuning algorithm called MUSC, which uses SupCon and MixUp jointly and improves performance. Our code is available at https://github.com/jongwooko/MUSC.
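
As a rough illustration of how MixUp and SupCon might be combined over translation-augmented batches, the sketch below treats each source sentence and its translation as two views of the same label, then sums a MixUp cross-entropy term with a supervised contrastive term. The encoder/classifier interfaces, batch layout, and equal loss weighting are assumptions made for illustration only, not the authors' procedure; the actual MUSC implementation is at https://github.com/jongwooko/MUSC.

# Illustrative sketch only: combining MixUp and SupCon over a batch of source
# sentences and their translations. Interfaces and loss weighting are assumptions;
# the authors' actual implementation is at https://github.com/jongwooko/MUSC.
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    # Supervised contrastive loss (Khosla et al., 2020) over L2-normalized features.
    features = F.normalize(features, dim=-1)
    logits = features @ features.T / temperature
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    logits = logits.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)            # avoid -inf * 0 = NaN
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob * pos_mask.float()).sum(dim=1).div(pos_counts).mean()

def mixup(hidden, labels_onehot, alpha=0.2):
    # Interpolate pooled representations and one-hot labels with a Beta(alpha, alpha) weight.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(hidden.size(0), device=hidden.device)
    mixed_h = lam * hidden + (1.0 - lam) * hidden[perm]
    mixed_y = lam * labels_onehot + (1.0 - lam) * labels_onehot[perm]
    return mixed_h, mixed_y

def joint_loss(encoder, classifier, src_batch, tgt_batch, labels, num_classes):
    # A sentence and its translation act as two views sharing one label:
    # MixUp cross-entropy on pooled features plus SupCon across both views.
    h_src = encoder(**src_batch)              # [B, d] pooled source representations (assumed interface)
    h_tgt = encoder(**tgt_batch)              # [B, d] translated counterparts, same labels
    h = torch.cat([h_src, h_tgt], dim=0)
    y = torch.cat([labels, labels], dim=0)

    mixed_h, mixed_y = mixup(h, F.one_hot(y, num_classes).float())
    ce = -(mixed_y * F.log_softmax(classifier(mixed_h), dim=-1)).sum(dim=1).mean()
    return ce + supcon_loss(h, y)             # equal weighting is an assumption

The key design point this sketch tries to convey is that the translated copies are not discarded after augmentation: they stay in the batch so the contrastive term can pull a sentence and its translation together while the mixed cross-entropy term regularizes the classifier.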
Publisher
EMNLP
Issue Date
2022-12-09
Language
English
Citation

Conference on Empirical Methods in Natural Language Processing (EMNLP 2022), pp. 6747-6754

URI
http://hdl.handle.net/10203/302993
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
