DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ko, Bowon | ko |
dc.contributor.author | Choi, Ho-Jin | ko |
dc.date.accessioned | 2020-05-27T02:20:20Z | - |
dc.date.available | 2020-05-27T02:20:20Z | - |
dc.date.created | 2020-05-25 | - |
dc.date.issued | 2020-04 | - |
dc.identifier.citation | ELECTRONICS LETTERS, v.56, no.9, pp.444 - 446 | - |
dc.identifier.issn | 0013-5194 | - |
dc.identifier.uri | http://hdl.handle.net/10203/274323 | - |
dc.description.abstract | In this Letter, the authors introduce a novel approach to learning representations for sentence-level paraphrase identification (PI) using BERT and ten natural language processing tasks. Their method fine-tunes the pre-trained BERT model in two stages to detect whether two sentences stand in a paraphrase relation. Unlike the conventional procedure, which fine-tunes pre-trained BERT directly on the target task such as PI, twice fine-tuning first fine-tunes BERT on an intermediate task (e.g. the general language understanding evaluation (GLUE) tasks, question answering, or the paraphrase adversaries from word scrambling (PAWS) task) and then fine-tunes it on the target PI task. As a result, the multi-fine-tuned BERT model outperformed the model fine-tuned only on the Microsoft Research Paraphrase Corpus (MRPC), a paraphrase dataset, in every case except one, Stanford Sentiment Treebank-2 (SST-2). Multi-task fine-tuning is a simple idea but experimentally powerful: fine-tuning BERT on the PI task alone already gives strong performance, yet additionally fine-tuning on similar tasks improves it further (a 3.4 percentage-point absolute improvement) and makes the approach competitive with state-of-the-art systems. (A minimal sketch of the two-stage procedure follows this record.) | - |
dc.language | English | - |
dc.publisher | INST ENGINEERING TECHNOLOGY-IET | - |
dc.title | Twice fine-tuning deep neural networks for paraphrase identification | - |
dc.type | Article | - |
dc.identifier.wosid | 000530281100011 | - |
dc.identifier.scopusid | 2-s2.0-85084281629 | - |
dc.type.rims | ART | - |
dc.citation.volume | 56 | - |
dc.citation.issue | 9 | - |
dc.citation.beginningpage | 444 | - |
dc.citation.endingpage | 446 | - |
dc.citation.publicationname | ELECTRONICS LETTERS | - |
dc.identifier.doi | 10.1049/el.2019.4183 | - |
dc.contributor.localauthor | Choi, Ho-Jin | - |
dc.contributor.nonIdAuthor | Ko, Bowon | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | neural nets | - |
dc.subject.keywordAuthor | text analysis | - |
dc.subject.keywordAuthor | data analysis | - |
dc.subject.keywordAuthor | learning (artificial intelligence) | - |
dc.subject.keywordAuthor | natural language processing | - |
dc.subject.keywordAuthor | fine-tuning deep neural networks | - |
dc.subject.keywordAuthor | general language understanding evaluation tasks | - |
dc.subject.keywordAuthor | paraphrase adversaries | - |
dc.subject.keywordAuthor | word scrambling task | - |
dc.subject.keywordAuthor | fine tune target PI task | - |
dc.subject.keywordAuthor | multi-fine-tuned BERT model | - |
dc.subject.keywordAuthor | fine-tuned model | - |
dc.subject.keywordAuthor | paraphrase data | - |
dc.subject.keywordAuthor | multitask fine-tuning | - |
dc.subject.keywordAuthor | fine-tuning similar tasks | - |
dc.subject.keywordAuthor | sentence-level paraphrase identification | - |
dc.subject.keywordAuthor | natural language processing tasks | - |
dc.subject.keywordAuthor | paraphrase relation | - |
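
Below is a minimal sketch of the two-stage ("twice") fine-tuning procedure the abstract describes, assuming the HuggingFace `transformers` and `datasets` libraries. The model name, the task choices (PAWS as the intermediate task, MRPC as the target PI task), and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def encode(split):
    # Tokenise sentence pairs into BERT's [CLS] s1 [SEP] s2 [SEP] input format.
    return split.map(
        lambda ex: tokenizer(
            ex["sentence1"], ex["sentence2"],
            truncation=True, padding="max_length", max_length=128,
        ),
        batched=True,
    )

def fine_tune(model, train_data, output_dir):
    # One fine-tuning stage; epochs and batch size are illustrative assumptions.
    args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=3,
        per_device_train_batch_size=32,
    )
    Trainer(model=model, args=args, train_dataset=train_data).train()
    return model

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Stage 1: fine-tune on an intermediate task. PAWS (paraphrase adversaries
# from word scrambling) is one of the intermediate tasks the abstract names.
paws = encode(load_dataset("paws", "labeled_final", split="train"))
model = fine_tune(model, paws, "stage1-paws")

# Stage 2: fine-tune the same weights again on the target PI task (MRPC).
mrpc = encode(load_dataset("glue", "mrpc", split="train"))
model = fine_tune(model, mrpc, "stage2-mrpc")
```

Because both stages here are binary sentence-pair classification, the same model object, including its classification head, can be carried from stage 1 into stage 2; an intermediate task with a different label space would require reinitialising the head between stages.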