Textual Backdoor Attack for the Text Classification System

Cited 6 time in webofscience Cited 0 time in scopus
  • Hit : 123
  • Download : 54
Deep neural networks provide good performance for image recognition, speech recognition, text recognition, and pattern recognition. However, such networks are vulnerable to backdoor attacks. In a backdoor attack, normal data that do not include a specific trigger are correctly classified by the target model, but backdoor data that include the trigger are incorrectly classified by the target model. One advantage of a backdoor attack is that the attacker can use a specific trigger to attack at a desired time. In this study, we propose a backdoor attack targeting the BERT model, which is a classification system designed for use in the text domain. Under the proposed method, the model is additionally trained on a backdoor sentence that includes a specific trigger, and afterward, if the trigger is attached before or after an original sentence, it will be misclassified by the model. In our experimental evaluation, we used two movie review datasets (MR and IMDB). The results show that using the trigger word "ATTACK" at the beginning of an original sentence, the proposed backdoor method had a 100% attack success rate when approximately 1.0% and 0.9% of the training data consisted of backdoor samples, and it allowed the model to maintain an accuracy of 86.88% and 90.80% on the original samples in the MR and IMDB datasets, respectively.</p>
Publisher
WILEY-HINDAWI
Issue Date
2021-10
Language
English
Article Type
Article
Citation

SECURITY AND COMMUNICATION NETWORKS, v.2021

ISSN
1939-0114
DOI
10.1155/2021/2938386
URI
http://hdl.handle.net/10203/289207
Appears in Collection
RIMS Journal Papers
Files in This Item
122359.pdf(2.8 MB)Download
This item is cited by other documents in WoS
⊙ Detail Information in WoSⓡ Click to see webofscience_button
⊙ Cited 6 items in WoS Click to see citing articles in records_button

qr_code

  • mendeley

    citeulike


rss_1.0 rss_2.0 atom_1.0