Mind control attack: Undermining deep learning with GPU memory exploitation

Cited 7 times in Web of Science · Cited 2 times in Scopus
Modern deep learning frameworks rely heavily on GPUs to accelerate computation. However, the security implications of GPU device memory exploitation for deep learning frameworks have been largely neglected. In this paper, we argue that GPU device memory manipulation is a novel attack vector against deep learning systems. We present a novel attack method that leverages this vector: by degrading prediction accuracy, it renders a model's outputs no better than random guessing. To the best of our knowledge, we are the first to demonstrate a practical attack that directly exploits deep learning frameworks through GPU memory manipulation. We confirmed that our attack works on three popular deep learning frameworks running on CUDA: TensorFlow, CNTK, and Caffe. Finally, we propose potential defense mechanisms against our attack and discuss broader concerns about GPU memory safety. (c) 2020 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)
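The abstract claims that corrupting model state in GPU device memory degrades predictions to the level of random guessing. The sketch below is not the paper's attack; it is a minimal NumPy simulation (all names and the toy model are assumptions) of the claimed *effect*: a linear classifier whose weight buffer, standing in for a tensor resident in GPU memory, is overwritten with random values, after which accuracy collapses to roughly chance (1/10 for 10 classes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" model: 10 classes, identity weights classify
# near-noiseless one-hot inputs almost perfectly.
n_classes, dim, n_samples = 10, 10, 500
W_clean = np.eye(n_classes, dim)  # stands in for a weight tensor in GPU memory
labels = rng.integers(0, n_classes, n_samples)
X = np.eye(dim)[labels] + 0.1 * rng.standard_normal((n_samples, dim))

def accuracy(W):
    preds = np.argmax(X @ W.T, axis=1)
    return float(np.mean(preds == labels))

clean_acc = accuracy(W_clean)

# Simulated corruption: overwrite the weight buffer wholesale, as a
# GPU-memory manipulation would; predictions become ~random guessing.
W_corrupt = rng.standard_normal(W_clean.shape)
corrupt_acc = accuracy(W_corrupt)
```

With the fixed seed, `clean_acc` is near 1.0 while `corrupt_acc` hovers around 0.1, i.e. chance level for 10 classes — which is the degradation the abstract describes, here reproduced only at the level of its observable effect, not its memory-exploitation mechanism.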
Publisher
ELSEVIER ADVANCED TECHNOLOGY
Issue Date
2021-03
Language
English
Article Type
Article
Citation

COMPUTERS & SECURITY, v.102

ISSN
0167-4048
DOI
10.1016/j.cose.2020.102115
URI
http://hdl.handle.net/10203/281609
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
