Privacy-preserving deep-learning framework based on a trusted execution environment

Abstract
Deep learning (DL) is receiving huge attention as an enabling technique for emerging mobile and IoT applications. It is common practice to conduct DNN model-based inference using cloud services due to its high computation and memory cost. However, such cloud-offloaded inference raises serious privacy concerns: malicious external attackers or untrustworthy internal administrators of clouds may leak highly sensitive and private data such as images, voice, and text. In this paper, we propose Occlumency, a novel cloud-driven solution designed to protect user privacy without compromising the benefit of using powerful cloud resources to run highly accurate, large DNN models at low latency. Occlumency leverages a secure SGX enclave to preserve the confidentiality and integrity of user data throughout the entire DL inference process. DL inference in an SGX enclave, however, imposes a severe performance challenge: our motivational study shows that naive DL inference inside SGX is 6.4x slower than in the native environment. To accelerate DL inference inside the enclave, we designed a suite of novel techniques to overcome the enclave's limited physical memory space and inefficient page swapping, which are the leading causes of the performance degradation. We implemented Occlumency based on Caffe in both Linux and Windows environments. Our experiments with various DNN models show that Occlumency improves inference speed by 3.6x over baseline DL inference in SGX and achieves secure DL inference within 72% latency overhead compared to inference in the native environment.
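The reported figures can be related with a quick back-of-the-envelope calculation (an illustrative sketch, not part of the thesis): starting from the 6.4x slowdown of naive in-enclave inference, a 3.6x speedup leaves roughly 6.4 / 3.6 ≈ 1.78x native latency, in the same range as the reported 72% overhead (1.72x).

```python
# Illustrative arithmetic relating the abstract's reported speedups.
naive_slowdown = 6.4      # naive SGX-enclave inference vs. native
occlumency_speedup = 3.6  # Occlumency vs. naive SGX-enclave inference

# Latency relative to native after Occlumency's optimizations
remaining = naive_slowdown / occlumency_speedup
print(f"~{remaining:.2f}x native latency "
      f"(~{(remaining - 1) * 100:.0f}% overhead)")
```

The small gap between this estimate (~78%) and the reported 72% overhead is expected, since the per-model speedups in the evaluation vary around the 3.6x average.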
Advisors
Song, Joonhwa (송준화)
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2019
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Computing, 2019.2, iv, 41 p.

Keywords

Mobile/IoT deep-learning; privacy; trusted execution environment; cloud offloading; resource-efficient deep-learning inference

URI
http://hdl.handle.net/10203/267054
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=843533&flag=dissertation
Appears in Collection
CS-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
