DarkneTZ: towards model privacy at the edge using trusted execution
environments
File(s): 2004.05703v1.pdf (1.29 MB)
Author(s)
Type
Working Paper
Abstract
We present DarkneTZ, a framework that uses an edge device's Trusted Execution
Environment (TEE) in conjunction with model partitioning to limit the attack
surface against Deep Neural Networks (DNNs). Increasingly, edge devices
(smartphones and consumer IoT devices) are equipped with pre-trained DNNs for a
variety of applications. This trend comes with privacy risks as models can leak
information about their training data through effective membership inference
attacks (MIAs). We evaluate the performance of DarkneTZ, including CPU
execution time, memory usage, and accurate power consumption, using two small
and six large image classification models. Due to the limited memory of the
edge device's TEE, we partition model layers into more sensitive layers (to be
executed inside the device TEE), and a set of layers to be executed in the
untrusted part of the operating system. Our results show that even if a single
layer is hidden, we can provide reliable model privacy and defend against
state-of-the-art MIAs, with only a 3% performance overhead. When fully utilizing the
TEE, DarkneTZ provides model protections with up to 10% overhead.
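The layer-partitioning idea described in the abstract can be sketched as follows. This is a hypothetical illustration only, not the paper's actual implementation: the function name `partition_layers` and the example layer names are invented here to show the split between layers run in the untrusted OS and layers run inside the TEE.

```python
# Hypothetical sketch of DarkneTZ-style layer partitioning: the last
# n_trusted layers (closest to the output, and thus most exposed to
# membership inference attacks) are marked for execution inside the
# device's TEE, while the remaining layers run in the untrusted OS.
def partition_layers(layers, n_trusted=1):
    """Split an ordered list of layers into (untrusted, trusted)."""
    if not 0 <= n_trusted <= len(layers):
        raise ValueError("n_trusted out of range")
    cut = len(layers) - n_trusted
    return layers[:cut], layers[cut:]

model = ["conv1", "pool1", "conv2", "pool2", "fc1", "softmax"]
untrusted, trusted = partition_layers(model, n_trusted=1)
print(untrusted)  # executed in the untrusted part of the OS
print(trusted)    # executed inside the TEE
```

Hiding only the final layer (as in the 3% overhead result quoted above) corresponds to `n_trusted=1`; protecting more layers trades additional TEE memory and overhead for stronger protection.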
Date Issued
2020-04-12
Citation
2020
Publisher
arXiv
Copyright Statement
© 2020 The Author(s)
Sponsor
Engineering & Physical Science Research Council (EPSRC)
Identifier
http://arxiv.org/abs/2004.05703v1
Grant Number
EP/N028260/2
RGS128099 (EP/R03351X/1)
Subjects
cs.LG
cs.CR
stat.ML
Notes
13 pages, 8 figures, accepted to ACM MobiSys 2020
Publication Status
Published