PPFL: privacy-preserving federated learning with trusted execution environments
File(s)
2104.14380v1.pdf (1.03 MB)
Accepted version
Author(s)
Type
Conference Paper
Abstract
We propose and implement a Privacy-preserving Federated Learning (PPFL)
framework for mobile systems to limit privacy leakage in federated learning.
Leveraging the widespread presence of Trusted Execution Environments (TEEs) in
high-end and mobile devices, we use TEEs on clients for local training and on
servers for secure aggregation, so that model/gradient updates are hidden from
adversaries. To address the limited memory of current TEEs, we employ greedy
layer-wise training, training each layer of the model inside the trusted area
until it converges. The performance evaluation of our implementation shows
that PPFL can significantly improve privacy while incurring small system
overheads on the client side. In particular, PPFL can successfully defend the
trained model against data reconstruction, property inference, and membership
inference attacks. Furthermore, it can achieve comparable model utility with
fewer communication rounds (0.54x) and a similar amount of network traffic
(1.002x) compared to the standard federated learning of a complete model. This
is achieved while introducing at most ~15% CPU time, ~18% memory usage, and
~21% energy consumption overhead on PPFL's client side.
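The greedy layer-wise training described in the abstract can be sketched in plain PyTorch. This is a minimal illustration under assumptions, not the paper's TEE-based implementation: the toy data, layer sizes, fixed step budget, and the features helper are all illustrative. In PPFL, each layer's forward/backward pass would run inside the client's TEE, and only the converged layer would be released for secure aggregation.

```python
# Minimal sketch of greedy layer-wise training (illustrative only; not the
# paper's TEE-based implementation). Each hidden layer is trained with a
# temporary classifier head, then frozen before the next layer is trained,
# keeping the trainable footprint at each step small -- the property that
# lets a single layer's training fit inside a memory-limited TEE.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 32)          # toy inputs (assumed data)
y = torch.randint(0, 10, (256,))  # toy labels (assumed data)

layers = [
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
]

frozen = []  # layers already trained and frozen

def features(x):
    # Forward pass through the frozen prefix of the network.
    for l in frozen:
        x = l(x)
    return x

loss_fn = nn.CrossEntropyLoss()
for layer in layers:
    head = nn.Linear(layer[0].out_features, 10)  # temporary classifier head
    opt = torch.optim.SGD(
        list(layer.parameters()) + list(head.parameters()), lr=0.1)
    for step in range(200):  # fixed budget as a stand-in for "until convergence"
        with torch.no_grad():
            h = features(X)   # features from already-frozen earlier layers
        loss = loss_fn(head(layer(h)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    for p in layer.parameters():  # freeze the layer before training the next one
        p.requires_grad_(False)
    frozen.append(layer)
    print(f"layer trained, final loss={loss.item():.3f}")
```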
Date Issued
2021-06-24
Date Acceptance
2021-06-01
Citation
MobiSys '21, 2021, pp. 94-108
Publisher
ACM
Start Page
94
End Page
108
Copyright Statement
© 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Sponsor
Engineering & Physical Science Research Council (EPSRC)
Identifier
http://arxiv.org/abs/2104.14380v1
Grant Number
EP/R511547/1
EP/N028260/2
EP/R0222091/1
RGS128099 (EP/R03351X/1)
PO: 20213246 (Ref: 301671)
EP/V502354/1
Source
Mobile Systems, Applications, and Services (MobiSys) conference
Subjects
cs.CR
cs.DC
cs.LG
Notes
15 pages, 8 figures, accepted to MobiSys 2021
Publication Status
Published
Start Date
2021-06-24
Coverage Spatial
New York, NY, United States
Date Publish Online
2021-06-24