Defending against poisoning attacks in federated learning with blockchain
File(s)
Author(s)
Type
Journal Article
Abstract
In the era of deep learning, federated learning (FL) presents a promising approach that allows multi-institutional data owners, or clients, to collaboratively train machine learning models without compromising data privacy. However, most existing FL approaches rely on a centralized server for global model aggregation, leading to a single point of failure. This makes the system vulnerable to malicious attacks when dealing with dishonest clients. In this work, we address this problem by proposing a secure and reliable FL system based on blockchain and distributed ledger technology. Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors. Both theoretical and empirical analyses are presented to demonstrate the effectiveness of the proposed approach, showing that our framework is robust against malicious client-side behaviors.
Date Issued
2024-07
Date Acceptance
2024-03-08
Citation
IEEE Transactions on Artificial Intelligence, 2024, 5 (7), pp. 3743-3756
ISSN
2691-4581
Publisher
Institute of Electrical and Electronics Engineers
Start Page
3743
End Page
3756
Journal / Book Title
IEEE Transactions on Artificial Intelligence
Volume
5
Issue
7
Copyright Statement
© 2024 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/
Identifier
https://ieeexplore.ieee.org/abstract/document/10471193
Publication Status
Published
Date Publish Online
2024-03-18