Indiscriminate data poisoning against supervised learning: general attack formulations, robust defences, and poisonability
File(s)
Carnerero-Cano-J-2024-PhD-Thesis.pdf (5.97 MB)
Thesis
Author(s)
Carnerero Cano, Javier
Type
Thesis or dissertation
Abstract
Machine learning (ML) systems often rely on data collected from untrusted sources, such as humans or sensors, which can be compromised. These scenarios expose ML algorithms to data poisoning attacks, where adversaries manipulate a fraction of the training data to degrade the ML system's performance. However, previous work lacks a systematic evaluation of attacks that takes the ML pipeline into account, and focuses on classification settings. This is concerning, since regression models are also applied in safety-critical systems.

We characterise indiscriminate data poisoning attacks and defences against supervised learning algorithms in worst-case scenarios, considering the full ML pipeline: data sanitisation, hyperparameter learning, and training.

We propose a novel attack formulation that accounts for the effect of the attack on the model's hyperparameters. We apply this formulation to several ML classifiers with L2 and L1 regularisation. Our evaluation shows the benefits of regularisation in mitigating poisoning attacks when the hyperparameters are learnt on a trusted dataset; a minimal sketch of such a hyperparameter-aware attack follows.
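The sketch below is illustrative, not the thesis's algorithm: it optimises a single poisoning point by finite-difference gradient ascent against a toy ridge regressor whose regularisation strength is re-learnt on a small trusted set at every attacker evaluation. The grid-search hyperparameter learner, the feasible box, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: theta = (X'X + lam*I)^{-1} X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def learn_lam(X, y, X_tru, y_tru, grid=(1e-3, 1e-2, 1e-1, 1.0, 10.0)):
    # Hyperparameter learning: pick the lambda whose trained model best
    # fits a small trusted set (hypothetical grid search).
    losses = [np.mean((X_tru @ ridge_fit(X, y, l) - y_tru) ** 2) for l in grid]
    return grid[int(np.argmin(losses))]

def attacker_obj(xp, yp, X, y, X_tru, y_tru, X_val, y_val):
    # Attacker's objective: validation MSE of the model trained on the
    # poisoned set, with lambda re-learnt each time, so the attack's
    # effect on the hyperparameter is taken into account.
    Xp, yp_all = np.vstack([X, xp[None, :]]), np.append(y, yp)
    lam = learn_lam(Xp, yp_all, X_tru, y_tru)
    theta = ridge_fit(Xp, yp_all, lam)
    return np.mean((X_val @ theta - y_val) ** 2)

def make(n):
    X = rng.normal(size=(n, 1))
    return X, 2.0 * X[:, 0] + 0.1 * rng.normal(size=n)

X, y = make(50)          # clean training data
X_tru, y_tru = make(10)  # small trusted set for hyperparameter learning
X_val, y_val = make(50)  # clean validation data the attacker wants to hurt

xp, yp, eps, lr = np.zeros(1), 0.0, 1e-4, 0.5
for _ in range(100):     # finite-difference gradient ASCENT on the attacker loss
    gx = (attacker_obj(xp + eps, yp, X, y, X_tru, y_tru, X_val, y_val)
          - attacker_obj(xp - eps, yp, X, y, X_tru, y_tru, X_val, y_val)) / (2 * eps)
    gy = (attacker_obj(xp, yp + eps, X, y, X_tru, y_tru, X_val, y_val)
          - attacker_obj(xp, yp - eps, X, y, X_tru, y_tru, X_val, y_val)) / (2 * eps)
    xp = np.clip(xp + lr * gx, -5.0, 5.0)          # keep the point in a feasible box
    yp = float(np.clip(yp + lr * gy, -5.0, 5.0))

print("optimised poisoning point:", xp, yp)
```

The point of the sketch is the bilevel coupling: each attacker update changes the training set, which changes the learnt regularisation strength, which in turn changes the model whose validation error the attacker is maximising.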

We then introduce a threat model for poisoning attacks against regression models, and propose a novel stealthy attack formulation via multiobjective bilevel optimisation, where the two objectives are attack effectiveness and detectability. We show experimentally that state-of-the-art defences do not mitigate these stealthy attacks, and we provide a theoretical justification for the detectability objective and the methodology designed. We also propose a novel defence, built upon Bayesian linear regression, that rejects points based on the model's predictive variance; a sketch of such a defence follows. We show empirically that it is effective at mitigating both stealthy attacks and attacks with a large fraction of poisoning points.
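The abstract does not spell out the rejection rule, so here is a hedged sketch of one plausible instantiation of a predictive-variance defence, assuming points are rejected when their labels fall more than k predictive standard deviations from the Bayesian linear regression predictive mean. The prior and noise precisions, the threshold k, and the function names are all assumptions.

```python
import numpy as np

def blr_posterior(X, y, alpha=1.0, beta=25.0):
    # Bayesian linear regression with prior N(0, alpha^{-1} I) and noise
    # precision beta: posterior covariance S and mean m.
    S = np.linalg.inv(alpha * np.eye(X.shape[1]) + beta * X.T @ X)
    m = beta * S @ X.T @ y
    return m, S

def predictive(x, m, S, beta=25.0):
    # Predictive mean and variance at x: mu = m'x, var = 1/beta + x'Sx.
    return m @ x, 1.0 / beta + x @ S @ x

def filter_points(X, y, k=3.0):
    # Keep points whose label lies within k predictive standard deviations
    # of the predictive mean; reject the rest as suspected poisoning.
    m, S = blr_posterior(X, y)
    keep = []
    for xi, yi in zip(X, y):
        mu, var = predictive(xi, m, S)
        keep.append(abs(yi - mu) <= k * np.sqrt(var))
    return np.array(keep)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0] + 0.2 * rng.normal(size=100)
y[:5] = -10.0                            # 5 points with strongly shifted labels
mask = filter_points(X, y)
print("rejected indices:", np.where(~mask)[0])   # ideally 0..4
```

In this reading, the predictive variance supplies a point-wise scale for how large a residual is tolerable, so points in sparse regions of the input space are judged more leniently than points in dense ones.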

Finally, we introduce the concept of "poisonability", which allows us to find the number of poisoning points required so that, on the poisoned model, the mean error of the clean points matches the mean error of the poisoning points. This challenges the underlying assumption of most defences. Specifically, we determine the poisonability of linear regression.
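Going only by this wording, the definition suggests a formalisation along the following lines; the notation is an assumption, not taken from the thesis.

```latex
% Hedged formalisation (notation assumed): D_c has n clean points, D_p has
% p poisoning points, \theta^{*}(D) is the model trained on D, and \ell is
% a per-point error such as the squared error.
\[
  \hat{\theta} = \theta^{*}\!\left(D_c \cup D_p\right), \qquad
  \frac{1}{n} \sum_{(x,y)\in D_c} \ell\!\left(\hat{\theta}; x, y\right)
  \;=\;
  \frac{1}{p} \sum_{(x,y)\in D_p} \ell\!\left(\hat{\theta}; x, y\right).
\]
% Poisonability: the smallest p for which some D_p of size p achieves this equality.
```

Once this equality holds, a defence that assumes poisoning points incur larger error than clean points on the poisoned model can no longer separate the two groups by their mean errors.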
Version
Open Access
Date Issued
2023-12
Date Awarded
2024-07
URI
http://hdl.handle.net/10044/1/113890
DOI
https://doi.org/10.25560/113890
Copyright Statement
Creative Commons Attribution-NonCommercial 4.0 Licence
License URL
https://creativecommons.org/licenses/by-nc/4.0/
Advisor
Lupu, Emil
Muñoz González, Luis
Sponsor
Defence Science and Technology Laboratory (Great Britain)
Grant Number
DSTLX-1000120987
Publisher Department
Computing
Publisher Institution
Imperial College London
Qualification Level
Doctoral
Qualification Name
Doctor of Philosophy (PhD)