Machine learning for security and security for machine learning
Author(s)
Zizzo, Giulio
Type
Thesis or dissertation
Abstract
In this thesis we analyse test-time adversarial examples for machine learning in security domains. First, we consider adversarial examples for autoregressive machine learning models employed for intrusion detection. We take industrial control systems (ICS) as a use case, and develop attack algorithms that successfully overcome the domain-specific challenges found in ICS. We test our attack on an ICS dataset and demonstrate that an adversary can evade state-of-the-art intrusion detection systems. Secondly, we analyse threats posed in federated learning, which offers adversaries new ways to subvert defensive algorithms. Specifically, we are interested in the interaction of adversarial training with federated learning. To that end, we examine adversarial training under convergence attacks, and under a novel attack objective that stealthily produces a brittle version of adversarial training. We perform initial validation on benchmark image datasets, and then consider malware detection as a security domain in which there is strong motivation to use federated learning. For our final contribution, we switch to the defender's perspective and develop an algorithm called Deep Latent Defence. Our algorithm analyses the intermediate representations of data as it travels through a neural network. We show this offers strong defensive performance even against adaptive adversaries.
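Note on terminology: a test-time adversarial example is an input perturbed at inference time so that a trained model misclassifies it. As a purely illustrative sketch, not an algorithm from the thesis, the following Python/PyTorch snippet implements the standard fast gradient sign method (FGSM); the model, labels, and epsilon value are hypothetical placeholders, and the thesis' own ICS and federated-learning attacks are substantially more involved.

import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module,
                 x: torch.Tensor,
                 y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    # One signed-gradient ascent step on the loss, bounded by epsilon,
    # yields a perturbed input the model is more likely to misclassify.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()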
Version
Open Access
Date Issued
2021-04
Date Awarded
2021-09
Copyright Statement
Creative Commons Attribution NonCommercial Licence
Advisor
Hankin, Christopher
Sponsor
Engineering and Physical Sciences Research Council
Airbus Industrie
Publisher Department
Computing
Publisher Institution
Imperial College London
Qualification Level
Doctoral
Qualification Name
Doctor of Philosophy (PhD)