Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

File: 1802.03041v1.pdf (Accepted version, 377.62 kB, Adobe PDF)
Title: Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
Authors: Paudice, A
Muñoz-González, L
Gyorgy, A
Lupu, EC
Item Type: Working Paper
Abstract: Machine learning has become an important component of many systems and applications, including computer vision, spam filtering, malware detection, and network intrusion detection, among others. Despite the capabilities of machine learning algorithms to extract valuable information from data and produce accurate predictions, it has been shown that these algorithms are vulnerable to attacks. Data poisoning is one of the most relevant security threats against machine learning systems, whereby attackers can subvert the learning process by injecting malicious samples into the training data. Recent work in adversarial machine learning has shown that so-called optimal attack strategies can successfully poison linear classifiers, dramatically degrading the performance of the system after compromising only a small fraction of the training dataset. In this paper we propose a defence mechanism, based on outlier detection, to mitigate the effect of these optimal poisoning attacks. We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered when crafting the attack. Hence, they can be detected with an appropriate pre-filtering of the training dataset.
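The abstract describes pre-filtering the training set with an outlier detector before fitting the classifier. The record does not specify the detector used in the paper, so the following is only a minimal sketch of the general idea, using a hypothetical distance-to-class-centroid rule: a training point is flagged if its distance to its class centroid exceeds the class mean distance plus a few standard deviations.

```python
import numpy as np

def prefilter_outliers(X, y, k=3.0):
    """Flag training points far from their class centroid.

    Hypothetical distance-based pre-filter (the paper's actual detector
    may differ): a point is suspicious if its Euclidean distance to its
    class centroid exceeds the class mean distance plus k standard
    deviations. Returns a boolean mask of points to keep.
    """
    keep = np.ones(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        threshold = dists.mean() + k * dists.std()
        keep[idx[dists > threshold]] = False
    return keep

# Toy example: 100 genuine points near the origin plus one injected
# point far away, mimicking an unconstrained poisoning sample.
rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(100, 2))
poison = np.array([[12.0, 12.0]])
X = np.vstack([genuine, poison])
y = np.zeros(len(X), dtype=int)

keep = prefilter_outliers(X, y, k=3.0)  # the injected point is flagged
X_clean, y_clean = X[keep], y[keep]     # train the classifier on these
```

Because the optimal attack points are crafted without detectability constraints, they tend to lie far from the genuine data distribution, which is exactly the regime where such a simple pre-filter is effective.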
Issue Date: 31-Dec-2018
Copyright Statement: © The Authors
Keywords: stat.ML
Notes: 10 pages, 3 figures
Appears in Collections: Faculty of Engineering

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
