Methods for improving robustness against adversarial machine learning attacks

Authors
McCarthy, Andrew (Andrew6.Mccarthy@uwe.ac.uk), Senior Lecturer in Cyber Security
Abstract
Machine learning systems can improve the efficiency of real-world tasks, including in the cyber security domain. However, these models are susceptible to adversarial attacks, and an arms race exists between adversaries and defenders. The benefits of these systems have been accepted without fully considering their vulnerabilities, resulting in the deployment of vulnerable machine learning models in adversarial environments. For example, intrusion detection systems are relied upon to discern accurately between malicious and benign traffic, but can be fooled into allowing malware onto a network.

Robustness is the stability of performance in well-trained models facing adversarial examples. This thesis tackles the urgent problem of improving the robustness of machine learning models, enabling safer deployment in adversarial domains. The logical outputs of this research are countermeasures against adversarial examples. The original contributions to knowledge are: a survey of adversarial machine learning in the cyber security domain; a generalizable approach for feature vulnerability and robustness assessment; and a constraint-based method of generating transferable, functionality-preserving adversarial examples in an intrusion detection domain. Novel defences against adversarial examples are presented: feature selection with recursive feature elimination, and hierarchical classification.

Machine learning classifiers can be used in both visual and non-visual domains, yet most research in adversarial machine learning considers the visual domain. A primary focus of this work is how adversarial attacks can be effectively used in non-visual domains, such as cyber security. For example, attackers may exploit weaknesses in an intrusion detection system's classifier, enabling an intrusion to masquerade as benign traffic. Easily fooled systems are of limited use in critical areas such as cyber security.
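The recursive feature elimination defence named above can be sketched in a few lines. This is a minimal, hypothetical illustration of the general technique, not the thesis's implementation: a ranking function scores the remaining features each round, and the weakest one is dropped. The `variance_rank` scorer below is an assumed stand-in; in practice a model-derived importance (e.g. classifier weights) would be used.

```python
def recursive_feature_elimination(X, y, rank_features, n_keep):
    """Drop the lowest-ranked feature each round until n_keep remain.

    X: list of sample rows, y: labels (passed through to the ranker),
    rank_features: callable giving one importance score per column.
    Returns the indices of the surviving features.
    """
    remaining = list(range(len(X[0])))
    while len(remaining) > n_keep:
        cols = [[row[i] for i in remaining] for row in X]
        scores = rank_features(cols, y)
        weakest = min(range(len(remaining)), key=lambda i: scores[i])
        del remaining[weakest]
    return remaining


def variance_rank(rows, y):
    """Toy ranker: score each feature by its variance (y is unused)."""
    scores = []
    for j in range(len(rows[0])):
        vals = [row[j] for row in rows]
        mean = sum(vals) / len(vals)
        scores.append(sum((v - mean) ** 2 for v in vals) / len(vals))
    return scores


# A constant feature (column 1) carries no signal and is eliminated first.
X = [[1, 0, 5], [2, 0, 3], [3, 0, 1], [4, 0, 2]]
y = [0, 0, 1, 1]
kept = recursive_feature_elimination(X, y, variance_rank, n_keep=2)
print(kept)  # → [0, 2]: the constant column 1 is dropped
```

The elimination loop is agnostic to the ranking function, which is what makes the approach generalizable across classifiers.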
In future, more sophisticated adversarial attacks could be used by ransomware and malware authors to evade detection by machine learning intrusion detection systems.
Experiments in this thesis focus on intrusion detection case studies and use Python code and libraries: the CleverHans and Adversarial Robustness Toolbox (ART) libraries to generate adversarial examples, and the HiClass library to facilitate hierarchical classification. An adversarial arms race is playing out in cyber security: every time defences are improved, adversaries find new ways to breach networks. Currently, one of the most critical holes in defences is adversarial examples. This thesis examines the problem of robustness against adversarial examples for machine learning systems and contributes novel countermeasures, aiming to enable the deployment of machine learning in critical domains.
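To make the attack side concrete, the fast gradient sign method (FGSM), one of the attacks implemented by libraries such as CleverHans and ART, can be sketched for a linear scorer f(x) = w·x, where the input gradient is known in closed form. This is a hedged illustration under assumed names and a simple margin loss, not code from the thesis.

```python
def sign(v):
    """Return -1, 0, or +1 according to the sign of v."""
    return (v > 0) - (v < 0)


def fgsm_linear(x, w, y_true, eps):
    """FGSM step for a linear scorer f(x) = w . x with label y in {-1, +1}.

    Under the margin loss L = -y * f(x), the gradient w.r.t. the input
    is -y * w, so each feature moves eps in the direction sign(-y * w_i),
    which increases the loss (pushes the score away from the true label).
    """
    return [xi + eps * sign(-y_true * wi) for xi, wi in zip(x, w)]


w = [1.0, -2.0]   # classifier weights
x = [0.5, 0.5]    # original (benign-looking) sample
x_adv = fgsm_linear(x, w, y_true=+1, eps=0.1)
score = lambda v: sum(wi * vi for wi, vi in zip(w, v))
print(score(x), score(x_adv))  # the score for the true class drops
```

Real attacks in the thesis's intrusion detection setting must additionally respect feature constraints so the perturbed traffic remains functional, which is exactly what the constraint-based generation method addresses.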
| Thesis Type | Thesis |
|---|---|
| Deposit Date | Mar 1, 2023 |
| Publicly Available Date | Aug 25, 2023 |
| Public URL | https://uwe-repository.worktribe.com/output/10492055 |
| Award Date | Aug 25, 2023 |
Files
Methods for improving robustness against adversarial machine learning attacks (PDF, 31.3 MB)
You might also like
Shouting through letterboxes: A study on attack susceptibility of voice assistants
(2020)
Presentation / Conference Contribution
Feature vulnerability and robustness assessment against adversarial machine learning attacks
(2021)
Presentation / Conference Contribution