About the project
This project addresses malicious energy-draining attacks, which significantly increase the power consumption of deep learning algorithms, and contributes to energy-efficient artificial intelligence (AI). You will have opportunities for collaboration in academia and industry, including Cambridge, Microsoft, Nvidia, ARM, and Google DeepMind.
Building on sponge attacks, this project will look into:
- energy-draining attacks, a type of Denial-of-Service attack, and countermeasures
- novel deep Neural Network (NN) architectures for energy efficiency and security
- energy footprints as the basis for a novel approach to developing more sophisticated AI governance tools
Sponge examples are a category of input samples that cause Machine Learning (ML) models to exhibit abnormally high energy consumption or significant latency during inference. Moreover, the energy and timing behaviour these samples elicit gives attackers an opportunity to exploit it as side-channel information.
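As a toy illustration of the idea, the sketch below runs a naive random search for latency-inducing inputs against a small PyTorch model. Everything here is an assumption for illustration: the stand-in model, the search loop, and the use of wall-clock latency as a proxy for energy; it is not the published sponge-attack method.

```python
# Minimal sponge-search sketch (illustrative assumptions: a toy model,
# random search, and wall-clock latency as a stand-in for energy).
import time
import torch
import torch.nn as nn

# Small stand-in model; a real study would target a deployed network.
model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

def latency(x: torch.Tensor, repeats: int = 20) -> float:
    """Median wall-clock inference time for input x, in seconds."""
    times = []
    with torch.no_grad():
        for _ in range(repeats):
            start = time.perf_counter()
            model(x)
            times.append(time.perf_counter() - start)
    times.sort()
    return times[len(times) // 2]

# Random search: keep the candidate input that maximises measured latency.
best_x = torch.randn(1, 256)
best_t = latency(best_x)
for _ in range(200):
    candidate = best_x + 0.1 * torch.randn_like(best_x)
    t = latency(candidate)
    if t > best_t:
        best_x, best_t = candidate, t

print(f"most latency-inducing input found: {best_t * 1e6:.1f} us per forward pass")
```

The sponge-examples literature typically uses stronger search, such as genetic algorithms or gradient-based optimisation, and measures cost with hardware counters or simulators; on dense hardware, input-dependent timing differences can be small, so the effect is largest where computation is data-dependent (e.g. sparsity-exploiting accelerators or dynamic architectures).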
From an attacker's standpoint, the project's goal is to harness this side-channel data alongside other attacks, such as model inversion, to extract model architectures and even model parameters. We are also interested in whether these attacks remain effective on potentially game-changing model architectures such as Kolmogorov-Arnold Networks (KANs) and neuromorphic models.
From the defender's perspective, the energy footprint of executing a particular model could act as its fingerprint. Despite substantial efforts to develop appropriate governance methods, the opaque 'black box' nature of many AI-based systems leads to information asymmetries between developers, users, and policymakers, constraining effective governance.
The project will also investigate how these unique performance or energy traces can serve as fingerprints to develop more sophisticated AI governance tools.
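A minimal sketch of this fingerprinting idea, under stated assumptions, follows: per-layer wall-clock timings stand in for an energy trace, the two stand-in models are hypothetical, and cosine similarity is an illustrative matching rule rather than an established governance mechanism.

```python
# Hypothetical fingerprinting sketch: a per-layer timing vector acts as a
# model's fingerprint (assumptions: toy models, wall-clock time as a proxy
# for energy, cosine similarity as the matching rule).
import time
import torch
import torch.nn as nn
import torch.nn.functional as F

def timing_fingerprint(model: nn.Sequential, x: torch.Tensor,
                       repeats: int = 50) -> torch.Tensor:
    """Mean wall-clock time of each layer's forward pass."""
    totals = torch.zeros(len(model))
    with torch.no_grad():
        for _ in range(repeats):
            h = x
            for i, layer in enumerate(model):
                start = time.perf_counter()
                h = layer(h)
                totals[i] += time.perf_counter() - start
    return totals / repeats

# Two stand-in models with different per-layer cost profiles.
model_a = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 10))
model_b = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))
model_a.eval(); model_b.eval()

x = torch.randn(1, 256)
fp_a = timing_fingerprint(model_a, x)
fp_b = timing_fingerprint(model_b, x)

# An observed trace from a deployment of model_a should match fp_a best.
observed = timing_fingerprint(model_a, x)
print("match vs A:", F.cosine_similarity(observed, fp_a, dim=0).item())
print("match vs B:", F.cosine_similarity(observed, fp_b, dim=0).item())
```

In practice, a defender would record traces with hardware energy counters (e.g. RAPL on CPUs or NVML on NVIDIA GPUs) rather than wall-clock time, and would need fingerprints that are robust to system load and hardware variation.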
In addition to the University of Southampton supervisors, this project has the following external supervisor:
- Dr Aaron Zhao, Imperial College London