About the project
In the wake of growing data privacy concerns and the enactment of the GDPR, Federated Learning (FL) has emerged as a leading privacy-preserving technology in Machine Learning. Despite these advances, FL systems are not immune to privacy breaches: deep learning models inherently memorise parts of their training data. Such vulnerabilities expose FL systems to a range of privacy attacks, making the study of privacy in distributed settings increasingly complex and vital.
This project investigates the interplay between attack methods, such as Membership Inference and Property Inference, and defensive techniques, such as Differential Privacy and Machine Unlearning, in Federated Learning environments. It also identifies potential synergies across disciplines. The project's outcomes aim to improve the security, dependability, and trustworthiness of AI applications.
The project will be conducted in collaboration with an interdisciplinary team, including industry experts and academics from the following universities:
- University of Birmingham
- Newcastle University
- University of Cambridge
- National University of Singapore
Candidates may choose from, but are not limited to, the following research topics:
- Machine Unlearning (an emerging privacy-preserving technique) for AI applications based on tabular data
- Machine Unlearning for Federated Learning systems
- Privacy attacks in Machine/Federated Learning (if you are more interested in mounting attacks than in building defences, see The Impact of Adversarial Attacks on Federated Learning: A Survey)
- Federated Learning for Smart Home applications
- Adversarial attacks on Large Language Models