Research project

RAI UK International Partnership: AI Regulation Assurance for Safety-Critical Systems

Project overview

Safety-critical systems that use artificial intelligence (AI) present both challenges and opportunities. This class of AI systems carries a particular risk of real, consequential harm. Working across the UK, US, and Australia, our team will explore in depth the AI safety risks of technologies from three sectors: aerospace, maritime, and communication. From these case studies, we will characterise technical and regulatory gaps and propose solutions that can mitigate safety risks. Bridging the wider scientific community with international government stakeholders will allow us to positively influence how AI in safety-critical systems is developed and regulated.

Staff

Lead researchers

Dr Jennifer Williams

Lecturer

Research interests

  • Responsible and trustworthy audio processing applied to a variety of domains and use cases;
  • Audio AI safety in terms of usability, privacy, and security;
  • Ethical issues of trust for audio AI (deepfake detection, voice-related rights, and speaker and content privacy).

Collaborating research institutes, centres and groups

Research outputs