Project overview
Safety-critical systems that use artificial intelligence (AI) present both significant challenges and opportunities. This class of AI systems carries a particular risk of real, consequential harms. Working across borders in the UK, US, and Australia, our team will deeply explore AI safety risks for technologies from three sectors: aerospace, maritime, and communication. From these case studies, we will characterise technical and regulatory gaps and propose solutions that can mitigate safety risks. Bridging the wider scientific community with international government stakeholders will allow us to positively impact how AI in safety-critical systems is developed and regulated.
Staff
Lead researchers