About
The technology landscape for audio AI is changing rapidly. The stakes are raised when we consider that voice and audio will soon be used in almost every facet of daily life. My work is dedicated to supporting the development of audio AI that is responsible and trustworthy, and helping to ensure audio technology is fit for AI regulation.
Research
Research groups
Research interests
- Responsible and trustworthy audio processing applied to a variety of domains and use-cases
- Audio AI safety in terms of usability, privacy, and security
- Ethical issues of trust for audio AI (deepfake detection, voice-related rights, and speaker and content privacy)
Current research
Dr Williams conducts research in responsible and trustworthy audio processing applied to a variety of domains and use-cases. Along these lines, her research also addresses issues of audio AI safety in terms of usability, privacy, and security. These issues overlap with processing on edge devices (e.g., ultra-low-power devices). Her work addresses ethical issues of trust for audio AI, spanning a breadth of topics such as deepfake detection, voice-related rights, and speaker and content privacy.
Research projects
Active projects
Completed projects
Publications
Teaching
Speech Signal Processing
Machine Learning / Deep Learning
Natural Language Processing
Ethics and Controversies in Speech / NLP
External roles and responsibilities
Biography
Dr Jennifer Williams is an Assistant Professor in Electronics and Computer Science, and an Associate Scientific Advisor for the Alan Turing Institute BridgeAI programme. Her research explores the creation of trustworthy, private, and secure speech/audio solutions. Dr Williams leads several large interdisciplinary projects through the UKRI Trustworthy Autonomous Systems Hub (TAS Hub), including voice anonymisation, trustworthy audio, speech paralinguistics for medical applications, and AI regulation. She also leads an RAI UK International Partnership between the UK, US, and Australia on "AI Regulation Assurance for Safety-Critical Systems" across sectors. She completed her PhD at the University of Edinburgh on representation learning for speech signal disentanglement, showing that this approach is valuable for a variety of speech technology applications (voice conversion, speech synthesis, anti-spoofing, naturalness assessment, and privacy). Before her doctoral work, she was a technical staff member at MIT Lincoln Laboratory for five years, where she developed rapid prototyping solutions for text and speech technology. She has conducted research in Tokyo, Japan, the United States, and Singapore. She is a Senior Member of IEEE, a member of ISCA (International Speech Communication Association), has served as a committee member of the ISCA-PECRAC group (Postdoctoral and Early Career Research Advisory Committee), and is a reviewer and organiser for multiple conferences involving AI, text, speech, and multimedia. Dr Williams is also Chair of the ISCA special interest group on Security and Privacy in Speech Communication (SPSC-SIG). She served as Senior Chair for the Interspeech 2023 Special Sessions, and as Area Chair for Interspeech 2023 Paralinguistics. Dr Williams is also a member of the NIST-OSAC subcommittee on speaker recognition for forensic science.
Prizes
- Honourable Mention Award at the TAS Symposium 2023: TAME Pain: Trustworthy AssessMEnt of Pain from speech and audio for the empowerment of patients (2023)