Postgraduate research project

Interpretable Models: Bringing Transparency to Deep Learning Decision-Making

Funding
Fully funded (UK only)
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

To trust Deep Learning models in sensitive applications such as autonomous vehicles or healthcare, we need to better understand which information they rely on when making decisions. In this project, we set out to create novel tools that address this crucial step towards AI Safety.

It is widely acknowledged that Deep Learning (DL) can be a very successful tool. It is also widely acknowledged that DL models can be flawed and biased in ways that are not straightforward to detect. It is therefore important to build methods that help humans detect cases where models are likely to fail when deployed in real-world applications. The community is increasingly aware that existing methods for interpreting the decisions made by DL models (e.g. GradCAM, Integrated Gradients) are incorrect. This project sets out to redefine information importance and to address this fundamental limitation of existing interpretation methods. This has the potential to be a step change in our ability to understand and trust learning machines.
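
To give a concrete flavour of the attribution methods the project will scrutinise, the short PyTorch sketch below approximates Integrated Gradients by averaging gradients along a straight path from a baseline to the input. It is an illustrative sketch only: the toy model, tensor shapes, and step count are assumptions for the example, not part of the project.

import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    # Attribution_i ~= (x_i - baseline_i) * average over the path of
    # d model(point)[target_class] / d point_i  (simple Riemann approximation).
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path.requires_grad_(True)
    # Summing the target-class scores lets one backward pass give per-point gradients.
    scores = model(path)[:, target_class].sum()
    grads, = torch.autograd.grad(scores, path)
    return (x - baseline) * grads.mean(dim=0)

# Illustrative usage on a hypothetical toy classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
x = torch.rand(3, 8, 8)
attributions = integrated_gradients(model, x, baseline=torch.zeros_like(x), target_class=0)

The resulting attribution map assigns an importance score to every input feature; methods of this kind are exactly what the project aims to re-examine and improve.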
 
Why join our team? 
Innovation: We are dedicated to doing good science first and foremost. You will conduct systematic and rigorous research with a focus on creating new insights.
Development: We will help you develop a wide range of skills and publish in the top conferences. Based on your interests, we can also provide opportunities for collaborative projects, external placements, or internships.
Ownership: Integrate your own research vision into the project and make a real impact.
Community: We value cooperation and respect for the individual. Engage in group activities in an intellectually stimulating environment.