Postgraduate research project

Responsibility Modelling in Multiagent Systems

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

Explore the Future of Trustworthy Human-AI Systems. Are you passionate about advancing artificial intelligence with a strong focus on social impact? This PhD project on Responsibility Modelling in Multiagent Systems (MAS) will allow you to make a meaningful contribution to the responsible and ethical development of autonomous systems. The goal is to explore how different forms of responsibility, such as accountability, blameworthiness, and liability, can be defined and implemented within MAS, enabling AI agents to operate in ways that align with social norms, ethical considerations, and regulatory guidelines.
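
To give a flavour of what responsibility reasoning can involve, the sketch below is a toy illustration only, not the frameworks this project will develop: it applies a naive counterfactual test for blameworthiness to an assumed two-vehicle junction scenario, and the scenario, action sets, and function names are all invented for the example.

    # A toy illustration (not this project's framework): a naive counterfactual
    # test for blameworthiness between two autonomous vehicles at a junction.
    # All names, actions, and outcomes here are assumptions made for the example.

    def world(a1, a2):
        # Hypothetical joint outcome: both vehicles going causes a collision.
        return "collision" if a1 == "go" and a2 == "go" else "safe"

    def harmful(outcome):
        return outcome == "collision"

    def blameworthy(agent_actions, chosen, world_fn):
        """An agent is (naively) blameworthy if the chosen joint action leads to
        harm, but one of its own alternative actions, holding the other agent
        fixed, would have avoided that harm."""
        outcome = world_fn(*chosen)
        blame = {}
        for i, acts in enumerate(agent_actions):
            if not harmful(outcome):
                blame[i] = False
                continue
            blame[i] = any(
                not harmful(world_fn(*(chosen[:i] + [alt] + chosen[i + 1:])))
                for alt in acts if alt != chosen[i]
            )
        return blame

    actions = [["go", "yield"], ["go", "yield"]]
    print(blameworthy(actions, ["go", "go"], world))     # {0: True, 1: True}
    print(blameworthy(actions, ["go", "yield"], world))  # {0: False, 1: False}

Real frameworks for accountability, blameworthiness, and liability are considerably richer than this, typically grounding such counterfactual reasoning in formal models of knowledge, obligation, and causation.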

In this project, you will develop cutting-edge frameworks for responsibility reasoning in MAS. From autonomous vehicles to smart energy distribution and collaborative decision-making in emergency response, multiagent systems must be both technically robust and ethically sound. This project aims to build AI agents that not only perform tasks but do so responsibly, taking social, legal, and moral expectations into account.

Key aspects

  • Innovative responsibility modelling: define and implement distinct forms of responsibility to ensure trustworthy interactions within and between autonomous systems.
  • Real-world applications: use scenarios in autonomous driving and urban systems to validate your models, demonstrating their practical impact.
  • Supported by established expertise: join a world-leading research group and benefit from the University of Southampton’s reputation in multiagent systems research.

What you will achieve

Throughout this PhD, you’ll explore how, in a multiagent setting, tasks can be effectively coordinated, responsibility assigned, and accountability handled in complex situations. You’ll work hands-on with the latest AI and MAS technologies to develop proofs of concept and apply them to societal challenges. By the end of this project, your work could directly inform policy and contribute to frameworks for more ethical AI applications. We encourage diversity in our research teams and welcome applicants from all backgrounds.