Research project

Verifiably Safe and Trusted Human-AI Systems (VESTAS)

Project overview

Replacing human decision making with machine decision making creates challenges for stakeholders’ trust in autonomous systems. To develop verifiably safe and trustworthy human-AI autonomous systems, it is key to understand how different stakeholders perceive an autonomous system as trusted and trustworthy, and how the context of application affects their perceptions.

Interdisciplinary (socio-technical) approaches, grounded in social science (trust) and computer science (safety), are often under-used in investigations of AI systems. The VESTAS project aims to provide a contextual (covering different domains of application) and inclusive (engaging different categories of stakeholders) view to develop this interdisciplinary research agenda.

VESTAS first focuses on identifying and engaging with different categories of stakeholders, and on identifying safety-trust challenges. Second, the awareness levels of potential solutions among these stakeholder categories are analysed. Third, discursive techniques are identified to develop and improve general awareness of potential solution concepts.

Based on the outputs of these phases, VESTAS intends to provide a roadmap of the challenges and technical requirements to be addressed in the design and development of verifiably safe and trusted human-AI autonomous systems. The roadmap includes guidelines that different stakeholder groups can use.

Staff

Lead researchers

Dr Asieh Salehi Fathabadi BSc, PhD

Senior Research Fellow

Research interests

  • Formal Methods
  • Autonomous Systems
  • Responsible AI (Socio-technical)

Dr Vahid Yazdanpanah

Lecturer

Research interests

  • Multiagent (AI) Systems
  • Agent-Based Computing 
  • Responsibility of AI Systems

Professor Pauline Leonard BA Sociology, PGCE, MA(Ed), PhD, FAcSS, FRSA

Associate Dean Research & Enterprise
