Research project

Relational accountability in using AI for immigration and refugee decision-making: distrust regulation and trustworthiness demonstration

  • Status:
    Not active

Project overview

WSI Pilot Project
Global migration and refugee experiences constitute a humanitarian crisis. Even well-intentioned policymakers sometimes rush to perceive new technologies as a quick fix to a complex problem. The EU, US, and Canada have invested in AI algorithms to automate border control, asylum and visa applications, refugee resettlement, and more (see, for example, Molnar et al., 2018). Unfortunately, interdisciplinary studies on responsible and trustworthy AI in the UK remain limited. The use of AI in immigration and refugee decision-making risks turning an already highly discretionary system into a testing ground for high-risk technological experiments (Roberts and Faith, 2022; Faith et al., 2022; Madianou, 2019a, 2019b). Vulnerable and under-resourced communities, such as refugees, have fewer human rights safeguards and fewer resources with which to defend those rights. Adopting new technologies in a biased manner may only exacerbate existing inequalities.¹

¹ https://policyoptions.irpp.org/fr/magazines/october-2018/governments-use-of-ai-in-immigration-andrefugee-system-needs-oversight/

Staff

Lead researchers

Dr Ai Yu

Associate Professor

Research interests

  • Gender, body and identity in work practices
  • Power, ethics and accountability in organisational settings
  • Ethnography

Collaborating research institutes, centres and groups

Research outputs