Project overview
As Large Language Models (LLMs) become increasingly sophisticated, emerging use cases threaten professions that have so far escaped automation, including psychotherapy, social services, and legal counsel. Compounding these concerns, the benefits and risks of this potential change are poorly understood. This project will examine the legal soundness and social acceptability of embedding LLMs in the workflow of legal professionals. We will do so through a series of experiments that test the trustworthiness of LLM-generated legal advice, with the aim of improving our understanding of how to make generative AI more responsible.
Staff
Lead researchers
Other researchers
Research outputs
Tina Seabrooke, Eike Schneiders, Liz Dowthwaite, Joshua Krook, Natalie Leesakul, Jeremie Clos, Horia Maior & Joel Fischer (2024). Type: conference.
Joshua Krook, Jennifer Williams, Tina Seabrooke, Eike Schneiders, Jan Blockx, Stuart E Middleton & Sarvapali Ramchurn (2023). DOI: 10.5258/SOTON/P1126. Type: other.