29 November 2021
Juljan Krause
They’re a far cry from science-fiction caricatures, but these systems are on the rise and present fresh challenges for policymakers.
In James Cameron’s legendary 1991 sci-fi action sequel, the Terminator returns to help destroy Skynet before it comes online: a self-aware network that would command an army of fully autonomous killer machines bent on eradicating humankind from the face of the earth. Terminator 2 is a classic example of a popular narrative that equates “autonomy” in defence systems with consciousness and, above all, bad intent.
The reality of building trusted autonomous systems for defence and security is very different, as I discuss in a new report for the UKRI TAS-Hub that provides an overview of the key issues and challenges. This research forms part of the £33m Trustworthy Autonomous Systems Programme, which aims to help develop socially beneficial autonomous systems that are both trustworthy in principle and trusted in practice by individuals, society, and government. While ethical concerns about autonomous killer robots dominate much of the public discourse, fully autonomous, independently operating systems of this kind are not on the horizon. Nor are military strategists and service personnel keen on them, since such systems would sharply curtail human decision-makers’ ability to intervene. Merciless robots that turn on their inventors are, thankfully, still the stuff of science fiction.
In practice, “autonomy” in the defence context means coupling human decision-making with machine intelligence. The key question for defence and security policymakers is just how much autonomy and independence to grant self-learning systems. This question carries many important policy implications; some of the most pressing are discussed in my review.
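To make that policy dial concrete, here is a minimal Python sketch of the spectrum of delegation, from full human control to supervised machine action. It is purely illustrative: the `AutonomyLevel` labels, the `Detection` record, and the `decide_engagement` gate are hypothetical names invented for this post, not drawn from any real defence system. The point is simply that what policymakers configure is the human’s role in the loop, not the machine’s intelligence.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class AutonomyLevel(Enum):
    """An illustrative spectrum of delegation (labels are hypothetical)."""
    HUMAN_OPERATED = auto()     # the machine only senses; a human does the rest
    HUMAN_IN_THE_LOOP = auto()  # the machine proposes; a human approves each action
    HUMAN_ON_THE_LOOP = auto()  # the machine acts; a supervising human can veto

@dataclass
class Detection:
    label: str         # what the sensor pipeline believes it has found
    confidence: float  # model confidence, in [0, 1]

def decide_engagement(detection: Detection,
                      level: AutonomyLevel,
                      operator_approves: Callable[[Detection], bool],
                      operator_vetoes: Callable[[Detection], bool]) -> bool:
    """Return True only if the configured policy permits acting on a detection."""
    if level is AutonomyLevel.HUMAN_OPERATED:
        # The machine never initiates action on its own.
        return False
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        # A machine recommendation is necessary but never sufficient.
        return detection.confidence >= 0.9 and operator_approves(detection)
    # HUMAN_ON_THE_LOOP: the machine may act unless the supervisor vetoes.
    return detection.confidence >= 0.99 and not operator_vetoes(detection)

# However confident the model is, the human decision remains binding here:
sighting = Detection(label="vehicle", confidence=0.97)
print(decide_engagement(sighting, AutonomyLevel.HUMAN_IN_THE_LOOP,
                        operator_approves=lambda d: False,
                        operator_vetoes=lambda d: False))   # -> False
```

The confidence thresholds are arbitrary placeholders; the design choice that matters is that moving along the spectrum changes who holds the default decision, the human or the machine.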
On a general level, autonomous systems in defence (or, more accurately, systems with autonomous capabilities) present challenges to the international community. Should there be a new treaty, or amendments to international humanitarian law, to rein in the spread of novel systems that possess some degree of autonomy and are used not only for reconnaissance but potentially also to select and cue human targets? Many NGOs and civil society actors believe that a new form of international consensus is indeed required to rule out scenarios in which machines make life-or-death decisions. However, western powers such as the US and the UK maintain that decisions of this kind will always be for human operators to make and will not be delegated to machines. And because human decision-making in this domain is already fully covered by existing regulations and legislation, many countries see no need for a new treaty.
Over the coming years, moral and ethical positions in this space are likely to evolve against the backdrop of a rapidly changing geopolitical landscape. What if adversaries manage to build systems with autonomous capabilities that are at odds with a rules-based international order? While public opinion in western societies is largely in favour of banning lethal autonomous systems, the empirical picture becomes more complicated once respondents consider hypothetical scenarios in which domestic forces come under attack from an adversary’s autonomous arsenal. Policy questions around the prudent use of these systems and their non-proliferation are likely to become even more urgent in the years ahead.
And of course, there are plenty of policy concerns about the practicalities of implementing such systems. The Ministry of Defence in the UK and the Department of Defense in the US are putting considerable resources into engineering novel forms of human-machine teaming. What kind of education and training will service personnel need to make future missions a success? Relying on new, partially autonomous systems potentially puts the lives of service personnel at risk, so building trust among those personnel is of paramount importance. There are also concerns about safety, national security, and intellectual property rights when components of these new systems are developed by non-domestic industries. They may not replace human decision-making, and they may not be fully autonomous, but defence applications of systems with autonomous capabilities nevertheless raise a plethora of new and complicated policy questions.