This week we begin our discussions of autonomous weapon systems. Following on from the discussions of the Group of Governmental Experts at the UN last November, more talks are taking place in February and April this year. For those not aware, an autonomous weapon system is one that can select and engage targets without human intervention – think a drone with the brain of The Terminator.
First, we are looking at ‘The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order’ by Michael Carl Haas and Sophie-Charlotte Fischer from Contemporary Security Policy, 38:2 (2017), 281–306. Feel free to check the article out and let us know what you think in the comments below.
Here’s what we thought:
I enjoyed this article and the ways in which it engages with the future applications of AWS in what we might describe as ‘conventional’ wars, where targeted killings or ‘assassinations’ by drone are likely to become more common.
From my own research perspective I am particularly interested in the authors’ approach to autonomy and autonomous thinking in machines (see 284 onwards). I agree with the authors that ‘the concept of “autonomy” remains poorly understood’ (285), but suggest that perhaps here the academic community has become too caught up in machinic autonomy. If we can’t first understand human autonomy, how can we hope to apply a human framework to our understanding of machines? This question, to me, seems to be one that has been under-represented in academic thinking in this area, and is one I may well have to write a paper on!
Finally, I’d like to briefly mention the question of human vs machinic command and control. I was interested to see that the authors suggest AWS might not become ubiquitous in ‘conventional’ conflicts when we consider the advantages and disadvantages of their use for military commanders (297). To me, there is a question here of at what point machinic intelligence or machine-thinking ‘trumps’ the human. Certainly our technology as it stands still puts the human as superior in many types of thinking, yet I can’t believe it will be long before computers outsmart humans so thoroughly that this even remains a question. There is also the question of ‘training cost’. In a drawn-out conflict, which will be easier and cheaper to produce: a robot fighter that comes pre-programmed with its training, or the human soldier, who requires an investment of time and resources and may never quite take on his or her ‘programming’ to the same level as the machine? Something to think about certainly…
Mike Ryder, Lancaster University
I quite liked this piece, as it is common to hear fellow researchers of autonomous weapons say that such systems will change warfare, but then provide no discussion of how this will happen. Fortunately, this paper does just that. I particularly liked the idea that use of autonomous systems for ‘decapitation’ strikes against senior military, political, or terrorist leaders/influencers could reduce not only collateral damage and the number of friendly deaths, but also the overall level of destruction a conflict could cause. Indeed, I’ve heard a number of people suggest that present-day drones offer a chance at ‘perfect’ distinction, in that they are so precise that the person aimed at is almost always the person who dies, often with little collateral damage. It is usually poor intelligence analysis, resulting in the wrong person being targeted in the first place, that is responsible for the unfortunately high number of civilian deaths in the ‘drone wars’. Use of AI could rectify this; moreover, autonomous weapons could reduce the need for substantial intelligence analysis if they were one day capable of identifying the combatant status of ordinary fighters, and of identifying specific high-level personalities through facial or iris recognition. If this becomes possible, autonomous weapons could have the strategic impact of a nuclear bomb against enemy fighters, without causing much collateral damage.
Joshua Hughes, Lancaster University
UPDATE: added 18th March 2019, written earlier
This article presents predictions on the impact of autonomous weapons on the future of conflict. Building on a ‘functional view’ of autonomy that distinguishes degrees of autonomy across different functional areas, such as ‘health management’, ‘battlefield intelligence’ and ‘the use of force’, the authors discuss the issues and incentives of applying different degrees to different functions. They also detail the US’s ongoing drone campaigns before extrapolating the trends seen within them into a future of greater weapon autonomy. First, they foresee an increased focus on ‘leadership targeting’, believing that ‘autonomous weapons would be a preferred means of executing counter-leadership strikes, including targeted killings.’ Secondly, they propose such tactics as a necessary response to the resurgence of ‘hybrid warfare’, with ‘[a]ttacking leadership targets in-theatre…be[ing] perceived as a viable and effective alternative to an expansion of the conflict into the heartland of an aggressive state opponent’. The authors conclude with their belief that ‘advanced Western military forces’ “command philosophies” will militate against the employment of autonomous weapons, which require surrendering human control, in some types of targeted killing scenarios.
I found the article to have a rather unexpected utopian takeaway. Where a previous author proposed that a shift to swarm warfare would make ‘mass once again…a decisive factor on the battlefield’, this paper predicts the development of a more scalpel-like approach of targeted leadership killings. The thought of generals and politicians being made immediately responsible for their military adventures, rather than however many other citizens (and auxiliaries) they can place between them and their enemies, seems like a rather egalitarian development of statecraft. It reminded me, of all things, of the scene in Fahrenheit 9/11 in which the director asks pro-war congressmen to enlist their own children in the Army and is met with refusal. It’s easier to command others to fight and die on your and your government’s behalf, but the advent of the nuclear age presented the first time in which the generals had just as much ‘skin in the game’ as everyone else, and nukes remain unused. Perhaps this future of leadership targeting by tiny drones can achieve the same result, but without taking the rest of us along for the apocalyptic ride. The risk of a small quadcopter loaded with explosives flying through one’s office window seems like it would be a strong incentive for peacemaking, a potentially welcome by-product of the reduction of the ‘tyranny of distance’ (or, rather, the obviation of insulation) that the earlier author had discussed.
Ben Goldsworthy, Lancaster University
Let us know what you think in the comments below.