Haas and Fischer – The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order

This week we begin our discussions of autonomous weapon systems. Following on from the discussions of the Group of Governmental Experts at the UN last November, more talks are taking place in February and April this year. For those not aware, an autonomous weapon system is one that can select and engage targets without human intervention – think a drone with the brain of The Terminator.

First, we are looking at ‘The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order’ by Michael Carl Haas and Sophie-Charlotte Fischer, published in Contemporary Security Policy, 38:2 (2017), 281–306. Feel free to check the article out and let us know what you think in the comments below.

Here’s what we thought:


I enjoyed this article and the ways in which it engages with the future applications of AWS in what we might describe as ‘conventional’ wars, where targeted killings or ‘assassinations’ by drone are likely to become more common.

From my own research perspective, I am particularly interested in the authors’ approach to autonomy and autonomous thinking in machines (see 284 onwards). I agree with the authors that ‘the concept of “autonomy” remains poorly understood’ (285), but suggest that perhaps the academic community has become too caught up in machinic autonomy. If we can’t first understand human autonomy, how can we hope to apply a human framework to our understanding of machines? This question seems to me to be under-represented in academic thinking in this area, and is one I may well have to write a paper on!

Finally, I’d like to briefly mention the question of human vs machinic command and control. I was interested to see that the authors suggest AWS might not become ubiquitous in ‘conventional’ conflicts when we consider the advantages and disadvantages of their use for military commanders (297). To me, there is a question here of at what point machinic intelligence or machine-thinking ‘trumps’ the human. Certainly our technology as it stands today still leaves the human superior in many types of thinking, yet I can’t believe it will be long before computers start to outsmart humans so thoroughly that this no longer remains a question. There is also the question of ‘training cost’. In a drawn-out conflict, what will be easier and cheaper to produce: a robot fighter that comes pre-programmed with its training, or a human soldier who requires an investment of time and resources, and who may never quite take on his or her ‘programming’ to the same level as the machine? Certainly something to think about…

Mike Ryder, Lancaster University


I quite liked this piece. It is common to hear fellow researchers of autonomous weapons say that such systems will change warfare, but then provide no discussion of how this will happen; fortunately, this paper does just that. I particularly liked the idea that the use of autonomous systems for ‘decapitation’ strikes against senior military, political, or terrorist leaders/influencers could reduce not only collateral damage and the number of friendly deaths, but also the overall level of destruction in a conflict. Indeed, I’ve heard a number of people suggest that present-day drones offer a chance at ‘perfect’ distinction, in that they are so precise that the person aimed at is almost always the person who dies, often with little collateral damage. It is usually poor intelligence analysis, resulting in the wrong person being targeted in the first place, that is responsible for the unfortunately high number of civilian deaths in the ‘drone wars’. Use of AI could rectify this. Autonomous weapons could also reduce the need for substantial intelligence analysis if they were one day capable of identifying the combatant status of ordinary fighters, and of identifying specific high-level personalities through facial or iris recognition. If this becomes possible, autonomous weapons could have the strategic impact of a nuclear bomb against enemy forces, without causing much collateral damage.
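As a side note on the facial/iris recognition point, a minimal sketch of how such identity matching generally works may help show where misidentification risk creeps in. Everything below is a hypothetical illustration, not drawn from the paper or any real targeting system – the function names, the 128-dimensional vectors, and the 0.8 threshold are all assumptions. The core idea is simply that an image is reduced to an embedding vector, and a ‘match’ is a similarity score crossing a tunable threshold:

    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two embedding vectors, in [-1, 1].
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def is_match(probe, enrolled, threshold=0.8):
        # Declare a 'match' when similarity crosses the threshold
        # (0.8 is an arbitrary illustrative value). Lower thresholds
        # catch more true targets but also more innocent look-alikes
        # (false positives); higher thresholds do the reverse.
        # The output is never a certainty.
        return cosine_similarity(probe, enrolled) >= threshold

    # Toy usage: random vectors stand in for face/iris embeddings.
    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=128)                      # the named target
    probe = enrolled + rng.normal(scale=0.3, size=128)   # a new, noisy sighting
    print(is_match(probe, enrolled))

The point of the sketch is that ‘identification’ is always a probabilistic threshold decision, so the legal questions of distinction do not disappear with better sensors; they move into the choice of threshold.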

Joshua Hughes, Lancaster University


Let us know what you think in the comments below.

4 thoughts on “Haas and Fischer – The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order”

  1. Apologies for the late reply. Just a few thoughts on this. I very much enjoyed Haas & Fischer’s work; it was well written and very informative. The practice of targeted killings using drones as a counterterrorism measure is well documented. I have never been convinced of the legality of targeted killing policies, both in terms of IHL/IHRL and as a violation of due process, contrary to accepted notions of justice. Targeted killings have often been counterproductive and have resulted in needless civilian losses, especially so-called “signature strikes”. The idealist in me would rather they did not occur, especially when capture is a viable option. However, the realist in me accepts that such policies are here to stay.

    I accept that autonomy in weapons systems is progressing rapidly, and that it is conceivable that human beings can at some stage “teach” these weapons systems how to deploy lethal force in a manner compliant with established legal principles. That being said, I would have concerns about introducing an element of autonomy (however ‘autonomy’ is ultimately defined at the CCW!) into the process of targeted killing. I think that would be to introduce an unstable element into what can be as much a political decision as an operational or strategic one. Many drone strikes have taken place against members of non-state armed groups who do not mark themselves out as combatants by, say, wearing a uniform, and who often mingle with civilian populations in urban centres. It is fair to imagine an autonomous targeted killing in similar circumstances, and in my view the technology just does not yet come up to the mark to ensure proper compliance with the law in terms of distinction, proportionality and precautions. In the event that the targeted individual indicates a desire and willingness to surrender, could the system be programmed to recognise this? If not, are we looking at a denial of quarter? Even if technology does reach a high level of sophistication (and I believe it will in good time), the practice of targeted killing is so controversial that I don’t think there should ever be a lessening of human control or oversight.
    As a side issue, I hope there is some movement at the CCW this year. The CCW has been seized of this issue since 2013, and some progress by the GGE, even on settling the definition issue, would be welcome.


    1. Hi Stephen, thanks for your response. I’ve also been thinking about the use of autonomy in selecting targets as part of a targeted killing process. There are obviously lots of issues in delegating life-and-death decisions on a battlefield, and potentially even more when the decisions are being made after suspected terrorist action but before any strike. Doing so could bring in the same sorts of bias we see in algorithms used to advise judges on sentencing.


  2. I would tend to agree. Bias is a human trait, part of our flawed human nature. I’m not sure humans would ever be able to create an algorithm that is completely unbiased. Who knows…

