In relation to autonomous weapon systems, how much human control is ‘meaningful’? 

This week we consider what level of human control over killer robots is meaningful. This has been a topic of much discussion at the UN as part of the deliberations over whether or not these systems should be banned. Indeed, Paul Scharre has just written an interesting blog post on this very subject, see here.

Here’s what we think:

It’s great that this question should come up on TTAC21 as it’s something I’m particularly interested in at the moment. From my position, human control isn’t really very ‘meaningful’ and hasn’t been for a long time. If anything, drone pilots don’t so much represent a lack of control as highlight the lack of control, or lack of human agency, that has been present in the military for a very long time. Go back even as far as the Second World War and technology was already starting to take over many of the duties of actually ‘waging war’. Skip on a few years and you get to the nuclear bomb, where one single individual ‘presses the button’, though in reality the decision to use the bomb was made many years before, and by a great many people. At what point is the single decision to press the red button meaningful? I argue not at all, if the weapon exists alongside the common will to use it. If pilot A won’t press the button, the military can simply send pilot B or pilot C. And while we’re at it, we’d better make sure it lands where we tell it to. Better get a machine to do the job…

Mike Ryder, Lancaster University

This question really is an important one. Although I study international law, I think it is perhaps even more important than the legal questions over AWS. The approach Paul Scharre suggests, asking what role we would still want humans to play if we had a technologically perfect autonomous weapon system, is a great one. I think it is the question that will lead the international community towards whatever answer it comes to in relation to meaningful human control.

For me, I’m coming to the conclusion that unless an instance of combat is so intense that military personnel from your own side, or civilians, will die without immediate action and the speed of decision-making that only an AWS can offer, it would always be preferable to have a human overseeing lethal decisions, if not actually making them. Whilst the legal arguments can be made convincingly both for no automation and for full automation of lethal decision-making, I cautiously argue that where technology has the required capabilities, lethal decision-making by an AWS could be lawful. Ethically, however, I would prefer a higher standard that includes humans in the decision-making process. But ‘ethically desirable’ is a higher bar than ‘meaningful’, and this is why I think Scharre has got the jump on the Campaign to Stop Killer Robots: reaching a ‘meaningful’ level of human involvement is a minimum threshold, whereas the ethically desirable can go as high as anybody wants. Of course, this makes it harder to discuss and so may tie up the CCW discussions for longer, although I hope it will be worth it.

For me, ‘meaningful’ comes down to a human deciding that characteristics XYZ make an individual worthy of targeting. In an international armed conflict, that might be wearing the uniform of an adversary. In a non-international armed conflict, it may be that they have acted in such a way as to make themselves an adversary (i.e. directly participating in hostilities). But that human decision can still be pre-determined and later executed by a machine. Temporal and physical distance does not alter the decision that characteristics XYZ turn a potential target into a definitive one. Others will disagree with my conception of ‘meaningful’, and I hope it will generate discussion, but this is also why I favour Scharre’s method of moving forward.

Joshua Hughes, Lancaster University 

2 thoughts on “In relation to autonomous weapon systems, how much human control is ‘meaningful’?”

  1. Josh, I agree with you on the need to determine what warrants a ‘meaningful’ level of human involvement, but to follow your logic: if the ‘meaningful’ involvement of a human is to decide which characteristics make an individual worthy of targeting, then surely those characteristics could just be plugged into a machine and applied far more effectively than any human could manage?

    In my own research I’m coming to the point now where I don’t think the problem is the machine so much as the machine’s role in *codifying a decision* and exposing the problematic nature of modern ethics.

    1. Hi Mike, thanks for your thoughts. I think you’re correct in following the logic through to humans plugging in target identifiers. For me, having machines recognise and identify targets which a human has already pre-determined is not problematic, as a person in an enemy uniform or an enemy tank is always a legitimate target. But when fighting rebels this becomes more difficult, because it is actions that determine whether someone is directly participating in hostilities and therefore targetable. Where those actions are clearly adversarial (like raising a weapon), this is less problematic than recognising someone as an adversary before they engage in an attack. Whilst there may be technological fixes to these issues, the more complex the factors that indicate a legitimate target become, the more preferable it is to have a human involved.

      I agree that the problem is codifying the decision. Having a person determine that XYZ characteristics equate to an adversary, and telling soldiers that these are the criteria they should use to identify enemies, is uncontroversial; militaries provide rules of engagement all the time. But if these same characteristics are programmed into a machine, for some reason we become uneasy. I’m not really sure why that is.
