Autonomous Weapons and International Humanitarian Law OR Killer Robots are Here. Get Used to it – Harris

Here, we discuss ‘Autonomous Weapons and International Humanitarian Law OR Killer Robots are Here. Get Used to it’ by Shane Harris, Temple International and Comparative Law Journal, 2016, Vol. 30(1), pp. 77-83.

It’s available here.

Essentially, Harris argues two things:

(1) It is inevitable that human beings will build weapons systems capable of killing people on their own, without any human involvement or direction; and (2) It is conceivable that human beings could teach machines to recognize and distinguish when the use of lethal force complies with international humanitarian law.

We all dig into it in our own individual ways, and we have a few different views on this subject. So, hopefully we will start a lively debate.

If you have any comments, please leave them at the bottom.

Without further ado, here’s Mike:


This article is very much a summation of ‘where we are at’ when it comes to autonomous weapon systems. The author presents killer robots as an inevitability (77), and one that we should perhaps embrace, since machines are far more reliable than humans at snap decision-making (83).

However, I fundamentally disagree with the notion that robots could (and indeed should) be taught international law, and so kill only when it is legal to do so. The issue here is one of interpretation: the article seems to ignore the fact that most modern-day enemies do not mark themselves distinctly as combatants, as their unknowability is the primary advantage they can exercise against a vastly superior military threat. The distinction is never so clear-cut.

There is also, in my mind, the issue of reciprocity and the expectations associated with combat. Here, war seems to be defined in strictly Western terms, where there is a law of war as such, agreed upon by both sides. But again, terrorists do not adhere to this structure. With no attributability, there is nothing stopping a terrorist dressed as a civilian from carrying out an atrocity, and no way a robot could interpret that ‘civilian’ as a terrorist within the structures of a strict legal framework. While I do not dispute that robots can theoretically be made more ‘reliable’ than humans, the question for me is what exactly ‘reliable’ means, and whether the law should ever be seen as a computer program.

Mike Ryder, Lancaster University


 

I will start off by saying I always like a good controversial article that goes against established conventions. As is obvious from the title, that is what this article tries to do. However, I do not think it succeeds, and it does not live up to its potential.

My problem with the article is not what he claims, but how he supports it. Setting aside any moral or legal judgement, I agree with the two hypotheses he sets out at the start: (1) it is inevitable that human beings will build weapons systems capable of killing people on their own, without any human involvement or direction; and (2) it is conceivable that human beings could teach machines to recognize and distinguish when the use of lethal force complies with international humanitarian law.

However, he fails to actually argue for these theses. The article is very short (seven pages, a large part of which is taken up by sources), and four of those pages are descriptions of historical programmes. While there are many lessons to be learnt from historical military innovation, the most important lesson from history is that you cannot generalize from the past to predict the future with certainty. This is no support for the strong statement that it is “inevitable” that such systems will be developed. His argument that it is conceivable to develop systems that could comply with IHL is supported only by mentioning two military R&D programmes and by noting that many technologists would answer “let us try.” Again, that is no support for his claim, and it provides no insight into the state of the technology. The small number of sources, and the quality of some of them, do not help either. It is a shame he could not provide solid backing for his statements, because I actually agree with them; this is also what I have been working on myself. However, this article does not provide sufficient proof. And I have not even started on the shoddy argumentation and generalizations in the last section, or the US-centrism.

He is not the only one generalizing about the development of autonomous weapon systems without taking nuances into account; that is unfortunately seen more often in the debate. However, these two hypotheses are the entire starting point of his article, so in this case a solid argument is actually needed. He also ignores the established literature on what is needed for defence innovation. I would recommend the article “The Diffusion of Drone Warfare? Industrial, Organizational and Infrastructural Constraints” by Gilli and Gilli (2016) as a rebuttal of his arguments, with more solid material to support its point of view.

Maaike Verbruggen, Stockholm International Peace Research Institute (SIPRI)


 

Harris’ article makes two arguments: (1) humans will build autonomous weapon systems (AWS); and (2) it is conceivable that AWS could comply with the law of armed conflict. I completely agree with him.

Firstly, I think AWS will be built, whether they are as independent as a Terminator that decides everything apart from mission parameters, or, as Harris suggests, an advanced drone that can initiate individual attacks when authorised. The fact is that fewer people want to join militaries, and we are on the verge of what could be very unpredictable and very dangerous times. Add to that a public far more resistant to seeing troops die on foreign soil, and any country that feels a need to use force extraterritorially does not have many options if it is going to maintain its place in the world. AWS could be the answer to a lot of problems, if the ethical issues of using them in the first place do not outweigh their usefulness.

Second, I think the idea that legal rules cannot be converted into algorithms that machines could understand is ridiculous; Arkin has already shown this is possible in his book Governing Lethal Behavior in Autonomous Robots. The issue really goes beyond the rules. It is, frankly, easy to programme a system with ‘IF civilian, THEN do not shoot’, for example. The difficulty is recognising what a civilian is. An international armed conflict, where the enemy wears an identifying uniform, is clearly less problematic: an AWS that recognises the enemy uniform could fire. A non-international armed conflict between a state and a non-state actor is trickier: how do you identify a militant or terrorist when they dress like civilians? There are suggestions in the literature of nanotechnology sensors identifying metallic footprints, but this does not help an AWS in an area where civilians carry guns for status or personal protection. It seems the only real identifying feature of enemies hiding amongst civilians is hostile intent. A robot detecting emotion is clearly difficult, but this is being worked on. Perhaps waiting for hostile action would be better: if an AWS detects somebody firing at friendly forces, that person has self-identified as an enemy and a legitimate target, and an AWS firing at them would cause no legal issues regarding distinction.
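To make that point concrete, here is a minimal, purely illustrative sketch (not from Harris’ article; the field names such as ‘uniform_match’ are invented) of why the rule itself is the easy part: the legal logic fits in a few lines, while everything difficult is hidden inside the classification step.

```python
from enum import Enum

class Status(Enum):
    COMBATANT = "combatant"
    CIVILIAN = "civilian"
    UNKNOWN = "unknown"

def classify(detection: dict) -> Status:
    """The genuinely hard part: deriving a legal status from sensor data.

    'uniform_match' and 'firing_at_friendly_forces' are hypothetical sensor
    outputs used only for illustration.
    """
    if detection.get("uniform_match"):
        return Status.COMBATANT   # the 'easy' international-armed-conflict case
    if detection.get("firing_at_friendly_forces"):
        return Status.COMBATANT   # hostile action as self-identification
    return Status.UNKNOWN         # never guess: unknown is not targetable

def may_engage(detection: dict) -> bool:
    """The 'IF civilian, THEN do not shoot' rule itself is trivial to encode."""
    return classify(detection) is Status.COMBATANT

# A person in civilian dress who opens fire on friendly forces self-identifies
# as a legitimate target under this (highly simplified) logic.
print(may_engage({"uniform_match": False, "firing_at_friendly_forces": True}))  # True
print(may_engage({"uniform_match": False}))  # False: status unknown, so no engagement
```

The interesting questions all sit inside `classify`, not in the rule that consumes its output.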

Regarding proportionality, Schmitt and Thurnher suggest that this could be turned into an algorithm by re-purposing collateral damage estimation technologies to produce a single value, which could then be weighed against the military advantage calculated from values that commanders assign to enemy objects and installations. In terms of precautions in attack, most of these obligations would, I think, fall on commanders, but perhaps the choice of munition could be delegated to an AWS: for example, if a target is chosen in a street, an AWS could select a smaller munition to avoid including civilians in the possible blast radius.
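Again purely as a sketch of what that suggestion might look like in code (the numbers, field names and munition data below are invented for illustration and are not taken from Schmitt and Thurnher’s work):

```python
from typing import Optional

def proportionate(expected_civilian_harm: float, military_advantage: float) -> bool:
    """An attack is unlawful if expected civilian harm is excessive relative to the
    anticipated military advantage. 'Excessive' has no agreed numeric definition,
    so this direct comparison of two commander-supplied values is only a placeholder."""
    return expected_civilian_harm <= military_advantage

def select_munition(munitions: list, required_radius: float,
                    nearest_civilian_distance: float) -> Optional[dict]:
    """Precautions-in-attack sketch: choose the smallest munition that still covers
    the target but whose blast radius stays short of the nearest known civilian."""
    suitable = [m for m in munitions
                if required_radius <= m["blast_radius"] < nearest_civilian_distance]
    return min(suitable, key=lambda m: m["blast_radius"]) if suitable else None

# Hypothetical inventory: only the 5 m munition both covers the target and
# keeps civilians (20 m away) outside the blast radius.
inventory = [{"name": "small", "blast_radius": 5.0},
             {"name": "large", "blast_radius": 50.0}]
print(select_munition(inventory, required_radius=3.0, nearest_civilian_distance=20.0))
```

The point is not that these thresholds are correct, but that once commanders have assigned the values, the weighing itself is mechanical.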

So, it is certainly not inconceivable that AWS could comply with the law of armed conflict. In fact, I think they probably could. But massive advances in technology are likely to be required before this is possible.

Joshua Hughes, Lancaster University


As a complete novice to the debates on Autonomous Weapons Systems, I enjoyed this article. However, I also completely agree with the criticisms that other group members have made, e.g. that some of the arguments are poorly supported. Nonetheless, as a short article that intends to provoke discussion, I believe it is successful and provides a good starting point for people like myself who are not so familiar with the topic.

Liam Halewood, Liverpool John Moores University


As always, if you’re interested in joining, just check the Contact tab.

4 thoughts on “Autonomous Weapons and International Humanitarian Law OR Killer Robots are Here. Get Used to it – Harris”

  1. I wonder if the idea of programmatising the nature of combat is in a way a reversion to systems analysis and judging ‘success’ based on body counts. Will mission parameters then start to be defined by the tool, rather than the human objective?


    1. Fascinating idea, Mike. But I think every military probably learned from the Vietnam War that body counts do not equal victory. Perhaps the number and frequency of enemy activities might be the barometer. I’m pretty sure the night raids by Joint Special Operations Command in the Iraq War were judged a success by bringing the number of car bombings in Baghdad down from approximately 150 per month during 2004 to approximately two per month in 2008. Perhaps it is reducing the number of unwanted activities that will be the test of success in future conflicts with terrorists or militant groups.


  2. I accept that lessons have (probably) been learnt from Vietnam, but I do remain concerned that technology will define parameters rather than the other way round. If we accept, for example, that robots won’t be able to replace humans completely (due to their specialised nature), then it follows that robots will be better adapted to certain roles than others. If this is the case, then I wonder to what extent there is a risk that the technology itself will dictate the way war is waged.

    Another question: where does this leave us with regard to law versus ethics? Though the use of such robots may be more acceptable in international law due to better compliance, their use may also lead to more enemy deaths overall, especially if their use dictates strategy. Is such an approach therefore ‘ethical’, even if it is technically ‘legal’?

    Final (philosophical) question: will the use of killer robots ‘dehumanise’ us in the eyes of the enemy? How will this impact upon international diplomacy?

