In relation to autonomous weapon systems, how much human control is ‘meaningful’? 

This week we consider what level of human control over killer robots is meaningful. This has been a topic of great discussion at the UN as part of the deliberations about whether or not these systems should be banned. Indeed, Paul Scharre has just written an interesting blog on this very subject, see here. 

 

Here’s what we think: 

 

It's great that this question should come up on TTAC21 as it's something I'm particularly interested in at the moment. From my position, human control isn't really very 'meaningful' and hasn't been for a long time. If anything, drone pilots don't so much represent a lack of control as highlight the lack of control, or lack of human agency, that has been present in the military for a very long time. Go back even as far as the Second World War and technology was already starting to take over many of the duties of actually 'waging war'. Skip on a few years and you get to the nuclear bomb, where one single individual 'presses the button', though in reality the decision to use the bomb was made many years before and by a great many people. At what point is the single decision to press the red button meaningful? I argue not at all, if the weapon exists alongside the common will to use it. If pilot A won't press the button, the military can simply send pilot B or pilot C. And while we're at it, we'd better make sure it lands where we tell it to. Better get a machine to do the job…

 

Mike Ryder, Lancaster University 

 

This question really is an important one. Even though I study international law, I think it is perhaps more important than the legal questions over AWS. The approach Paul Scharre suggests – asking what role we would still want humans to play if we had a technologically perfect autonomous weapon system – is a great one. I think it is the question which will lead the international community towards whatever answer it comes to in relation to meaningful human control.

For me, I'm coming to the conclusion that unless an instance of combat is of such high intensity that military personnel from your own side, or civilians, are going to die without immediate action and the speed of decision-making that only an AWS has, it would always be preferable to have a human overseeing lethal decisions, if not actually making them. Whilst the legal arguments can be made convincingly both for no automation and for full automation of lethal decision-making, I cautiously argue that where the technology has the required capabilities, lethal decision-making by an AWS could be lawful. Ethically, however, I would prefer a higher standard which includes humans in the decision-making process. But 'ethically desirable' is more than 'meaningful', and this is why I think Scharre has got the jump on the Campaign to Stop Killer Robots: reaching a 'meaningful' level of human involvement is a minimum threshold, whereas 'ethically desirable' can go as high as anybody wants. Of course, this then makes it harder to discuss and so may tie up the CCW discussions for longer – although I hope it will be worth it.

For me, 'meaningful' comes down to a human deciding that characteristics XYZ make an individual worthy of targeting. In an international armed conflict, that might be them wearing the uniform of an adversary. In a non-international armed conflict, it may be that they have acted in such a way as to make themselves an adversary (i.e. directly participating in hostilities). But that human decision can still be pre-determined and later executed by a machine. The temporal and physical distance does not alter the decision that characteristics XYZ mean the potential target becomes a definitive target. Others will disagree with my conception of 'meaningful', and I hope it will generate discussion, but this is also why I favour Scharre's method of moving forward.
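To make this concrete, here is a minimal sketch of the idea that the human decision is fixed in advance and merely executed later by the machine. Everything in it – the attribute names, the criteria, the sensor reports – is a hypothetical illustration, not a real targeting system:

```python
from dataclasses import dataclass

@dataclass
class TargetingCriteria:
    """Characteristics a human commander decided, in advance, make someone targetable."""
    requires_enemy_uniform: bool = False
    requires_direct_participation: bool = False

@dataclass
class ObservedPerson:
    """What the machine's sensors report about a potential target."""
    wears_enemy_uniform: bool
    directly_participating: bool

def meets_criteria(person: ObservedPerson, criteria: TargetingCriteria) -> bool:
    # The machine only checks the characteristics a human chose earlier; the
    # substantive decision about which characteristics matter was already made,
    # however long ago and however far away.
    if criteria.requires_enemy_uniform and person.wears_enemy_uniform:
        return True
    if criteria.requires_direct_participation and person.directly_participating:
        return True
    return False

# International armed conflict: the uniform is the decisive characteristic.
iac = TargetingCriteria(requires_enemy_uniform=True)
print(meets_criteria(ObservedPerson(wears_enemy_uniform=True, directly_participating=False), iac))   # True
print(meets_criteria(ObservedPerson(wears_enemy_uniform=False, directly_participating=False), iac))  # False
```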

Joshua Hughes, Lancaster University 

Shaw – Robot Wars: US Empire and Geopolitics in the Robotic Age

Here's our second article under discussion this month, Robot Wars: US Empire and Geopolitics in the Robotic Age by Ian Shaw. This work follows on from his great book Predator Empire, which is not only a well-argued piece on the technology-based containment of the globe by the US, but also includes magnificent accounts of the history of targeted killing, amongst other things.

 

 Here’s what we thought of his article: 


This reading group has been going for almost nine months now, and in that time we've read a good deal on drone warfare and autonomous weapons. From all of our reading thus far, I'm not sure that this article actually says anything specifically new about the field, or indeed offers any sort of radical insight. As is typical for a piece grounded (forgive the pun) in the Geographical and Earth Sciences, the paper is awash with 'topographies' and 'spaces' – and yet drone warfare has been around for quite some time. And of course, let us not forget that battlefields are constantly shifting spaces, and this is not the first shift in the 'landscape' of warfare, as the invention of the tank, the aeroplane and the submarine has already shown. In this sense, I'm not really sure how much this paper adds to our understanding of drones or drone warfare – nor indeed of empire and geopolitics.

The one thing I did find interesting, however, in a non-TTAC21-specific context, was the notion of robots as 'existential actors' (455), and autonomy as an 'ontological condition'. Again, though I don't think this is anything new per se, I find it interesting that we are now starting to see a shift in the language around drones, as other disciplines slowly get to grips with the impact of drones on our conception of space and the relationship between the human and the machine.

Mike Ryder, Lancaster University 


I thought this article was interesting, and I liked the reconceptualization of various aspects of targeted killing, modern war, and robotic conflict into abstract geopolitical ideas. However, the part I found most interesting was Shaw's use of Deleuze's notion of the dividual, where life is signified by digital information rather than by something truly human. As Shaw himself notes, in signature strikes by remote-controlled drones the targets are dividuals who simply fit the criteria of a terrorist pattern of life, for example. With future autonomous weapons, killing by criteria is likely to be the same, but a lethal decision-making algorithm is likely to determine all targets based on criteria, whether something simple like an individual's membership of an enemy armed force, or working out whether patterns of life qualify an individual as a terrorist. In this sense, not only do the targets become dividuals, as they are reduced to data points picked up by sensors, but those deploying autonomous weapons become dividuals too, as their targeting criteria – and therefore their political and military desires – become algorithmic data as well. It seems that one of the effects of using robotics is not only the de-humanising of potential targets, but also the de-humanising of potential users.

Joshua Hughes, Lancaster University 


 

What do you think?

Should robots be allowed to target people? Based on combatant status?

Here is our second question this month on autonomous weapon systems. For reasons of space I paraphrased it slightly in the title. Here is the full question which went out to all network members:

If the technology within a lethal autonomous weapon system can comply with the law of armed conflict, should they be allowed to target people? Should they be able to target people based on their membership of a group, for example, membership of an enemy military, or a rebel group? 

Here’s what we thought:


This question poses a massive moral and ethical dilemma, and not just for autonomous weapon systems (AWS). Membership of any organisation, including, notably, the State, has always been problematic, but in a 'traditional' military setting we tend to work around this by drawing a clear distinction between those in uniform and those not. Of course this construct is undermined as soon as you introduce the partisan, or the non-uniformed fighter, and as we have seen in recent years, terrorist organisations seek to avoid marking their members altogether. So there is the problem of identification to start with… But then things get trickier when you come to question the terms of membership, or the consent given by any 'member' of an organisation to be a part of said organisation, and quite what that membership entails.

Take citizenship for example: we don't formally 'sign up', but we are assumed to be part of said organisation (i.e. the State), so we would be targets of the 'group' known as the State in the terms set by this question. Take this argument one step further and you could have, say, 'members of the TTAC21 reading group'. On first glance, members of our reading group might be 'legitimate' targets, yet each of our 'members' has a different level of consent and participation within the group. Some, for example, have come along to meetings in person, or have Skyped in for an hour or two. Others have provided comment for the blog, while others are yet to contribute anything. Is each of these members a 'member' to the same degree? How and why can, or indeed should, we compare any one member to another? And let's not forget the question of motivation. Some of us are members because we are actively working in the field, while others have different levels of interest or motivation. Does that then mean that each of us should be tarred with the same brush and classified in the same way when it comes to targeting members of our specific group?

This question is far more complex than it seems! 

Mike Ryder, Lancaster University 

 


This question really gets to the nub of why some people are concerned with autonomous weapon systems. If something is possible, should we do it? At the recent Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the UN in November 2017, Paul Scharre put it something like this: If we could have a perfectly functioning autonomous weapon system in the future, where would we still want humans to make decisions? 

It seems that most people do want human control over lethal decision-making, although some are willing to delegate this to a machine if it were to become a military necessity. However, many are dead set against any such delegation. I think a major aspect of this is trust. Are we willing to trust our lives to machines? Many people are already doing so in prototype and beta-testing self-driving cars, and in doing so are also putting the lives of nearby pedestrians in the 'hands' of these cars. For many, this is unnerving. Yet we put our lives in the hands of fellow drivers every time we go out on the road. We all know this, and we are all comfortable with it. Perhaps we will not be happy to delegate our transport to machines until we can trust them. I think if self-driving cars were shown to function perfectly, people would begin to trust them.

With lethal autonomous systems, the stakes are much higher. A self-driving car may take the wrong turn; an autonomous weapon may take the wrong life. This is obviously a huge issue, and one that people may never become comfortable with. But here we are hypothetically considering systems which would function perfectly, and I still think it will come down to whether people trust a system to make the correct decision. And yet there are still issues around whether a machine could ever comprehend every possible situation it might find itself in. An often-used example is an enemy soldier who has fallen asleep on guard duty. The law of armed conflict would allow combatants to kill this sleeping soldier simply for being a member of the enemy side. Yet it is difficult for us to accept such a killing when there is the possibility of capture – capture is not a legal requirement under the law of armed conflict, but it may be a moral desire. If the programming of autonomous weapons can go beyond the law to take ethical considerations into account as well, trust in the lethal decision-making capability of machines may grow, resulting in society becoming comfortable with machines performing status-based targeting.

Joshua Hughes, Lancaster University 


 

What do you think?

Leveringhaus – Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict

This week we are considering Distance, weapons technology and humanity in armed conflict from the Autonomous Weapons mini-series over on the Humanitarian Law & Policy blog from the International Committee of the Red Cross. In it, the author discusses how distance can affect moral accountability, with particular focus on drones and autonomous weapons. Please take a look yourself, and let us know what you think in the comments below. 

 


This blog offers interesting insight into concepts of 'distance' in warfare. In it, the author distinguishes between geographical distance and psychological distance, and then brings in concepts of causal and temporal distance to show the complex inter-relations between the various categories.

One of the key questions raised in the article is: ‘how can one say that wars are fought as a contest between military powers if killing a large number of members of another State merely requires pushing a button?’ The implication here, to me at least (as I have also suggested in my comments in other blogs), is a need to reimagine or reconstruct the concept of ‘warfare’ in the public consciousness. We seem stuck currently in a position whereby memories of the two world wars linger, and the public conceive of war as being fought on designated battlefields with easily recognisable sides. 

While I agree with much of what the author says, where this article falls down, I think, is in the conclusion that 'the cosmopolitan ideal of a shared humanity is [a] good starting point for a wider ethical debate on distance, technology, and the future of armed conflict.' While I agree with the author's stance in principle, his argument relies on both sides in any given conflict sharing the same ethical framework. As we have seen already with suicide bombings and other acts of terrorism, this is no longer an 'even' battlefield – nor indeed is it a battle fought between two clearly delineated sides. While such disparities exist, I find it hard to believe any sort of balance can be struck.

Mike Ryder, Lancaster University 

 


 

I found this piece, and its discussion of different types of distance, both interesting and illuminating. I've spoken with a number of students recently about distance, and how it affects their feelings regarding their own decision-making and its consequences. I found it really interesting that a large proportion of students were quite accepting of the idea that moral distance makes one feel less responsible for something that happens. But many of the same students also wanted people held responsible for their actions regardless of that moral distance. So this gives us a strange situation in which people who feel no responsibility should nonetheless be held responsible. I don't think this position is unusual. In fact, I think most people around the world would agree with it, despite it being rather paradoxical.

It is clear that from a moral perspective, an accountability gap could be created. But, as ethics and morals are flexible and subjective, one could also argue that there is no moral accountability gap. Fortunately, law is more concrete. We do have legal rules on responsibility. We’ve seen that a number of autonomous vehicle manufacturers are going to take responsibility for their vehicles in self-driving modes. However, it is yet to be seen if autonomous weapon system manufacturers will follow this lead. 

Joshua Hughes, Lancaster University 


 

Let us know what you think

Do previous instances of weapons regulation offer any useful concepts for governing lethal autonomous weapon systems?

Here is our first question on lethal autonomous weapon systems this month. If you have any thoughts about answers, let us know in the comments.


The question for me, at least, is whether or not we can draw parallels between regulation of the human and regulation of the machine. The problem here is that there are no clear and simple ways of holding a machine to account, so the questions of responsibility, and therefore of regulation, become problematic. We can hold a soldier to account for misusing a gun – we cannot do the same for a machine. For one thing, machines do not know, and cannot experience, the concept of human death, so how can we hold them to the same level of accountability when they cannot even understand the framework on which modern human ethics is built?

Mike Ryder, Lancaster University 

 


Recently, I read Steven Pinker's The Better Angels of our Nature, in which he considers why violence has declined over centuries. One part of it looks at weapons of mass destruction. For Pinker, the main reason chemical, biological and nuclear weapons are not used regularly is not international law concerns around high levels of collateral damage, but rather that using them would break a taboo. Pinker suggests that the taboo is so powerful that using weapons of mass destruction is not even in the minds of military planners when considering war plans. Autonomous weapons have the potential to be as impactful as weapons of mass destruction, but without the horrendous collateral damage concerns. Would this create an equal taboo, based on the human unease at delegating lethal decision-making? I think a taboo would be created, but the likely reduction in collateral damage would make it weaker. A taboo is therefore unlikely to restrict any future use of autonomous weapons.

In terms of treaty-based regulation, having been at the meetings of experts on lethal autonomous weapon systems at the UN, I think any meaningful ban on these weapons is unlikely. However, in recent years a number of informal expert manuals have been created on air and missile warfare, naval warfare, and cyber warfare. They have generally been well received, and their recommendations followed. I could imagine a situation in the future where similar ‘road rules’ are developed for autonomous weapons, interpreting the requirements of the law of armed conflict and international human rights law for such systems. This could result in more detailed regulation, as there is less watering down of provisions by states who want to score political points rather than progress talks. We will have to wait and see if this will happen. 

Joshua Hughes, Lancaster University 


 

Let us know what you think

Haas and Fischer – The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order

This week we begin our discussions of autonomous weapon systems. Following on from the discussions of the Group of Governmental Experts at the UN last November, more talks are taking place in February and April this year. For those not aware, an autonomous weapon system is one which can select and engage targets without human intervention – think a drone with the brain of The Terminator.

First, we are looking at ‘The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order’ by Michael Carl Haas and Sophie-Charlotte Fischer from Contemporary Security Policy, 38:2 (2017), 281–306. Feel free to check the article out and let us know what you think in the comments below.

Here’s what we thought:

 


I enjoyed this article and the way in which it seeks to engage with the future applications of AWS in what we might describe as 'conventional' wars, where the use of targeted killings or 'assassinations' by drone is likely to become more common.

From my own research perspective I am particularly interested in the authors' approach to autonomy and autonomous thinking in machines (see 284 onwards). I agree with the authors that 'the concept of "autonomy" remains poorly understood' (285), but suggest that perhaps here the academic community has become too caught up in machinic autonomy. If we can't first understand human autonomy, how can we hope to apply a human framework to our understanding of machines? This question seems to me to be under-represented in academic thinking in this area, and is one I may well have to write a paper on!

Finally, I'd like to briefly mention the question of human vs machinic command and control. I was interested to see that the authors suggest AWS might not become ubiquitous in 'conventional' conflicts once we consider the advantages and disadvantages of their use for military commanders (297). To me, there is a question here of at what point machinic intelligence or machine-thinking 'trumps' the human. Certainly our technology as it stands still puts the human as superior in many types of thinking, yet I can't believe it will be long before computers start to outsmart humans so completely that this no longer remains a question. There is also the question of 'training cost'. In a drawn-out conflict, what will be easier and cheaper to produce: a robot fighter that comes pre-programmed with its training, or the human soldier who requires an investment of time and resources, and who may never quite take on his or her 'programming' to the same level as the machine? Something to think about, certainly…

Mike Ryder, Lancaster University


 

I quite liked this piece. It is common to hear fellow researchers of autonomous weapons say that such systems will change warfare but then provide no discussion of how this will happen; fortunately, this paper does exactly that. I particularly liked the idea that the use of autonomous systems for 'decapitation' strikes against senior military, political, or terrorist leaders and influencers could reduce not only collateral damage and the number of friendly deaths, but also the overall level of destruction a conflict causes. Indeed, I've heard a number of people suggest that present-day drones offer a chance at 'perfect' distinction, in that they are so precise that the person aimed at is almost always the person who dies, often with little collateral damage. It is usually poor intelligence analysis, resulting in the wrong person being targeted in the first place, that is responsible for the unfortunately high number of civilian deaths in the 'drone wars'. Use of AI could rectify this, and autonomous weapons could also reduce the need for substantial intelligence analysis if they were one day capable of identifying the combatant status of ordinary fighters, and of identifying specific high-level personalities through facial or iris recognition. If this becomes possible, autonomous weapons could have the strategic impact of a nuclear bomb against enemy fighters, without causing much collateral damage.

Joshua Hughes, Lancaster University


 

Let us know what you think in the comments below

Autonomy in Future Military and Security Technologies: Implications for Law, Peace, and Conflict

Three members of our group, along with other colleagues, took part in an international workshop at the Universitat de Barcelona in February 2017 titled 'Sense and Scope of Autonomy in Emerging Military and Security Technologies'. Coming out of this, a compendium of research papers has been put together in order to offer a contribution to discussions at the Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the United Nations Office at Geneva, 13th-17th November 2017.

This compendium of articles is due to be published by the Richardson Institute at Lancaster University, UK. Due to technical reasons, the report is provisionally being hosted here in order that delegates at the GGE, and those interested in the subject of lethal autonomous weapon systems, may read the works whilst discussions in Geneva are taking place.

The compendium contains:

Formal presentation of the compendium

Milton Meza-Rivas, Faculty of Law at the University of Barcelona, Spain.

Some Insights on Artificial Intelligence Autonomy in Military Technologies

Prof. Dr Maite Lopez-Sanchez, Coordinator, Interuniversity Master in Artificial Intelligence, University of Barcelona, Spain

Software Tools for the Cognitive Development of Autonomous Robots

Dr. Pablo Jiménez Schlegl, Institute of Robotics & Industrial Informatics, Spanish National Research Council, Polytechnic University of Catalonia, Spain

What is Autonomy in Weapon Systems, and How Do We Analyse it? – An International Law Perspective

Joshua Hughes, University of Lancaster Law School and the Richardson Institute, Lancaster University, UK

Legal Personhood and Autonomous Weapons

Dr Migle Laukyte, Department of Private Law, University Carlos III of Madrid.

A Note on the Sense and Scope of ‘Autonomy’ in Emerging Military Weapon Systems and Some Remarks on the Terminator Dilemma

Maziar Homayounnejad, Dickson Poon School of Law, King’s College London, UK

 

The compendium is available here: Richardson Institute – Autonomy in Future Military and Security Technologies Implications for Law, Peace, and Conflict

A courtesy translation of the introduction which presents the articles is available here (in Spanish): Translation of the compendium presentation text in Spanish

Autonomous Weapons and International Humanitarian Law OR Killer Robots are Here. Get Used to it – Harris

Here, we discuss ‘Autonomous Weapons and International Humanitarian Law OR Killer Robots are Here. Get Used to it’ by Shane Harris, Temple International and Comparative Law Journal, 2016, Vol.30(1), pp.77-83.

It’s available here.

Essentially, Harris argues two things:

(1) It is inevitable that human beings will build weapons systems capable of killing people on their own, without any human involvement or direction; and (2) It is conceivable that human beings could teach machines to recognize and distinguish when the use of lethal force complies with international humanitarian law.

We all dig into it in our own individual ways, and we have a few different views on this subject. So, hopefully we will start a lively debate.

If you have any comments, please leave them at the bottom.

Without further ado, here's Mike:


This article is very much a summation of 'where we are at' when it comes to autonomous weapon systems. The author presents killer robots as an inevitability (77), and one that we should perhaps embrace, as machines are far more reliable than humans at snap decision-making (83).

However, I fundamentally disagree with the notion that robots could (and indeed should) be taught international law, and so kill only when it is legal to do so. The issue here is one of interpretation, and the article seems to fail to take into account the fact that most modern-day enemies do not mark themselves distinctly as combatants, as their unknowability is the primary advantage they can exercise against a vastly superior military threat. The distinction here is never so clear-cut.

There is also, in my mind, the issue of reciprocity and the expectations associated with combat. Here, war seems to be defined in strictly Western terms, where there is a law of war as such, agreed upon by both sides. But again, terrorists don’t adhere to this structure. With no attributability, there is no stopping a terrorist dressed as a civilian carrying out an atrocity, and no way a robot could interpret that ‘civilian’ as a terrorist within the structures of a strict legal framework. While I do not dispute that robots can theoretically be made more ‘reliable’ than humans, the question for me is what exactly does ‘reliable’ mean, and should the law ever be seen as a computer program?

Mike Ryder, Lancaster University


 

I will start off by saying I always like a good controversial article that goes against established conventions. As is obvious from the title alone, that is what this article tries to do. However, I do not think it succeeds, and it does not live up to its potential.

My problem with the article is not what he claims, but how he supports it. Setting aside any moral or legal judgement, I agree with the two hypotheses he sets out at the start: (1) It is inevitable that human beings will build weapons systems capable of killing people on their own, without any human involvement or direction; and (2) It is conceivable that human beings could teach machines to recognize and distinguish when the use of lethal force complies with international humanitarian law.

However, he fails to actually provide arguments for these theses. The article is very short (seven pages, a large part of which is taken up by sources), and four of those pages are descriptions of historical programmes. While there are many lessons to be learnt from historical military innovation, the most important lesson from history is that you cannot generalize from the past to predict the future with certainty. This is not support for the strong statement that it is "inevitable" that such systems will be developed. His argument that it is conceivable to develop systems that could comply with IHL is supported by mentioning two military R&D programmes and by noting that many technologists would answer "let us try." Again, that is no support for his argument, and it provides no insight into the state of the technology. Additionally, the small number of sources, and the quality of some of them, do not help. It is a shame he could not provide solid backing for his statements, because I actually agree with them – and this is also what I have been working on myself. However, this article does not provide sufficient proof. And I have not even started on the shoddy argumentation and generalizations in the last section, or the US-centrism.

He is not the only one generalizing about the development of autonomous weapon systems without taking nuances into account; unfortunately, that is seen all too often in the debate. However, the entire point of departure of his article is these two hypotheses, so in this case a solid argument is actually needed. He ignores the established literature on what is needed for defence innovation. I would recommend the article "The Diffusion of Drone Warfare? Industrial, Organizational and Infrastructural Constraints" by Gilli and Gilli (2016) as a rebuttal of his arguments, but with more solid material to support their point of view.
Maaike Verbruggen, Stockholm International Peace Research Institute (SIPRI)


 

Harris's article makes two arguments: (1) humans will build autonomous weapon systems (AWS); and (2) it is conceivable that AWS could comply with the law of armed conflict. I completely agree with him.

Firstly, I think AWS will be built, whether they are as independent as a Terminator that decides everything apart from mission parameters, or, as Harris suggests, an advanced drone that can initiate individual attacks when authorised. The fact is that fewer people want to join militaries, and we are on the verge of what could be very unpredictable and very dangerous times. Add to that a public far more resistant to seeing troops die on foreign soil, and any country that feels a need to use force extraterritorially doesn't have many options if it is going to maintain its place in the world. AWS could be the answer to a lot of problems, if the ethical issues of using them in the first place do not outweigh their usefulness.

Second, I think the idea that legal rules cannot be converted into algorithms that machines could understand is ridiculous; Arkin has already shown this is possible in his book Governing Lethal Behavior in Autonomous Robots. The issue really goes beyond the rules. It is, frankly, easy to programme a system with 'IF civilian, THEN do not shoot', for example. The difficulty is recognising what a civilian is. An international armed conflict, where the enemy wears an identifying uniform, is clearly less problematic: an AWS that recognises the enemy uniform could fire. A non-international armed conflict between a state and a non-state actor is trickier – how do you identify a militant or terrorist when they dress like civilians? There are suggestions in the literature of nanotechnology sensors identifying metallic footprints, but this doesn't help an AWS in an area where civilians carry guns for status or personal protection. It seems the only real identifying feature of enemies hiding amongst civilians is hostile intent. A robot detecting emotion is clearly difficult – but this is being worked on. Perhaps waiting for hostile action would be better: if an AWS detects somebody firing at friendly forces, that person has self-identified as an enemy and a legitimate target, and an AWS firing at them would cause no legal issues regarding distinction.
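To illustrate the point that the rule is trivial while the recognition is not, here is a minimal sketch with an entirely hypothetical classifier standing in for the hard part; a sensible default is to hold fire whenever the system is unsure:

```python
from enum import Enum
from typing import Tuple

class Status(Enum):
    CIVILIAN = "civilian"
    COMBATANT = "combatant"

def classify(sensor_data) -> Tuple[Status, float]:
    """The genuinely hard problem: deciding who is a civilian.
    A placeholder here – perception of this kind is the open question."""
    raise NotImplementedError

def may_engage(sensor_data, confidence_threshold: float = 0.99) -> bool:
    status, confidence = classify(sensor_data)
    # The legal rule itself is the easy part: IF civilian, THEN do not shoot.
    if status is Status.CIVILIAN:
        return False
    # Doubt is resolved in favour of not engaging.
    if confidence < confidence_threshold:
        return False
    return True
```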

Regarding proportionality, Schmitt and Thurnher suggest that this could be turned into an algorithm by re-purposing collateral damage estimation technologies to give a single value, which could be weighed against a military advantage value calculated by commanders assigning values to enemy objects and installations. In terms of precautions in attack, most of these obligations would, I think, fall on commanders, but perhaps a choice of munitions could be delegated to an AWS – for example, if a target is chosen in a street, an AWS could select a smaller munition to avoid including civilians in the possible blast radius.
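Purely as an illustration of the kind of calculation being described – not the actual Schmitt and Thurnher proposal, and with invented numbers – the weighing and the munition choice might reduce to something like this:

```python
def proportionate(expected_collateral_damage: float, military_advantage: float) -> bool:
    """True if, on these notional values, expected civilian harm is not
    excessive in relation to the anticipated military advantage."""
    return expected_collateral_damage <= military_advantage

def select_munition(target_radius_m: float, munitions: dict) -> str:
    """Pick the munition with the smallest blast radius that still covers the
    target, keeping nearby civilians outside the likely blast area."""
    suitable = {name: r for name, r in munitions.items() if r >= target_radius_m}
    return min(suitable, key=suitable.get)

# Hypothetical values a commander might assign in advance.
print(proportionate(expected_collateral_damage=2.0, military_advantage=8.0))  # True
print(select_munition(5.0, {"small warhead": 10.0, "large warhead": 50.0}))   # 'small warhead'
```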

So, it is certainly not inconceivable that AWS could comply with the law of armed conflict. In fact, I think they probably could do so. But massive advances in technology are likely to be required before this is possible.

Joshua Hughes, Lancaster University


As a complete novice to the debates on autonomous weapons systems, I enjoyed this article. However, I also completely agree with the criticisms that other group members have made about it, e.g. that some of the arguments are poorly supported. Nonetheless, as a short article that intends to provoke discussion, I believe it is successful, and it provides a good starting point for people like myself who are not so familiar with the topic.

Liam Halewood, Liverpool John Moores University


As always, if you’re interested in joining just check the Contact tab.