In relation to autonomous weapon systems, how much human control is ‘meaningful’? 

This week we consider what level of human control over killer robots is meaningful. This has been a topic of great discussion at the UN as part of the deliberations about whether or not these systems should be banned. Indeed, Paul Scharre has just written an interesting blog post on this very subject, see here.


Here’s what we think: 


It’s great that this question should come up on TTAC21 as it’s something I’m particularly interested in at the moment. From my position, human control isn’t really very ‘meaningful’ and hasn’t been for a long time. If anything, drone pilots don’t so much represent a lack of control as highlight for us the lack of control, or lack of human agency, that has been present in the military for a very long time. Go back even as far as the Second World War and technology was already starting to take over many of the duties of actually ‘waging war’. Skip on a few years and you get to the nuclear bomb, wherein one single individual ‘presses the button’, though in reality the decision to use the bomb was made many years before, and by a great many people. At what point is the single decision to press the red button meaningful? I argue not at all, if the weapon exists alongside the common will to use it. If pilot A won’t press the button, the military can simply send pilot B or pilot C. And while we’re at it, we had better make sure it lands where we tell it to. Better get a machine to do the job…


Mike Ryder, Lancaster University 


This question really is an important one. Although I study international law, I think it is perhaps more important than the legal questions over AWS. The approach Paul Scharre suggests – asking what role we would still want humans to play if we had a technologically perfect autonomous weapon system – is a great one. I think it is the question which will lead the international community towards whatever answer it comes to in relation to meaningful human control.

For me, I’m coming to the conclusion that unless an instance of combat is of such high intensity that military personnel from your own side, or civilians, are going to die without immediate action and the speed of decision-making that only an AWS will have, it would always be preferable to have a human overseeing lethal decisions, if not actually making them. Whilst the legal arguments can be made convincingly for both no automation and full automation of lethal decision-making, I cautiously argue that where the technology has the required capabilities, lethal decision-making by an AWS could be lawful. Ethically, however, I would prefer a higher standard which would include humans in the decision-making process. But ethically desirable is more than ‘meaningful’, and this is why I think Scharre has gotten the jump on the Campaign to Stop Killer Robots; reaching a ‘meaningful’ level of human involvement is a minimum threshold, whereas the ethically desirable can go as high as anybody wants. Of course, this then makes it harder to discuss and so may tie up the CCW discussions for longer – although I hope it will be worth it.

For me, ‘meaningful’ comes down to a human deciding that characteristics XYZ make an individual worthy of targeting. In an international armed conflict, that might be them wearing the uniform of an adversary. In a non-international armed conflict, it may be that they have acted in such a way as to make them an adversary (i.e. directly participating in hostilities). But that human decision can still be pre-determined and later executed by a machine. The temporal and physical distance does not alter the decision that XYZ characteristics mean that the potential target becomes a definitive target. Others will disagree with my conception of ‘meaningful’, and I hope it will generate discussion, but this is also why I favour Scharre’s method of moving forward.
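To make that concrete, here is a purely illustrative toy sketch in Python (the names and attributes are hypothetical and it is not a description of any real system) of how a human decision about ‘XYZ characteristics’ could be fixed in advance and only later executed by a machine:

```python
# Purely illustrative toy, not any real targeting system: the human decision
# that 'characteristics XYZ' make someone targetable is fixed in advance,
# and a machine later applies it to whatever its sensors report.
from dataclasses import dataclass


@dataclass
class Contact:
    """A detected individual, reduced to attributes a sensor might report."""
    wearing_enemy_uniform: bool    # relevant in an international armed conflict
    directly_participating: bool   # relevant in a non-international armed conflict


def meets_criteria(contact: Contact) -> bool:
    """The pre-determined human decision, encoded once, executed later."""
    return contact.wearing_enemy_uniform or contact.directly_participating


# The temporal and physical distance changes nothing about the decision itself.
contacts = [Contact(True, False), Contact(False, False), Contact(False, True)]
print([meets_criteria(c) for c in contacts])  # [True, False, True]
```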

Joshua Hughes, Lancaster University 

Shaw – Robot Wars: US Empire and Geopolitics in the Robotic Age

Here’s our second article under discussion this month, Robot Wars: US Empire and Geopolitics in the Robotic Age by Ian Shaw. This work follows on from his great book Predator Empire, which is not only a well-argued piece on the technology-based containment of the globe by the US, but also includes magnificent accounts of the history of targeted killing, amongst other things.


Here’s what we thought of his article:


This reading group has been going for almost nine months now, and in that time it’s fair to say we’ve read a good deal on drone warfare and autonomous weapons. From all of our reading thus far, I’m not sure that this article actually says anything specifically new about the field, or indeed offers any sort of radical insight. As is typical for a piece grounded (forgive the pun) in the Geographical and Earth Sciences, the paper is awash with ‘topographies’ and ‘spaces’ – and yet drone warfare has been around for quite some time. And of course, let us not forget that battlefields are constantly shifting spaces, and that this is not the first shift in the ‘landscape’ of warfare, as the inventions of the tank, the aeroplane and the submarine have already gone to show. In this sense then, I’m not really sure how much this paper adds to our understanding of drones or drone warfare – nor indeed of empire and geopolitics.

The one thing I did find interesting, however, in a non-TTAC21-specific context, was the notion of robots as ‘existential actors’ (455), and autonomy then as an ‘ontological condition’. Again, though I don’t think this is anything new per se, I find it interesting that we are now starting to see a shift in the language around drones, as other disciplines slowly get to grips with the impact of drones on our conception of space and the relationship between the human and the machine.

Mike Ryder, Lancaster University


I thought this article was interesting, and I liked the reconceptualization of various aspects of targeted killing, modern war, and robotic conflict into abstract geopolitical ideas. However, the part I found most interesting was Shaw’s use of Deleuze’s notion of the dividual, where life is signified by digital information rather than something truly human. As Shaw himself notes, in signature strikes by remote-controlled drones the targets are dividuals who simply fit the criteria of, for example, a terrorist pattern of life. With future autonomous weapons, killing by criteria is likely to be the same, but a lethal decision-making algorithm is likely to determine all targets based on criteria, whether something simple like an individual’s membership of an enemy armed force, or working out whether patterns of life qualify an individual as a terrorist. In this sense, not only do the targets become dividuals, as they are reduced to data points picked up by sensors, but those deploying autonomous weapons become dividuals too, as their targeting criteria, and therefore their political and military desires, become algorithmic data as well. It seems that one of the effects of using robotics is not only the de-humanising of potential targets, but also the de-humanising of potential users.

Joshua Hughes, Lancaster University


UPDATE: added 11th March 2019, written earlier.

I second Mike’s criticisms—the author uses a tremendous amount of verbiage to ultimately say very little. Buried beneath all the talk of human-machine teaming ‘actualiz[ing] a set of virtual potentials and polic[ing] the ontopolitical composition of worlds’ and ‘aleatory circulations of the warscape’ are three predictions about a potential future world order. First, the author suggests that swarms of autonomous military drones will make ‘mass once again…a decisive factor on the battlefield’. Secondly, they describe the co-option of the US’ global network of military bases into a planetary robotic military presence called ‘Roboworld’, which aims ‘to eradicate the tyranny of distance by contracting the surfaces of the planet under the watchful eyes of US robots’. Finally, the employment of AWS will fundamentally change the nature of the battle space as, ‘[r]ather than being directed to targets deemed a priori dangerous by humans, robots will be (co-)producers of state security and non-state terror’, ushering in an ‘age of deterritorialized, agile, and intelligent machines’.

Josh has already mentioned the idea of people being targeted on a dividual basis, but I found the above mention of ‘deterritorialisation’, along with the phrase ‘temporary autonomous zone of slaughter’, particularly interesting, owing to the latter phrase’s anarchist pedigree. The author’s comments about the ‘ontological condition’ of robots notwithstanding, AWSes are unlikely to be considered citizens of their respective nations any time soon. As they fight one another at those nations’ behest, but without any personal stake in the outcomes, we see a form of conflict that is perhaps not as fundamentally new as it is often made out to be, but rather a modern reincarnation of the mercenary armies of the past or, even, of some sort of gladiatorial combat.

Ben Goldsworthy, Lancaster University


What do you think?

Should robots be allowed to target people? Based on combatant status?

Here is our second question this month on autonomous weapon systems. For reasons of space, I paraphrased it slightly in the title. Here is the full question which went out to all network members:

If the technology within a lethal autonomous weapon system can comply with the law of armed conflict, should they be allowed to target people? Should they be able to target people based on their membership of a group, for example, membership of an enemy military, or a rebel group? 

Here’s what we thought:


This question poses a massive moral and ethical dilemma, and not just for autonomous weapon systems (AWS). Membership of any organisation, including, notably, the State, has always been problematic, but in a ‘traditional’ military setting we tend to work around this by drawing a clear distinction between those in uniform and those not. Of course this construct is undermined as soon as you introduce the partisan, or the non-uniformed fighter, and as we have seen in recent years, terrorist organisations seek to avoid marking their members at all. So there is the problem of identification to start with… But then things get trickier when you come to question the terms of membership, or the consent given by any ‘member’ of an organisation to be a part of said organisation, and quite what that membership entails.

Take citizenship, for example: we don’t formally ‘sign up’, but we are assumed to be part of said organisation (i.e. the State), and so would be targets of the ‘group’ known as the State in the terms set by this question. Take this argument one step further and you could have, say, ‘members of the TTAC21 reading group’. At first glance, members of our reading group might be ‘legitimate’ targets; however, each of our ‘members’ has a different level of consent and participation within the group. Some, for example, have come along to meetings in person, or have Skyped in for an hour or two. Others have provided comment for the blog, while others are yet to contribute anything. Is each of these members a ‘member’ to the same degree? How and why can, or indeed should, we compare any one member to another? And let’s not forget the question of motivation. Some of us are members because we are actively working in the field, while some of us have different levels of interest or motivation. Does that then mean that each of us should be tarred with the same brush and classified in the same way when it comes to targeting members of our specific group?

This question is far more complex than it seems!

Mike Ryder, Lancaster University



This question really gets to the nub of why some people are concerned with autonomous weapon systems. If something is possible, should we do it? At the recent Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the UN in November 2017, Paul Scharre put it something like this: If we could have a perfectly functioning autonomous weapon system in the future, where would we still want humans to make decisions?

It seems that most people do want human control over lethal decision-making, although some are willing to delegate this to a machine if it were to become a military necessity. However, many are dead-set against any such delegation. I think a major aspect of this is trust. Are we willing to trust our lives to machines? Many people are already doing so in prototype and beta-testing self-driving cars, and in doing so are also putting the lives of nearby pedestrians in the ‘hands’ of those cars. For many, this is unnerving. Yet we put our lives in the hands of fellow drivers every time we go out on the road. We all know this, yet we are all comfortable with this fact. Perhaps we will not be happy to delegate our transport to machines until we can trust them. I think if self-driving cars were shown to be functioning perfectly, people would begin to trust them.

With lethal autonomous systems, the stakes are much higher. A self-driving car may take the wrong turn; an autonomous weapon may take the wrong life. This is obviously a huge issue, and one that people may never become comfortable with. But here we are hypothetically considering systems which would function perfectly. I still think it will come down to whether people will trust a system to make the correct decision. And yet, there are still issues around whether a machine could ever comprehend every possible situation it might find itself in. An often-used example is an enemy soldier who has fallen asleep on guard duty. The law of armed conflict would allow combatants to kill this sleeping soldier simply for being a member of the enemy side. Yet it is difficult for us to accept this when there is the possibility of capture; capture would not be a legal requirement under the law of armed conflict, but it may be a moral desire. If the programming of autonomous weapons can go beyond the law to take ethical considerations into account as well, trust in the lethal decision-making capability of machines may grow, resulting in society becoming comfortable with machines performing status-based targeting.

Joshua Hughes, Lancaster University



UPDATE: This entry added 04/03/2019

As Mike has said, the issue here boils down to how we would define ‘membership’, and the way it would be determined in the field. An autonomous weapon system would require some form of machine learning in order to distinguish between valid and non-valid targets based on the evidence it can gather in each case. Machine learning can be either supervised, where categories are provided and the algorithm attempts to determine which one best covers a given item, or unsupervised, where the algorithm groups items based on whichever characteristics it finds best distinguish them, and the categories emerge dynamically from this process of classification. Both methods are fraught with peril when applied to social media advertising, let alone the application of lethal force.
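As a minimal sketch of the two regimes, assuming scikit-learn is available (the feature values, labels, and the choice of k-nearest-neighbours and k-means are invented purely for illustration, not a claim about how any real system works):

```python
# Toy contrast between supervised and unsupervised learning; the numbers
# are meaningless stand-ins for whatever features a real system would use.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])  # observed items

# Supervised: the categories are provided, and the algorithm decides which
# one best covers a new item.
y = np.array([0, 0, 1, 1])  # labels supplied by a human
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[0.85, 0.75]]))  # -> [1], the nearest human-given category

# Unsupervised: no labels are given; groupings emerge from whatever
# characteristics happen to distinguish the items.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)  # e.g. [0 0 1 1] (or [1 1 0 0]); the categories carry no meaning of their own
```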

Take a supervised training regime, where the AWS would be provided with a list of criteria that would authorise the use of force, such as a list of proscribed organisations and their uniforms to compare against, or a dataset of enemy combatants’ faces to perform facial recognition on. The application of lethal force would only be as good as the intel, and the experience of US no-fly lists shows just how much faith one should have in that. If the model is insufficiently precise (e.g. ‘apply lethal force if target is holding a weapon’), then all of a sudden a child with a toy gun is treated as an attacking Jihadi, much to the consternation of its former parents. In an effort to avoid these false positives, one may be tempted to go too far the other way, handicapping the rapid analytical and decision-making powers that are often cited as an advantage of AWSes with over-restrictive classifiers. If a potential threat emerges that does not fit into any preordained model, such as a non-uniformed combatant, it will be ignored—a false negative.
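A deliberately trivial sketch of that trade-off; the scenes, ‘threat scores’ and thresholds are invented for illustration and stand in for whatever a real classifier and rules of engagement would actually produce:

```python
# Invented toy data: each scene has a classifier-style 'threat score' and a
# ground truth we would only know with perfect intelligence.
scenes = [
    {"desc": "combatant with rifle",  "score": 0.90, "hostile": True},
    {"desc": "child with toy gun",    "score": 0.70, "hostile": False},
    {"desc": "non-uniformed fighter", "score": 0.40, "hostile": True},
]


def errors(threshold: float) -> tuple[int, int]:
    """Count false positives and false negatives for a given engagement threshold."""
    fp = sum(1 for s in scenes if s["score"] >= threshold and not s["hostile"])
    fn = sum(1 for s in scenes if s["score"] < threshold and s["hostile"])
    return fp, fn


# An imprecise model engages the child with the toy gun (a false positive)...
print(errors(0.60))  # (1, 1)
# ...while an over-restrictive one also ignores the non-uniformed combatant (false negatives).
print(errors(0.95))  # (0, 2)
```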

An unsupervised training regime would be just as dangerous, if not more so. As Shaw points out in his discussion of ‘dividuals’, this would represent a sea change in the legal norms governing force. Not only would decisions be made based solely on the aggregate behaviour of a target, without oversight or appreciation of wider context, but we would be offloading the moral responsibility to display the reasoning behind such actions onto opaque algorithms. Unsupervised training is also prone to misclassification—consider the work of Samim Winiger—and intentional manipulation—as in the case of the Microsoft AI which was reduced to a Holocaust-denying Trump supporter within a day of being released onto Twitter. Perhaps in the future we can all look forward to a new Prevent strategy aimed at countering the growing threat of AI radicalisation.

Ben Goldsworthy, Lancaster University


What do you think?

Leveringhaus – Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict

This week we are considering Distance, weapons technology and humanity in armed conflict from the Autonomous Weapons mini-series over on the Humanitarian Law & Policy blog from the International Committee of the Red Cross. In it, the author discusses how distance can affect moral accountability, with particular focus on drones and autonomous weapons. Please take a look yourself, and let us know what you think in the comments below.



This blog post offers an interesting insight into concepts of ‘distance’ in warfare. In it, the author distinguishes between geographical distance and psychological distance, and then also brings in concepts of causal and temporal distance to show the complex inter-relations between the various categories.

One of the key questions raised in the article is: ‘how can one say that wars are fought as a contest between military powers if killing a large number of members of another State merely requires pushing a button?’ The implication here, to me at least (as I have also suggested in my comments in other blogs), is a need to reimagine or reconstruct the concept of ‘warfare’ in the public consciousness. We seem stuck currently in a position whereby memories of the two world wars linger, and the public conceive of war as being fought on designated battlefields with easily recognisable sides.

While I agree with much of what the author says, where this article falls down, I think, is in the conclusion that ‘the cosmopolitan ideal of a shared humanity is [a] good starting point for a wider ethical debate on distance, technology, and the future of armed conflict.’ While I agree with the author’s stance in principle, his argument relies on both sides in any given conflict sharing the same ethical framework. As we have seen already with suicide bombings and other acts of terrorism, this is no longer an ‘even’ battlefield – nor indeed is it a battle fought between two clearly delineated sides. While such disparities exist, I find it hard to believe any sort of balance can be struck.

Mike Ryder, Lancaster University


I found this piece, and its discussion of different types of distance, both interesting and illuminating. I’ve spoken with a number of students recently about distance, and how it affects their feelings regarding their own decision-making and the consequences of it. I found it really interesting that a large proportion of students were quite accepting of the idea that moral distance makes one feel less responsible for something that happens. But many of the same students also wanted people held responsible for their actions regardless of that moral distance. So this gives us a strange situation in which people who feel no responsibility should be held responsible. I don’t think this position is unusual. In fact, I think most people around the world would agree with it, despite it being rather paradoxical.

It is clear that from a moral perspective, an accountability gap could be created. But, as ethics and morals are flexible and subjective, one could also argue that there is no moral accountability gap. Fortunately, law is more concrete. We do have legal rules on responsibility. We’ve seen that a number of autonomous vehicle manufacturers are going to take responsibility for their vehicles in self-driving modes. However, it is yet to be seen if autonomous weapon system manufacturers will follow this lead.

Joshua Hughes, Lancaster University


Update added 25/02/2019, written earlier

This short article explores the impact of the introduction of autonomous weapon systems on various forms of distance, be that geographical, psychological, causal or temporal distance. Contemporary drone warfare is given as an example of the way in which a new technology allows war to be conducted at an increased geographical distance, but the incidence of PTSD amongst drone pilots shows that the same is not true of the psychological distance. Leveringhaus focuses on the issues posed by the increase of causal distance in assigning blame for breaches of international humanitarian law. We are unlikely to see drones in the dock at The Hague any time soon, but who will be brought before the courts in the event of an AWS-committed war crime? The programmer of the software? This poses a challenge to the entire ethical framework of respect for individual rights, part of which is the promise ‘to hold those who violate these rights responsible for their deeds.’

Ben Goldsworthy, Lancaster University


Let us know what you think

Do previous instances of weapons regulation offer any useful concepts for governing lethal autonomous weapon systems?

Here is our first question on lethal autonomous weapon systems this month. If you have any thoughts about answers, let us know in the comments.


The question for me, at least, is whether or not we can draw parallels between regulation of the human and regulation of the machine. The problem here is that there are no clear and simple ways of holding a machine to account, so the questions of responsibility, and therefore of regulation, become problematic. We can hold a soldier to account for misusing a gun – we cannot do the same for a machine. For one thing, machines do not know, and cannot experience, the concept of human death, so how can we hold them to the same level of accountability when they cannot even understand the framework on which modern human ethics is built?

Mike Ryder, Lancaster University 



Recently, I read Steven Pinker’s The Better Angels of our Nature, in which he considers why violence has declined over the centuries. One part of the book looks at weapons of mass destruction. For Pinker, the main reason chemical, biological and nuclear weapons are not used regularly is not because of international law concerns around high levels of collateral damage, but because using them would break a taboo. Pinker suggests that the taboo is so powerful that using weapons of mass destruction is not even in the minds of military planners when considering war plans. Autonomous weapons have the potential to be as impactful as weapons of mass destruction, but without the horrendous collateral damage concerns. Would this create an equal taboo, based on the human unease at delegating lethal decision-making? I think a taboo would be created, but the likely reduction in collateral damage would make it weaker. Therefore, a taboo is unlikely to restrict any future use of autonomous weapons.

In terms of treaty-based regulation, having been at the meetings of experts on lethal autonomous weapon systems at the UN, I think any meaningful ban on these weapons is unlikely. However, in recent years a number of informal expert manuals have been created on air and missile warfare, naval warfare, and cyber warfare. They have generally been well received, and their recommendations followed. I could imagine a situation in the future where similar ‘road rules’ are developed for autonomous weapons, interpreting the requirements of the law of armed conflict and international human rights law for such systems. This could result in more detailed regulation, as there is less watering down of provisions by states who want to score political points rather than progress talks. We will have to wait and see if this will happen. 

Joshua Hughes, Lancaster University 



Let us know what you think

Haas and Fischer – The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order

This week we begin our discussions of autonomous weapon systems. Following on from the discussions of the Group of Governmental Experts at the UN last November, more talks are taking place in February and April this year. For those not aware, an autonomous weapon system is one which can select and engage targets without human intervention – think of a drone with the brain of The Terminator.

First, we are looking at ‘The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order’ by Michael Carl Haas and Sophie-Charlotte Fischer from Contemporary Security Policy, 38:2 (2017), 281–306. Feel free to check the article out and let us know what you think in the comments below.

Here’s what we thought:



I enjoyed this article, and the ways in which it seeks to engage with the future applications of AWS in what we might describe as ‘conventional’ wars, with the use of targeted killings or ‘assassinations’ by drone likely to become more common.

From my own research perspective, I am particularly interested in the authors’ approach to autonomy and autonomous thinking in machines (see 284 onwards). I agree with the authors that ‘the concept of “autonomy” remains poorly understood’ (285), but suggest that perhaps here the academic community has become too caught up in machinic autonomy. If we can’t first understand human autonomy, how can we hope to apply a human framework to our understanding of machines? This question seems to me to be one that has been under-represented in academic thinking in this area, and is one I may well have to write a paper on!

Finally, I’d like briefly to mention the question of human vs machinic command and control. I was interested to see that the authors suggest AWS might not become ubiquitous in ‘conventional’ conflicts when we consider the advantages and disadvantages of their use for military commanders (297). To me, there is a question here of at what point machinic intelligence or machine-thinking ‘trumps’ the human. Certainly our technology as it stands still puts the human as superior in many types of thinking, yet I can’t believe it will be too long before computers start to totally outsmart humans, such that this will no longer even be a question. There is then also the question of ‘training cost’. In a drawn-out conflict, what will be easier and cheaper to produce: a robot fighter that comes pre-programmed with its training and so on, or the human soldier who requires an investment of time and resources, and who may never quite take on his or her ‘programming’ to the same level as the machine? Something to think about, certainly…

Mike Ryder, Lancaster University



I quite liked this piece, as it is common to hear fellow researchers of autonomous weapons say that such systems will change warfare, but then provide no discussion of how this will happen. Fortunately, this paper provides exactly that discussion. I particularly liked the idea that the use of autonomous systems for ‘decapitation’ strikes against senior military, political, or terrorist leaders/influencers could reduce not only collateral damage overall and the number of friendly deaths, but also the level of destruction of a conflict in general. Indeed, I’ve heard a number of people suggest that present-day drones offer a chance at ‘perfect’ distinction, in that they are so precise that the person aimed at is almost always the person who dies, often with little collateral damage. It is usually poor intelligence analysis, resulting in the wrong person being targeted in the first place, that is responsible for the unfortunately high number of civilian deaths in the ‘drone wars’. Use of AI could rectify this, but the use of autonomous weapons could also reduce the need for substantial intelligence analysis if they were one day capable of identifying the combatant status of ordinary fighters, and of identifying specific high-level personalities through facial or iris recognition. If this becomes possible, autonomous weapons could have the strategic impact of a nuclear bomb against enemy fighters, without causing much collateral damage.

Joshua Hughes, Lancaster University


UPDATE: added 18th March 2019, written earlier

This article presents predictions on the impact of autonomous weapons on the future of conflict. Building on a ‘functional view’ of autonomy that distinguishes degrees of autonomy across different functional areas, such as ‘health management’, ‘battlefield intelligence’ and ‘the use of force’, the authors discuss the issues and incentives of applying different degrees to different functions. They also detail the US’ ongoing drone campaigns before extrapolating the trends seen within into a future of greater weapon autonomy. First, they see an increased focus on ‘leadership targeting’, believing that ‘autonomous weapons would be a preferred means of executing counter-leadership strikes, including targeted killings.’ Secondly, they propose such tactics as a necessary response to the resurgence of ‘hybrid warfare’, with ‘[a]ttacking leadership targets in-theatre…be[ing] perceived as a viable and effective alternative to an expansion of the conflict into the heartland of an aggressive state opponent’. The authors conclude with their belief that ‘advanced Western military forces’ “command philosophies” will militate against the employment of autonomous weapons, which require surrendering human control, in some types of targeted killing scenarios.

I found the article to have a rather unexpected utopian takeaway. Where a previous author proposed that a shift to swarm warfare would make ‘mass once again…a decisive factor on the battlefield’, this paper predicts the development of a more scalpel-like approach of targeted leadership killings. The thought of generals and politicians being made immediately responsible for their military adventures, rather than however many other citizens (and auxiliaries) they can place between themselves and their enemies, seems like a rather egalitarian development of statecraft. It reminded me, of all things, of the scene in Fahrenheit 9/11 in which the director asks pro-war congressmen to enlist their own children in the Army and is met with refusal. It’s easier to command others to fight and die on your and your government’s behalf, but the advent of the nuclear age marked the first time in which the generals had just as much ‘skin in the game’ as everyone else, and nukes remain unused. Perhaps this future of leadership targeting by tiny drones can achieve the same result, but without taking the rest of us along for the apocalyptic ride. The risk of a small quadcopter loaded with explosives flying through one’s office window seems like it would be a strong incentive for peacemaking, a potentially welcome by-product of the reduction of the ‘tyranny of distance’ (or, rather, the obviation of insulation) that the earlier author had discussed.

Ben Goldsworthy, Lancaster University


Let us know what you think in the comments below

Autonomy in Future Military and Security Technologies: Implications for Law, Peace, and Conflict

Three members of our group, along with other colleagues, took part in an international workshop at the Universitat de Barcelona in February 2017 titled ‘Sense and Scope of Autonomy in Emerging Military and Security Technologies’. Coming out of this, a compendium of research papers has been put together in order to offer a contribution to discussions at the Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the United Nations Office at Geneva, 13th-17th November 2017.

This compendium of articles is due to be published by the Richardson Institute at Lancaster University, UK. Due to technical reasons, the report is provisionally being hosted here in order that delegates at the GGE, and those interested in the subject of lethal autonomous weapon systems, may read the works whilst discussions in Geneva are taking place.

The compendium contains:

Formal presentation of the compendium

Milton Meza-Rivas, Faculty of Law at the University of Barcelona, Spain.

Some Insights on Artificial Intelligence Autonomy in Military Technologies

Prof. Dr Maite Lopez-Sanchez, Coordinator, Interuniversity Master in Artificial Intelligence, University of Barcelona, Spain

Software Tools for the Cognitive Development of Autonomous Robots

Dr. Pablo Jiménez Schlegl, Institute of Robotics & Industrial Informatics, Spanish National Research Council, Polytechnic University of Catalonia, Spain

What is Autonomy in Weapon Systems, and How Do We Analyse it? – An International Law Perspective

Joshua Hughes, University of Lancaster Law School and the Richardson Institute, Lancaster University, UK

Legal Personhood and Autonomous Weapons

Dr Migle Laukyte, Department of Private Law, University Carlos III of Madrid.

A Note on the Sense and Scope of ‘Autonomy’ in Emerging Military Weapon Systems and Some Remarks on the Terminator Dilemma

Maziar Homayounnejad, Dickson Poon School of Law, King’s College London, UK


The compendium is available here: Richardson Institute – Autonomy in Future Military and Security Technologies Implications for Law, Peace, and Conflict

A courtesy translation of the introduction which presents the articles is available here (in Spanish): Translation of the compendium presentation text in Spanish