Should robots be allowed to target people? Based on combatant status?

Here is our second question this month on autonomous weapon systems. For reasons of space I paraphrased it slightly in the title; the full question, as it went out to all network members, was:

If the technology within a lethal autonomous weapon system can comply with the law of armed conflict, should it be allowed to target people? Should it be able to target people based on their membership of a group, for example, membership of an enemy military or a rebel group?

Here’s what we thought:


This question poses a massive moral and ethical dilemma, and not just for autonomous weapon systems (AWS). Membership of any organisation, including, notably, the State, has always been problematic, but in a ‘traditional’ military setting we tend to work around this by drawing a clear distinction between those in uniform and those not. Of course this construct is undermined as soon as you introduce the partisan, or the non-uniformed fighter, and as we have seen in recent years, terrorist organisations seek to avoid marking their members at all. So there is the problem of identification to start with… But then things get trickier when you come to question the terms of membership, or the consent given by any ‘member’ of an organisation to be a part of said organisation, and quite what that membership entails.

Take citizenship, for example: we don’t formally ‘sign up’, but we are assumed to be a part of said organisation (i.e. the State), so we would be targets of the ‘group’ known as the State in the terms set by this question. Take this argument one step further and you could have, say, ‘members of the TTAC21 reading group’. At first glance, members of our reading group might be ‘legitimate’ targets; however, each of our ‘members’ has a different level of consent and participation within the group. Some, for example, have come along to meetings in person, or have Skyped in for an hour or two. Others have provided comment for the blog, while others are yet to contribute anything. Are each of these ‘members’ members to the same degree? How and why can, or indeed should, we compare any one member to another? And let’s not forget the question of motivation. Some of us are members because we are actively working in the field, while some of us have different levels of interest or motivation. Does that then mean that each of us should be tarred with the same brush and classified in the same way when it comes to targeting members of our specific group?

This question is far more complex than it seems!

Mike Ryder, Lancaster University

 


This question really gets to the nub of why some people are concerned with autonomous weapon systems. If something is possible, should we do it? At the recent Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the UN in November 2017, Paul Scharre put it something like this: If we could have a perfectly functioning autonomous weapon system in the future, where would we still want humans to make decisions?

It seems that most people do want human control over lethal decision-making, although some are willing to delegate this to a machine if it were to become a military necessity. However, many are dead-set against any such delegation. I think a major aspect of this is trust. Are we willing to trust our lives to machines? Many people are already doing so in prototype and beta-test self-driving cars, and in the process are also putting the lives of nearby pedestrians in the ‘hands’ of those vehicles. For many, this is unnerving. Yet we put our lives in the hands of fellow drivers every time we go out on the road. We all know this and, for the most part, are comfortable with it. Perhaps we will not be happy to delegate our transport to machines until we can trust them in the same way. I think if self-driving cars were shown to be functioning perfectly, people would begin to trust them.

With lethal autonomous systems, the stakes are much higher. A self-driving car may take the wrong turn; an autonomous weapon may take the wrong life. This is obviously a huge issue, and one that people may never become comfortable with. But here we are hypothetically considering systems that would function perfectly. I still think it will come down to whether people will trust a system to make the correct decision. And yet, there are still issues around whether a machine could ever comprehend every possible situation it could be in. An often-used example is an enemy soldier who has fallen asleep on guard duty. The law of armed conflict would allow combatants to kill this sleeping soldier simply for being a member of the enemy side. Yet it is difficult for us to accept such a killing when there is the possibility of capture. Capture would not be a legal requirement under the law of armed conflict, but it may be a moral desire. If the programming of autonomous weapons can go beyond the law and take ethical considerations into account as well, trust in the lethal decision-making capability of machines may grow, and society may eventually accept machines performing status-based targeting.

Joshua Hughes, Lancaster University


 

UPDATE: This entry added 04/03/2019

As Mike has said, the issue here boils down to how we would define ‘membership’, and the way it would be determined in the field. An autonomous weapon system would require some form of machine learning in order to distinguish between valid and non-valid targets based on the evidence it can gather in each case. Machine learning can either be supervised, where categories are provided and the algorithm attempts to determine which one best covers a given item, or unsupervised, where the algorithm groups items based on whichever characteristics it finds best distinguish them, and the categories emerge dynamically from this process of classification. Both methods are fraught with peril when applied to social media advertising, let alone the application of lethal force.
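To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn and invented toy data; the features and categories are hypothetical and bear no relation to any real system): supervised learning reproduces categories a human has defined in advance, while unsupervised learning invents groupings whose meaning a human must interpret afterwards.

```python
# Illustrative sketch only: toy data and hypothetical features, no real system implied.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Pretend each row is a set of observed features about a person or object.
observations = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])

# Supervised: a human supplies the categories up front (here just 0 and 1),
# and the model learns to reproduce that labelling on new observations.
labels = np.array([0, 0, 1, 1])
supervised = DecisionTreeClassifier().fit(observations, labels)
print(supervised.predict([[0.15, 0.85]]))  # -> [0], the human-defined category

# Unsupervised: no labels are given; the algorithm invents its own groupings,
# and a human must work out afterwards what, if anything, each cluster means.
unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)
print(unsupervised.labels_)  # e.g. [1 1 0 0]; the cluster numbers carry no inherent meaning
```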

Take a supervised training regime, where the AWS would be provided with a list of criteria that would authorise the use of force, such as a list of proscribed organisations and their uniforms to compare against, or a dataset of enemy combatants’ faces to perform facial recognition on. The application of lethal force would be only as good as the intel, and the experience of US no-fly lists shows just how much faith one should have in that. If the model is insufficiently precise (e.g. ‘apply lethal force if target is holding a weapon’), then all of a sudden a child with a toy gun is treated as an attacking Jihadi, much to the consternation of the child’s former parents. In an effort to avoid these false positives, one may be tempted to go too far the other way, handicapping the rapid analytical and decision-making powers that are often cited as an advantage of AWSes with over-restrictive classifiers. If a potential threat emerges that does not fit into any preordained model, such as a non-uniformed combatant, it will be ignored: a false negative.
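As a rough illustration of that trade-off (the scenario, ‘rules’ and data below are entirely invented for this sketch and grossly simplified), a loose rule minimises missed threats at the cost of false positives, while a strict rule does the opposite:

```python
# Hedged, abstract illustration of the false-positive / false-negative trade-off.
# All cases and rules are invented and grossly simplified for this sketch.
cases = [
    # (carried_object, wearing_uniform, is_actually_a_combatant)
    ("rifle",   True,  True),
    ("rifle",   False, True),   # the non-uniformed fighter
    ("toy_gun", False, False),  # the child with a toy
    ("camera",  False, False),
]

def loose_rule(obj, uniform):
    # "Engage if the target is holding anything gun-shaped."
    return obj in ("rifle", "toy_gun")

def strict_rule(obj, uniform):
    # "Engage only if armed AND uniformed."
    return obj == "rifle" and uniform

for name, rule in [("loose", loose_rule), ("strict", strict_rule)]:
    false_positives = sum(rule(o, u) and not c for o, u, c in cases)
    false_negatives = sum(not rule(o, u) and c for o, u, c in cases)
    print(f"{name}: false positives={false_positives}, false negatives={false_negatives}")

# loose:  false positives=1, false negatives=0  (the toy gun is engaged)
# strict: false positives=0, false negatives=1  (the non-uniformed fighter is ignored)
```

Neither rule is ‘correct’: tightening one kind of error loosens the other, which is exactly the dilemma described above.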

An unsupervised training regime would be just as dangerous, if not more so. As Shaw points out in his discussion of ‘dividuals’, this would represent a sea change in the legal norms governing force. Not only would decisions be made based solely on the aggregate behaviour of a target, without oversight or appreciation of the wider context, but we would also be offloading onto opaque algorithms the moral responsibility to display the reasoning behind such actions. Unsupervised training is also prone to misclassification (consider the work of Samim Winiger) and to intentional manipulation, as in the case of the Microsoft AI that was reduced to a Holocaust-denying Trump supporter within a day of being released onto Twitter. Perhaps in the future we can all look forward to a new Prevent strategy aimed at countering the growing threat of AI radicalisation.
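On the manipulation point, here is a purely illustrative sketch (toy numbers and scikit-learn’s KMeans; nothing here is drawn from any real system) of how a small number of coordinated, adversarial inputs can drag what an unsupervised model treats as a ‘normal’ pattern of behaviour:

```python
# Illustrative sketch of data poisoning in an unsupervised model. Toy numbers only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal_behaviour = rng.normal(loc=0.0, scale=1.0, size=(200, 2))      # benign activity
poison = np.full((20, 2), 8.0) + rng.normal(scale=0.1, size=(20, 2))  # coordinated fake signals

clean_model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(normal_behaviour)
poisoned_model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    np.vstack([normal_behaviour, poison]))

print(clean_model.cluster_centers_)     # both centres sit inside the benign cloud
print(poisoned_model.cluster_centers_)  # one centre is dragged out towards the injected points
```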

Ben Goldsworthy, Lancaster University


What do you think?
