What do you expect to be the most important trend in the future of technology, terrorism, and/or armed conflict in the rest of this century?

Here, we come to our final week of full discussions. After more than two years of work, the Technology, Terrorism, and Armed Conflict in the 21st Century research network is coming to the end of its regular material. It’s been immensely interesting to read, watch, think about, and discuss a variety of topics and issues that we’re all interested in. But the time to move on has come. Beyond technology, terrorism, and armed conflict there are more concepts to discuss and ideas to dream up. So, that’s what Mike and I (Josh) are going to do. We’ve started a podcast as a vehicle for us to discuss wider things. You’re more than welcome to come along for the ride. We’ll publicise it soon.

The TTAC21 website will remain in place for at least another year. A few members who have joined later on have already sent in comments for previous posts. I’ll update the original posts with them in the next few weeks. 

If you have something new to say on the issues covered by the network, you are more than welcome to write a blog to be published here. Or if you want to comment on any of the pieces we’ve previously looked at, either pop your ideas in the comments box at the bottom or e-mail them over. 

The network itself will now become a mailing list for sharing interesting pieces, calls for applications/papers, and that sort of thing. We’ve amassed a really great group of people in TTAC21, so it would seem a waste to not keep in touch. If you aren’t on the mailing list but would like to be, just send an email and we will add you to it. 

Before we look at our answers to the question, I thought I would explain the pictures in this post. The featured image at the top of the page is one of Jean-Marc Côté’s pieces from his ‘En L’An 2000’ (In the Year 2000) body of work. This was a series of postcards drawn for the 1900 World Exhibition in Paris, showing visions of 100 years into the future. Fittingly for us, several relate to military operations, and I’ll put a few in this post. If you want to see them all, they’re available here.

[Image: France in the XXI Century – Military cyclists]

Now, for the final time, let’s see what we thought … 


[Image: France in the XXI Century – War cars]

TTAC21 has been going now for over 18 months, and in that time it’s given us some fascinating discussion points. I for one have certainly benefitted from being a part of the network. To respond to this, the final ‘official’ question put to the group, I’m not sure I can come up with a single answer. Since this research group came into being, we’ve already seen examples of drones causing disruption in a civilian setting, and this sort of thing will only become more prevalent over the coming years. How the rise of drones will affect the military setting, however, is another matter entirely. From my own perspective, I imagine the major western powers will continue investing over the odds in overly expensive, overly complex systems such as Reaper and Predator, while smaller players start to make use of the disruptive power of drones to take on major powers at their own game. As Josh and I have said many times before, it can’t be long before people start strapping explosives to the sorts of drones that can be bought in shops. This will pose a massive problem for law enforcement agencies and for military powers, as enemies and criminals will both have access to the sort of capabilities that were, for a short while, solely the preserve of the major players. Fighting drone crime and drone terrorism will certainly prove a major challenge in the years to come.

But while drones are certainly one of the most important trends, I can’t help but think cybercrime will also continue to prove a problem – in particular in relation to electric and self-driving cars. If hackers can already break into certain cars via their stereo systems and advanced on-board electronics, we can only imagine what might happen when self-driving cars become ubiquitous. Fare dodging and going ‘off grid’ will be the least of the authorities’ problems, as criminals may be able to kidnap individuals remotely, or even commit murder or mass murder without ever having to enter the vehicle they intend to use as a weapon. And that’s just the tip of a very big cybercrime-related iceberg!

Mike Ryder, Lancaster University 


[Image: France in the XXI Century – War plane]

Perhaps I’ve been thinking about Paul Virilio a bit too much recently, but I think the biggest trend in the future of technology, terrorism, and armed conflict will be speed. Faster computing allows more computation, and more complex computation, to take place, leading to technological advances. Lots of people talk of artificial intelligence as being the future, yet AI is just computer programmes performing tasks that would otherwise require human-level cognition. As Alan Turing showed (in the 1930s!), any universal computer can compute anything that is computable. So, what allows AI to be realised is the arrival of the computing speed necessary to make these programmes work in an acceptable time.

In terrorism, we see a battle between terrorist plotters trying to hide their activities and security services trying to investigate them. A terrorist can be as discreet as possible, but they will almost inevitably leave some clues. Thus, it becomes a race for the security services to find those clues and stop a plot before the terrorists can put their plan into motion. Therefore, the faster a terrorist can move, the less chance they have of being caught.

When it comes to warfare, the most significant trend we’ve seen in recent years is the revival of hybrid or non-linear warfare by Russia. Often, this involves changes in tactics or strategy to overwhelm the enemy in unexpected ways. For example, an adversary prepared for a typical military-on-military confrontation would be dealt severe blows if a force could melt into the civilian population only to pop up and carry out major attacks at irregular intervals. The sooner one force can adopt vastly different tactics to outwit their enemy, the more advantage they can gain. 

It’s also possible to conceptualise hybrid warfare as entailing temporary allegiances against common enemies. As such, the sooner allegiances can be made, the more force can be applied, and the more quickly, than would otherwise be possible. Plus, once those allegiances have run their course, the sooner one party can betray the other, the more advantage they can gain over them. Thus, speed is also a key concept in late modern warfare – and that is all before we even really look at the ever-increasing operational tempo of modern combat!

In conclusion then, speed seems to be the basis of all major trends happening at the moment. I expect it to continue into the future. As it is the underlying trend, perhaps speed would be better conceptualised as a ‘meta-trend’? 

Joshua Hughes, Lancaster University  


So, that’s what we thought, and that’s it. What do you think? 

Mehlman and Li – Ethical, Legal, Social, and Policy Issues in the Use of Genomic Technology by the U.S. Military

This week we look at the use of DNA technologies in an enthralling article by Maxwell J. Mehlman and Tracy Yeheng Li, ‘Ethical, Legal, Social, and Policy Issues in the Use of Genomic Technology by the U.S. Military’, Case Western Reserve Journal of International Law 47, no. 1 (2015): 115–65, available here.

Here’s what we thought. Let us know what you think in the comments below, or send us a message to join the network.


This long but incredibly interesting paper explores many of the bioethical issues associated with the use of genetic and genomic science by the US military. Such is the scope of the paper that there are almost too many points to discuss in a short blog, so I’d like to focus on the question of genomic enhancement (pp. 161–164). While I am sure many people can agree that genomic enhancement has great potential to improve the effectiveness of warfighters, I wonder what the implications will be for soldiers once their term of service comes to an end. The authors don’t address this question, and it remains for me perhaps the biggest ‘elephant in the room’ when we come to consider bio-technology and the military. While I agree there are certainly distinctions to be made between the civilian and military paradigms when it comes to ethics and responsibility, we should not forget that the two worlds are of course interlinked. What this means on a practical level is that any civilian can potentially become an enlisted member of the military, and of course any member of the military is always already a member of the civilian world as well.

My concern here is that by introducing bio-enhancements to the military (which we must assume will slowly filter through to the civilian world) we will in effect be creating a new category of the human, entrenching difference within human society. Indeed, we should ask: are these ‘enhanced’ soldiers even human at all? This question becomes even more significant when we consider the authors’ claim that the most powerful enhancements may well need to be engineered at the embryonic stage, thus leading to the possibility that we will ‘lab grow’ our future soldiers. If they are lab grown and effectively enlisted from birth, what happens when their term of service ends? Does it ever end? Or will they rather be put down, like a dangerous dog, when they no longer demonstrate value for the military machine?

Mike Ryder, Lancaster University


This article was absolutely fascinating. However, it made me think of things far closer to home than the US military. For a while, I have been considering having my DNA sequenced as a shortcut to find out how I will react to different physical fitness training programmes (and in the vain hope that it will reveal I have the genetic potential to be world-beating at an obscure sport I’ve never tried!). At least one of the companies offering this also looks at corporate wellbeing, allowing employees to volunteer to have their DNA sequenced so that their employer can optimise their staff’s efficacy and work plans. What this article made me think is: why not use DNA sequencing to optimise military personnel? We know that all people have different skills and aptitudes, so why not inform commanders, through genetics, about which of their subordinates will be best for different tasks? Of course, this does not incorporate the impact that the environment has upon individuals, so it is not foolproof. But, if DNA sequencing can help troops train and perform better, then it is surely beneficial to military effectiveness. However, it is currently expensive. Perhaps when prices drop it will be worth it for militaries to test all their personnel. At the very least it will be less problematic than enabling troops to use performance-enhancing drugs.

Joshua Hughes, Lancaster University

Krieg and Rickli – Surrogate warfare: the art of war in the 21st century?

This week we are looking at the topic of Surrogate Warfare in an article by Andreas Krieg and Jean-Marc Rickli. The article is available here. The piece covers ideas of surrogacy in warfare through all sorts of interesting means, from mercenaries and militias to drones and satellites. We hope you enjoy the article. Let us know what you think in the comments.


In this article, the authors note the modern tendency towards ‘surrogate warfare’, in which States externalise the burden of war in order to distance themselves from the violence exercised by their surrogates (5). While the authors argue that surrogate warfare is ‘probably not the panacea for fighting wars in the twenty-first century’ (15), they do concede that surrogate warfare is going to become more common as risks and conflicts are not likely to recede any time soon (15).

I found this article interesting, though somewhat lacking in analysis, and I was left wondering how much of it is really ‘new’. Furthermore, I struggle to find the actual argument put forward by the authors, who focus primarily on explaining what surrogate warfare is and why it is so prevalent. They don’t propose any solutions, nor even any remedies or genuine responses – nor do they make a sufficiently strong case as to why surrogate warfare might be a bad thing. Surrogate warfare may not be the panacea, but then the world is a very different place from what it was in the time of Carl von Clausewitz.

Mike Ryder, Lancaster University


I thought this article was a little misplaced: whilst it was really interesting, it did not seem to fit well as an academic journal article. As it gives a very thorough overview of states using surrogates in their acts of war, it seemed it would be a better fit as a textbook chapter. I struggled to find anything that felt truly ‘new’ in this article; it felt as though a history lesson on state use of mercenaries and militias had been put together with some thoughts on modern warfare technologies and PMCs and given a gloss of conceptual paint under the term ‘surrogate warfare’. I’m sure this would be really interesting to scholars of security and war studies who want a new spin linking current conceptions of PMCs to historical views of mercenaries, but it didn’t really chime with me in any way. That said, if I were teaching on mercenaries and PMCs, I would definitely recommend this to my students as a primer document full of great information.

Joshua Hughes, Lancaster University


UPDATE: Added 22/04/2019

In this paper, the authors argue that the Westphalian era of nation state sovereignty is over, and that the motif of 21st-century war is the practice by governments and other groups of ‘surrogate warfare’ as a means of distancing themselves from their employment of force around the world, whilst still allowing them to achieve their geopolitical aims. The authors use ‘surrogate warfare’ as an umbrella term for ‘all forms of externalization of the burden of war to supplementary as well as substitutionary forces and platforms’, including (but not limited to) the Cold War staple of the ‘proxy war’.

Surrogate warfare is not new. ‘Since Ancient times, empires and states have entrusted auxiliaries, substitutes and proxies, at least partially, with the execution of military functions on their behalf.’ Arguably, the history stretches even further back – the God of the Old Testament, despite his omnipotence, utilised the Israelites to achieve his geopolitical aims of clearing the Promised Land. It may well be that the Westphalian period was but a historical blip, although the paper’s authors argue that some elements of our contemporary surrogate wars are unique: they are uniquely ‘globalized, privatized, securitized and mediatized’.

The authors’ conclusions are well argued. Though the line that ‘surrogate warfare is a return to… the cabinet wars of the medieval and early modern ages’ reminded me of a previous paper’s talk of using royal marriage to ensure peace, and makes me wonder if some political scientists are looking a little too fixedly backwards, the four elements proposed as unique to 21st-century war are all certainly present, although how unique they are is less certain. For example, one could argue that the ability to control the success or failure of operations through the successful manipulation of the media was perfected with Hearst and the Spanish-American War of 1898, and that what we see now is a difference of degree rather than kind. Most interesting is the ‘securitised’ aspect, as the authors write that ‘threats have given way to risks as the drivers of security policies in the “global North”’.

The reality of surrogate war can be best shown with a recent example. President Trump made waves with the surprise announcement of the impending withdrawal of US troops from Syria. However, this amounts to only around 2,000 soldiers. Remaining in Syria will be the 60-75,000-strong Syrian Democratic Forces, primarily the Kurdish forces who were instrumental in turning the tide against IS. Also presumably remaining will be some 5,500 US contractors, of whom almost 3,000 are US citizens. On the one hand, Trump has ordered the withdrawal of US troops and declared the war against IS over. On the other, he’s only moving some 2% of the US’ overall force, including its surrogates, out of theatre.

Ben Goldsworthy, Lancaster University


Let us know what you think.

Shaw – Robot Wars: US Empire and Geopolitics in the Robotic Age

Here’s our second article under discussion this month, Robot Wars: US Empire and Geopolitics in the Robotic Age by Ian Shaw. This work follows on from his great book Predator Empire, which is not only a well-argued piece on the technology-based containment of the globe by the US, but also includes magnificent accounts of the history of targeted killing, amongst other things.

 

Here’s what we thought of his article:


This reading group has been going for almost nine months now, and in that time it’s fair to say we’ve read a good deal on drone warfare and autonomous weapons. From all of our reading thus far, I’m not sure that this article actually says anything specifically new about the field, or indeed offers any sort of radical insight. As is typical for a piece grounded (forgive the pun) in the Geographical and Earth Sciences, the paper is awash with ‘topographies’ and ‘spaces’ – and yet drone warfare has been around for quite some time. And of course, let us not forget that battlefields are constantly shifting spaces, and this is not the first shift in the ‘landscape’ of warfare, as the invention of the tank, the aeroplane and the submarine have already gone to show. In this sense then, I’m not really sure how much this paper is adding to our understanding of drones, or drone warfare – nor indeed empire and geopolitics.

The one thing I did find interesting however, in a non-TTAC21 specific context, was this notion of robots as ‘existential actors’ (455), and autonomy then as an ‘ontological condition’. Again, though I don’t think this is anything new per se, I find it interesting that now we are starting to see a shift in the language around drones, as other disciplines are slowly getting to grips with the impact of drones on our conception of space and the relationship between the human and the machine.

Mike Ryder, Lancaster University


I thought this article was interesting, and I liked the reconceptualization of various aspects of targeted killing, modern war, and robotic conflict into abstract geopolitical ideas. However, the part I found most interesting was Shaw’s use of Deleuze’s notion of the dividual, where life is signified by digital information rather than something truly human. As Shaw himself notes, in signature strikes by remote-controlled drones the targets are dividuals who simply fit the criteria of a terrorist pattern of life, for example. With future autonomous weapons, killing by criteria is likely to be the same, but a lethal decision-making algorithm is likely to determine all targets based on criteria, whether something simple like an individual’s membership of an enemy armed force, or working out whether patterns of life qualify an individual as a terrorist. In this sense, not only do the targets become dividuals, as they are reduced to data points picked up by sensors, but those deploying autonomous weapons become dividuals too, as their targeting criteria – and therefore their political and military desires – become algorithmic data also. It seems that one of the effects of using robotics is not only the de-humanising of potential targets, but also the de-humanising of potential users.

Joshua Hughes, Lancaster University


UPDATE: added 11th March 2019, written earlier.

I second Mike’s criticisms—the author uses a tremendous amount of verbiage to ultimately say very little. Buried beneath all the talk of human-machine teaming ‘actualiz[ing] a set of virtual potentials and polic[ing] the ontopolitical composition of worlds’ and ‘aleatory circulations of the warscape’ are three predictions about a potential future world order. First, the author suggests that swarms of autonomous military drones will make ‘mass once again…a decisive factor on the battlefield’. Secondly, they describe the co-option of the US’ global network of military bases into a planetary robotic military presence called ‘Roboworld’, which aims ‘to eradicate the tyranny of distance by contracting the surfaces of the planet under the watchful eyes of US robots’. Finally, the employment of AWS will fundamentally change the nature of the battle space as, ‘[r]ather than being directed to targets deemed a priori dangerous by humans, robots will be (co-)producers of state security and non-state terror’, ushering in an ‘age of deterritorialized, agile, and intelligent machines’.

Josh has already mentioned the idea of people being targeted on a dividual basis, but I found the above mention of ‘deterritorialisation’, along with the phrase ‘temporary autonomous zone of slaughter’, particularly interesting, owing to the latter phrase’s anarchist pedigree. The author’s comments about the ‘ontological condition’ of robots notwithstanding, AWSes are unlikely to be considered citizens of their respective nations any time soon. As they fight one another at those nations’ behest, but without any personal stake in the outcomes, we see a form of conflict that is perhaps fundamentally not as new as it is often made out to be, but rather a modern reincarnation of the mercenary armies of the past or, even, of some sort of gladiatorial combat.

Ben Goldsworthy, Lancaster University


What do you think?

Should robots be allowed to target people? Based on combatant status?

Here is our second question this month on autonomous weapon systems. For reasons of space, I paraphrased it slightly in the title. Here is the full question which went out to all network members:

If the technology within a lethal autonomous weapon system can comply with the law of armed conflict, should they be allowed to target people? Should they be able to target people based on their membership of a group, for example, membership of an enemy military, or a rebel group? 

Here’s what we thought:


This question poses a massive moral and ethical dilemma, and not just for autonomous weapon systems (AWS). Membership of any organisation, including, notably, the State, has always been problematic, but in a ‘traditional’ military setting we tend to work around this by drawing a clear distinction between those in uniform and those not. Of course this construct is undermined as soon as you introduce the partisan, or the non-uniformed fighter, and as we have seen in recent years, terrorist organisations seek to avoid marking their members at all. So there is the problem of identification to start with… But then things get more tricky when you come to question the terms of membership, or the consent given by any ‘member’ of an organisation to be a part of said organisation, and quite what that membership entails.

Take citizenship for example: we don’t formally ‘sign up’, but we are assumed to be a part of said organisation (i.e. the State), so we would be targets of the ‘group’ known as the State in the terms set by this question. Take this argument one step further and you could have, say, ‘members of the TTAC21 reading group’. On first glance, members of our reading group might be ‘legitimate’ targets; however, each of our ‘members’ has a different level of consent and participation within the group. Some, for example, have come along to meetings in person, or have Skyped in for an hour or two. Meanwhile, others have provided comment for the blog, while others are yet to contribute anything. Are each of these members ‘members’ on the same level? How and why can, or indeed should, we compare any one member to another? And let’s not forget the question of motivation. Some of us are members because we are actively working in the field, while some of us have different levels of interest or motivation. Does that then mean that each of us should be tarred with the same brush and classified in the same way when it comes to targeting members of our specific group?

This question is far more complex than it seems!

Mike Ryder, Lancaster University

 


This question really gets to the nub of why some people are concerned with autonomous weapon systems. If something is possible, should we do it? At the recent Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the UN in November 2017, Paul Scharre put it something like this: If we could have a perfectly functioning autonomous weapon system in the future, where would we still want humans to make decisions?

It seems that most people do want human control over lethal decision-making, although some are willing to delegate this to a machine if it were to become a military necessity. However, many are dead set against any such delegation. I think a major aspect of this is trust. Are we willing to trust our lives to machines? Many people are already doing so in prototype and beta-test self-driving cars, and in doing so are also putting the lives of nearby pedestrians in the ‘hands’ of these cars. For many, this is unnerving. Yet, we put our lives in the hands of fellow drivers every time we go out on the road. We all know this, and we are all comfortable with this fact. Perhaps we will not be happy to delegate our transport to machines until we can trust them. I think if self-driving cars were shown to be functioning perfectly, people would begin to trust them.

With lethal autonomous systems, the stakes are much higher. A self-driving car may take the wrong turn; an autonomous weapon may take the wrong life. This is obviously a huge issue that people may never become comfortable with. But here we are hypothetically considering systems which would function perfectly. I still think it will come down to whether people will trust a system to make the correct decision. And yet, there are still issues around whether a machine could ever comprehend every possible situation it could be in. An often-used example is an enemy soldier who has fallen asleep on guard duty. The law of armed conflict would allow combatants to kill this sleeping soldier simply for being a member of the enemy side. Yet, it is difficult for us to accept this when there is the possibility of capture. Capture would not be a legal requirement under the law of armed conflict, but it may be a moral desire. If the programming of autonomous weapons can go beyond the law to take ethical considerations into account as well, trust in the lethal decision-making capability of machines may grow, resulting in society being comfortable with machines performing status-based targeting.

Joshua Hughes, Lancaster University


 

UPDATE: This entry added 04/03/2019

As Mike has said, the issue here boils down to how we would define ‘membership’, and the way it would be determined in the field. An autonomous weapon system would require some form of machine learning in order to delineate between valid and non-valid targets based on the evidence it can gather in each case. Machine learning can either be supervised, where categories are provided and the algorithm attempts to determine which one best covers a given item, or unsupervised, where the algorithm groups items based on whichever characteristics it finds best distinguish them, and the categories emerge dynamically from this process of classification. Both methods are fraught with peril when applied to social media advertising, let alone the application of lethal force.

Take a supervised training regime, where the AWS would be provided with a list of criteria that would authorise the use of force, such as a list of proscribed organisations and their uniforms to compare against, or a dataset of enemy combatants’ faces to perform facial recognition on. The application of lethal force would only be as good as the intel, and the experience of US no-fly lists shows just how much faith one should have in that. If the model is insufficiently precise (e.g. ‘apply lethal force if target is holding a weapon’), then all of a sudden a child with a toy gun is treated as an attacking jihadi, much to the consternation of its former parents. In an effort to avoid these false positives, one may be tempted to go too far the other way, handicapping the rapid analytical and decision-making powers that are often cited as an advantage of AWSes with over-restrictive classifiers. If a potential threat emerges that does not fit into any preordained model, such as a non-uniformed combatant, it will be ignored—a false negative.
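To make the supervised case concrete, here is a minimal, purely illustrative sketch in Python. The features, data and labels are all hypothetical, and this is not a model of any real targeting system; it simply shows how an over-broad ‘holds a weapon’ criterion produces exactly the false positive described above.

```python
# Toy supervised classifier: a minimal sketch with hypothetical features
# ("holds a weapon-shaped object", "wears a known enemy uniform") and
# invented training data, used only to illustrate the false-positive problem.
from sklearn.tree import DecisionTreeClassifier

# Each row: [holds_weapon_shaped_object, wears_enemy_uniform]
X_train = [
    [1, 1],  # armed, uniformed combatant
    [1, 1],
    [0, 1],  # unarmed, uniformed combatant
    [1, 0],  # armed, non-uniformed fighter
    [0, 0],  # unarmed civilian
    [0, 0],
]
y_train = [1, 1, 1, 1, 0, 0]  # 1 = "valid target" under the toy criteria

clf = DecisionTreeClassifier().fit(X_train, y_train)

# A child with a toy gun presents the same feature vector as an armed,
# non-uniformed fighter, so the imprecise criterion yields a false positive.
print(clf.predict([[1, 0]]))  # -> [1]
```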

An unsupervised training regime would be just as dangerous, if not more so. As Shaw points out in his discussion of ‘dividuals’, this would represent a sea change in the legal norms governing force. Not only would decisions be made based solely on the aggregate behaviour of a target, without oversight or appreciation of the wider context, but we would be offloading the moral responsibility to display the reasoning behind such actions onto opaque algorithms. Unsupervised training is also prone to misclassification—consider the work of Samim Winiger—and intentional manipulation—as in the case of the Microsoft AI that was reduced to a Holocaust-denying Trump supporter within a day of being released onto Twitter. Perhaps in the future we can all look forward to a new Prevent strategy aimed at countering the growing threat of AI radicalisation.
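The unsupervised case can be sketched just as briefly. Again, this is only a toy example with invented ‘pattern of life’ numbers and ordinary k-means clustering, not any real system; the point is that the groupings arrive with no human-readable rationale attached.

```python
# Toy unsupervised grouping: k-means receives only aggregate behavioural
# features (hypothetical: calls per day, km travelled per week) and invents
# its own two clusters. Nothing in the output explains *why* an individual
# was placed in one group rather than the other.
from sklearn.cluster import KMeans

pattern_of_life = [
    [30, 5],    # many calls, little travel
    [28, 4],
    [3, 120],   # few calls, long journeys
    [2, 150],
    [25, 8],
]
labels = KMeans(n_clusters=2, n_init=10).fit_predict(pattern_of_life)
print(labels)  # e.g. [0 0 1 1 0] -- cluster labels with no rationale attached
```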

Ben Goldsworthy, Lancaster University


What do you think?

Leveringhaus – Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict

This week we are considering Distance, weapons technology and humanity in armed conflict from the Autonomous Weapons mini-series over on the Humanitarian Law & Policy blog from the International Committee of the Red Cross. In it, the author discusses how distance can affect moral accountability, with particular focus on drones and autonomous weapons. Please take a look yourself, and let us know what you think in the comments below.

 


This blog offers interesting insight into concepts of ‘distance’ in warfare. In it, the author distinguishes between geographical distance and psychological distance, and also then brings in concepts of causal and temporal distance to show the complex inter-relations between the various categories.

One of the key questions raised in the article is: ‘how can one say that wars are fought as a contest between military powers if killing a large number of members of another State merely requires pushing a button?’ The implication here, to me at least (as I have also suggested in my comments in other blogs), is a need to reimagine or reconstruct the concept of ‘warfare’ in the public consciousness. We seem stuck currently in a position whereby memories of the two world wars linger, and the public conceive of war as being fought on designated battlefields with easily recognisable sides.

While I agree with much of what the author says, where this article falls down I think is in the conclusion that ‘the cosmopolitan ideal of a shared humanity is good starting point for a wider ethical debate on distance, technology, and the future of armed conflict.’ While I agree with the author’s stance in principle, his argument relies on both sides in any given conflict sharing the same ethical framework. As we have seen already with suicide bombings and other acts of terrorism, this is no longer an ‘even’ battlefield – nor indeed is it a battle fought between two clearly delineated sides. While such disparities exist, I find it hard to believe any sort of balance can be struck.

Mike Ryder, Lancaster University

 


 

I found this piece, and its discussion of different types of distance, both interesting and illuminating. I’ve spoken with a number of students recently about distance, and how it affects their feelings regarding their own decision-making and its consequences. I found it really interesting that a large proportion of students were quite accepting of the idea that moral distance makes one feel less responsible for something that happens. But many of the same students also wanted people held responsible for their actions regardless of that moral distance. So this gives us a strange situation where people who feel no responsibility should be held responsible. I don’t think this position is unusual. In fact, I think most people around the world would agree with it, despite it being rather paradoxical.

It is clear that from a moral perspective, an accountability gap could be created. But, as ethics and morals are flexible and subjective, one could also argue that there is no moral accountability gap. Fortunately, law is more concrete. We do have legal rules on responsibility. We’ve seen that a number of autonomous vehicle manufacturers are going to take responsibility for their vehicles in self-driving modes. However, it is yet to be seen if autonomous weapon system manufacturers will follow this lead.

Joshua Hughes, Lancaster University


Update added 25/02/2019, written earlier

This short article explores the impact of the introduction of autonomous weapon systems on various forms of distance, be that geographical, psychological, causal or temporal. Contemporary drone warfare is given as an example of the way in which a new technology allows war to be conducted at an increased geographical distance, but the incidence of PTSD amongst drone pilots shows that the same is not true of the psychological distance. Leveringhaus focuses on the issues posed by the increase of causal distance in assigning blame for breaches of international humanitarian law. We are unlikely to see drones in the dock at The Hague any time soon, but who will be brought before the courts in the event of an AWS-committed war crime? The programmer of the software? This poses a challenge to the entire ethical framework of respect for individual rights, part of which is the promise ‘to hold those who violate these rights responsible for their deeds.’

Ben Goldsworthy, Lancaster University


Let us know what you think

Do previous instances of weapons regulation offer any useful concepts for governing lethal autonomous weapon systems?

Here is our first question on lethal autonomous weapon systems this month. If you have any thoughts about answers, let us know in the comments.


The question for me at least is whether or not we can draw parallels between regulation of the human and regulation of the machine. The problem here is that there are no clear and simple ways of holding a machine to account, so the questions of responsibility, and therefore of regulation, become problematic. We can hold a soldier to account for misusing a gun – we cannot do the same for a machine. For one thing, machines do not know, and cannot experience, the concept of human death, so how can we hold them to the same level of accountability when they cannot even understand the framework on which modern human ethics is built?

Mike Ryder, Lancaster University 

 


Recently, I read Steven Pinker’s The Better Angels of our Nature, in which he considers why violence has declined over the centuries. One part of the book looks at weapons of mass destruction. For Pinker, the main reason chemical, biological and nuclear weapons are not used regularly is not international law concerns about high levels of collateral damage, but rather that using them would break a taboo. Pinker suggests that the taboo is so powerful that using weapons of mass destruction is not even in the minds of military planners when considering war plans. Autonomous weapons have the potential to be as impactful as weapons of mass destruction, but without the horrendous collateral damage concerns. Would this create an equal taboo, based on the human unease at delegating lethal decision-making? I think a taboo would be created, but the likely reduction in collateral damage would make it weaker. Therefore, a taboo is unlikely to restrict any future use of autonomous weapons.

In terms of treaty-based regulation, having been at the meetings of experts on lethal autonomous weapon systems at the UN, I think any meaningful ban on these weapons is unlikely. However, in recent years a number of informal expert manuals have been created on air and missile warfare, naval warfare, and cyber warfare. They have generally been well received, and their recommendations followed. I could imagine a situation in the future where similar ‘road rules’ are developed for autonomous weapons, interpreting the requirements of the law of armed conflict and international human rights law for such systems. This could result in more detailed regulation, as there is less watering down of provisions by states who want to score political points rather than progress talks. We will have to wait and see if this will happen. 

Joshua Hughes, Lancaster University 


 

Let us know what you think

Haas and Fischer – The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order

This week we begin our discussions of autonomous weapon systems. Following on from the discussions of the Group of Governmental Experts at the UN last November, more talks are taking place in February and April this year. For those not aware, an autonomous weapon system is one which can select and engage targets without human intervention – think of a drone with the brain of The Terminator.

First, we are looking at ‘The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order’ by Michael Carl Haas and Sophie-Charlotte Fischer from Contemporary Security Policy, 38:2 (2017), 281–306. Feel free to check the article out and let us know what you think in the comments below.

Here’s what we thought:

 


I enjoyed this article, and the ways in which it seeks to engage with the future applications of AWS in what we might describe as ‘conventional’ wars, with the use of targeted killings or ‘assassinations’ by drone likely to become more common.

From my own research perspective, I am particularly interested in the authors’ approach to autonomy and autonomous thinking in machines (see 284 onwards). I agree with the authors that ‘the concept of “autonomy” remains poorly understood’ (285), but suggest that perhaps here the academic community has become too caught up in machinic autonomy. If we can’t first understand human autonomy, how can we hope to apply a human framework to our understanding of machines? This question, to me, seems to be one that has been under-represented in academic thinking in this area, and is one I may well have to write a paper on!

Finally, I’d like to briefly mention the question of human vs machinic command and control. I was interested to see that the authors suggest AWS might not become ubiquitous in ‘conventional’ conflicts when we consider the advantages and disadvantages of their use for military commanders (297). To me, there is a question here of at what point machinic intelligence or machine-thinking ‘trumps’ the human. Certainly our technology as it stands still puts the human as superior in many types of thinking, yet I can’t believe it will be too long before computers start to totally outsmart humans, to the point that this no longer even remains a question. There is also the question of ‘training cost’. In a drawn-out conflict, what will be easier and cheaper to produce: a robot fighter that comes pre-programmed with its training, or the human soldier who requires an investment of time and resources, and who may never quite take on his or her ‘programming’ to the same level as the machine? Something to think about, certainly…

Mike Ryder, Lancaster University


 

I quite liked this piece, as it is common to hear fellow researchers of autonomous weapons say that such systems will change warfare but then provide no discussion of how this will happen. Fortunately, this paper does just that. I particularly liked the idea that the use of autonomous systems for ‘decapitation’ strikes against senior military, political, or terrorist leaders/influencers could reduce not only collateral damage overall and the number of friendly deaths, but also the general level of destruction in a conflict. Indeed, I’ve heard a number of people suggest that present-day drones offer a chance at ‘perfect’ distinction, in that they are so precise that the person aimed at is almost always the person who dies, often with little collateral damage. It is usually poor intelligence analysis, resulting in the wrong person being targeted in the first place, that is responsible for the unfortunately high number of civilian deaths in the ‘drone wars’. Use of AI could rectify this, but the use of autonomous weapons could also reduce the need for substantial intelligence analysis if they were one day capable of identifying the combatant status of ordinary fighters, and of identifying specific high-level personalities through facial or iris recognition. If this becomes possible, autonomous weapons could have the strategic impact of a nuclear bomb against enemy fighters, without causing much collateral damage.

Joshua Hughes, Lancaster University


UPDATE: added 18th March 2019, written earlier

This article presents predictions on the impact of autonomous weapons on the future of conflict. Building on a ‘functional view’ of autonomy that distinguishes degrees of autonomy across different functional areas, such as ‘health management’, ‘battlefield intelligence’ and ‘the use of force’, the authors discuss the issues and incentives of applying different degrees to different functions. They also detail the US’ ongoing drone campaigns before extrapolating the trends seen within into a future of greater weapon autonomy. First, they see an increased focus on ‘leadership targeting’, believing that ‘autonomous weapons would be a preferred means of executing counter-leadership strikes, including targeted killings.’ Secondly, they propose such tactics as a necessary response to the resurgence of ‘hybrid warfare’, with ‘[a]ttacking leadership targets in-theatre…be[ing] perceived as a viable and effective alternative to an expansion of the conflict into the heartland of an aggressive state opponent’. The authors conclude with their belief that ‘advanced Western military forces’ “command philosophies” will militate against the employment of autonomous weapons, which require surrendering human control, in some types of targeted killing scenarios.

I found the article to have a rather unexpectedly utopian takeaway. Where a previous author proposed that a shift to swarm warfare would make ‘mass once again…a decisive factor on the battlefield’, this paper’s authors predict the development of a more scalpel-like approach of targeted leadership killings. The thought of generals and politicians being made immediately responsible for their military adventures, rather than however many other citizens (and auxiliaries) they can place between them and their enemies, seems like a rather egalitarian development of statecraft. It reminded me, of all things, of the scene in Fahrenheit 9/11 in which the director asks pro-war congressmen to enlist their own children in the Army and is met with refusal. It’s easier to command others to fight and die on your and your government’s behalf, but the advent of the nuclear age presented the first time in which the generals had just as much ‘skin in the game’ as everyone else, and nukes remain unused. Perhaps this future of leadership targeting by tiny drones can achieve the same result, without taking the rest of us along for the apocalyptic ride. The risk of a small quadcopter loaded with explosives flying through one’s office window seems like it would be a strong incentive for peacemaking, a potentially welcome by-product of the reduction of the ‘tyranny of distance’ (or, rather, the obviation of insulation) that the earlier author had discussed.

Ben Goldsworthy, Lancaster University


Let us know what you think in the comments below

Joh – The Undue Influence of Surveillance Technology Companies on Policing

This week we consider the relationship between technology companies and police surveillance, and its impact. The article we are reviewing is by Elizabeth E. Joh, from the New York University Law Review, Vol. 92 (Sept 2017), 101–130.

Here’s what we thought:


This paper is interesting for the way in which it explores the marketisation of law enforcement and policy making, with technology manufacturers such as Taser and others gaining undue influence over what the police do and the decisions that get made. These problems emerge from the increasing reliance on technology provided by a limited number of vendors, each of whom may have interests beyond the mere application of law. There also seems to be a problem of unfair competition, wherein companies such as Taser gain new contracts on account of existing business relationships. We are getting to a stage, then, when not only does technology start to dictate use, but the design decisions behind certain new technologies are influencing policy decisions, even though those design decisions will not have been made on exclusively law-enforcement lines.

Mike Ryder, Lancaster University 


As of the time of writing, the newspapers are full of articles on the Carpenter ruling, so this article seems especially timely. From a policy standpoint, I personally feel that this is an important subject about which there is little public awareness yet, which is troublesome. The actions of industry seem like they could seriously threaten human rights and civil liberties, and it is important to scrutinise these practices through public debate. The influence of private actors on public policy is of course not limited to law enforcement and juridical practices, but in these sectors human rights are especially at risk. The issues described will likely become important in the world of defence technology as well.

However, I did not find the level of analysis in the article especially substantive. I would have appreciated a more in-depth analysis of how the industry acts as an actor in this debate, what the results of its influence on law enforcement and juridical practices are, and what the larger societal implications of these practices are. I would also be very interested in a more in-depth investigation of how law enforcement views these developments. The actual political and societal processes in play are not really analysed. However, as this is a legal article, that is also to be expected.

The article is very US-based, both in its empirical cases and its legal analysis, as is a lot of the literature on the subject. I would be very interested to read more from a European perspective, both on the actual use and on the legal and political analysis, and to see whether there are any significant differences. Bigo’s articles on this subject are very European-focused, so they provide an interesting contrast, but the methodological and theoretical approach is very different. Most of the articles on European surveillance technology seem to come from International Political Sociology – which is a great theoretical framework, but I wish the subject were taken up more by other schools as well.

Maaike Verbruggen, Vrije Universiteit Brussel


This piece by Joh was really interesting. The increasing use of surveillance technologies by police services continues two major trends: firstly, the militarisation of the police, and secondly, the increasing influence of technology companies in job roles. The militarisation of the police is often poorly described, as though the purchasing of body armour and helmets to protect officers in crowd control or counter-terrorism situations is a bad thing and the police want to go to war with the populace. However, the reality is more that technologies used by militaries when performing similar operations can easily migrate to police use. For example, technologies used to assist a manhunt for a terrorist in a warzone can also be used in a manhunt for a murder suspect in a crowded city. However, as Joh suggests, the fact that in both situations technology firms exert undue influence on policy, and obfuscate how these technologies work when they have a public impact, is unnerving.

When we think of the increasing influence of technology companies, we often fall into Hollywood imaginaries where corporate technology giants have some sort of ideological bent towards unrivalled political power, which requires masses of data and the sacrifice of privacy. Yet the reality is that this increasing influence is often driven by market forces and a simple desire to keep up with, or ahead of, competitors. The increasing presence of technology in policing is also a reflection of its increasing presence in all of our lives, and the drive toward greater and greater automation. Years ago, beat cops were the eyes and ears of police forces; nowadays technological surveillance can do the same tasks without needing ever more expensive pay or conditions. Yet the fact that policing is a public service, and that we associate policing with the presence of officers, means that we are uneasy about such automation and the handing over of such tasks to machines. Perhaps the real story behind all of this is that humans simply prefer dealing with other humans, with whom they can develop trust and relationships, rather than machines which they do not understand.

Joshua Hughes, Lancaster University 


Let us know what you think in the comments below.

War and technology influence each other. Which has the greatest influence? 

After considering significant changes since WWII last week, this week we are looking at the relationship between war and technology. Both, of course, have been interlinked for years. Military research funding has contributed to many technologies we rely upon today, including the internet which you are reading this on!

Here are our thoughts:

 


War has typically been the biggest instigator of technological progress over the years, in particular with regard to the fields of medicine and computing. It is true that certain technologies can potentially influence wars, or how they are fought (e.g. the nuclear warhead, the tank, the bomber, the submarine), but typically these technologies arise as a result of war, and not the other way round. Of course, stockpiling masses of these technologies can potentially start a war, but having access to technology is not necessarily the same as putting technology to use.

Mike Ryder, Lancaster University


 

I think, perhaps, that up until now war (or the military-industrial complex, at least) has had the greater influence on technology. Possibly the biggest technological changes of the past century have come out of war, or military funding: nuclear power and the internet. Indeed, DARPA has played a role in the initial seed funding for many important technologies.

But we see now that technology companies are moving ahead of government-funded research. Companies like Apple, Google and Tesla only need to worry about technology, and have made so much money that they can fund enormous research projects beyond the capabilities of military-funded research programmes. I think there is now a shift whereby militaries will be more influenced by technology companies than they will be able to influence those companies themselves. However, I would think this will only be in relation to how forces communicate and operate. I doubt the influence will extend to military, or even strategic, decision-making. As I’ve written about previously, I think the recent open letters written by AI company heads will have little impact on military thinkers.

Joshua Hughes, Lancaster University

 


War certainly influences technology to a great extent. They say that necessity is the mother of invention, and defending territory or protecting national interests is often perceived as one of the greatest necessities there is. Military research has led to a number of important inventions, such as the internet, radar, GPS, encryption, advanced computing, key breakthroughs in artificial intelligence, nuclear energy, spaceflight, etc. However, their development, adoption and use are not the result of war alone, and many other factors, such as economic interests and civilian inventions, also play a key role here. The military did a lot to advance communication technology, but they were not the only ones to do so. Furthermore, a lot of technology has always been invented on the civilian side, which is especially true in the 21st century.

Therefore, I would personally say that technology affects war more than war affects technology. Technology has the power to fundamentally change how wars are fought, which in turn can change how societies are structured. The Hittites were the first known army to have used the chariot, with which they conquered vast sections of the Middle East, leading to the fall of entire kingdoms. The stirrup (with which you can fight standing up) is not to be underestimated, and it has been argued that this was the most important factor in the development of a feudal society in Western Europe, as it established the importance of horses and armour, which were only affordable to the nobility. The invention of the longbow in turn empowered the infantry, and shifted the balance back to the lower and middle classes. In the future, due to technologies such as PGMs and, potentially, autonomy, the importance of having actual soldiers on the battlefield might decrease, which could alter the risk-benefit calculation of war and affect militaristic attitudes in society.

Nonetheless, it is important to remember that it is an interplay: the histories of technology and war are interwoven, but also affected by a million other key variables, such as economic factors, civilian inventions, political governance, and societal attitudes.

 

Maaike Verbruggen, Vrije Universiteit Brussel


The relationship of influence between war and technology is intrinsically synergistic on many levels. War can instigate technological innovation out of battlefield necessity, can repurpose or even redefine certain technologies; and in doing so can alter/challenge/broaden our perspectives and understandings of technology itself. Similarly, technology can have the same level of influence on our perspectives and understandings of war, it can spur new or alternative modes/visions of warfare, be enabling to war, productively/disruptively influence strategy and influence the very course of a conflict itself.

The reactive, almost self-perpetuating relationship between war and technology is so intricately entangled that it seems impossible to delineate which might have the greatest influence on the other. I think the influencing relationship between the two is context-dependent and therefore very changeable. However, I am inclined to reason that technology may (at present) be having the greater influence in the seemingly reciprocal relationship between the two. Technology has long exerted influence in war, however, I think what we are seeing today is a set of new, rapidly shifting contexts (and a wider array of domains) in which this influence has the margin to play out. The sheer number of ways in which certain technologies are opening up new avenues for war (or aspects of it), may itself be indicative of the level of influence being exerted.

I think that one of the most prominent ways in which we are seeing this influence play out is through communication technology. Not only are communication technologies such as social media providing new platforms through which conflictual situations might be influenced, but as these virtual spaces/technologies are increasingly harnessed to wage a multitude of wars – of influence, perception, narrative, ideology, propaganda, (mis)information – they are not only potentially influencing war, they are bleeding into it by becoming hosts to certain elements of it. I think in this sense, the uncertain borderland between technology and war is quite fascinating, but it makes it all the more challenging to decide which might hold greater influence over the other.

Anna Dyson, Lancaster University 


What do you think? Let us know in the comments below