Should robots be allowed to target people? Based on combatant status?

Here is our second question this month on autonomous weapon systems. For reasons of space I have paraphrased it slightly in the title; here is the full question, which went out to all network members:

If the technology within a lethal autonomous weapon system can comply with the law of armed conflict, should it be allowed to target people? Should it be able to target people based on their membership of a group, for example, membership of an enemy military or a rebel group?

Here’s what we thought:

This question poses a massive moral and ethical dilemma, and not just for autonomous weapon systems (AWS). Membership of any organisation, including, notably, the State, has always been problematic, but in a ‘traditional’ military setting we tend to work around this by drawing a clear distinction between those in uniform and those not. Of course, this construct is undermined as soon as you introduce the partisan, or the non-uniformed fighter, and as we have seen in recent years, terrorist organisations seek to avoid marking their members at all. So there is the problem of identification to start with… But then things get trickier when you come to question the terms of membership, or the consent given by any ‘member’ of an organisation to be a part of said organisation, and quite what that membership entails.

Take citizenship for example: we don’t formally ‘sign up’, but we are assumed to be a part of said organisation (i.e. the State), so we would be targets of the ‘group’ known as the State in the terms set by this question. Take this argument one step further and you could have, say, ‘Members of the TTAC21 reading group’. On first glance, members of our reading group might be ‘legitimate’ targets; however, each of our ‘members’ has a different level of consent and participation within the group. Some, for example, have come along to meetings in person, or have Skyped in for an hour or two. Others have provided comment for the blog, while others are yet to contribute anything. Are each of these members ‘members’ to the same degree? How and why can, or indeed should, we compare any one member to another? And let’s not forget the question of motivation. Some of us are members because we are actively working in the field, while others have different levels of interest or motivation. Does that then mean that each of us should be tarred with the same brush and classified in the same way when it comes to targeting members of our specific group?

This question is far more complex than it seems!

Mike Ryder, Lancaster University


This question really gets to the nub of why some people are concerned with autonomous weapon systems. If something is possible, should we do it? At the recent Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the UN in November 2017, Paul Scharre put it something like this: If we could have a perfectly functioning autonomous weapon system in the future, where would we still want humans to make decisions?

It seems that most people do want human control over lethal decision-making, although some are willing to delegate this to a machine if it were to become a military necessity. However, many are dead set against any such delegation. I think a major aspect of this is trust. Are we willing to trust our lives to machines? Many people are already doing so in prototype and beta-testing self-driving cars, and in doing so are also putting the lives of nearby pedestrians in the ‘hands’ of those cars. For many, this is unnerving. Yet we put our lives in the hands of fellow drivers every time we go out on the road. We all know this, and we are all comfortable with that fact. Perhaps we will not be happy to delegate our transport to machines until we can trust them in the same way. I think if self-driving cars were shown to function perfectly, people would begin to trust them.

With lethal autonomous systems, the stakes are much higher. A self-driving car may take the wrong turn; an autonomous weapon may take the wrong life. This is obviously a huge issue, and one that people may never become comfortable with. But here we are hypothetically considering systems that would function perfectly. I still think it will come down to whether people will trust a system to make the correct decision. And yet there are still issues around whether a machine could ever comprehend every possible situation it might face. An often-used example is the enemy soldier who has fallen asleep on guard duty. The law of armed conflict would allow combatants to kill this sleeping soldier simply for being a member of the enemy side. Yet it is difficult for us to accept such a killing when there is the possibility of capture. Capture here would not be a legal requirement under the law of armed conflict, but it may be a moral desire. If the programming of autonomous weapons can go beyond the law to take ethical considerations into account as well, trust in the lethal decision-making capability of machines may grow, resulting in society becoming comfortable with machines performing status-based targeting.

Joshua Hughes, Lancaster University


UPDATE: This entry added 04/03/2019

As Mike has said, the issue here boils down to how we would define ‘membership’, and the way it would be determined in the field. An autonomous weapon system would require some form of machine learning in order to distinguish between valid and non-valid targets based on the evidence it can gather in each case. Machine learning can be either supervised, where categories are provided in advance and the algorithm attempts to determine which one best covers a given item, or unsupervised, where the algorithm groups items based on whichever characteristics it finds best distinguish them, and the categories emerge dynamically from this process of classification. Both methods are fraught with peril when applied to social media advertising, let alone the application of lethal force.
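The distinction between the two training regimes can be illustrated with a toy sketch. Everything below is invented for the example (the 2-D ‘observations’, the labels, and the choice of algorithms): a nearest-centroid classifier stands in for supervised learning, where the categories ‘A’ and ‘B’ are given in advance, while a minimal k-means loop stands in for unsupervised learning, where two groupings emerge from the data alone.

```python
from math import dist

# Toy 2-D "observations" (e.g. two measurable features per contact).
group_a = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1)]
group_b = [(5.0, 5.0), (5.1, 4.8), (4.9, 5.2)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# --- Supervised: the categories are provided up front ---------------------
# Compute the mean of each labelled group, then assign new items to
# whichever centroid is closest.
centroids = {"A": centroid(group_a), "B": centroid(group_b)}

def classify(point):
    return min(centroids, key=lambda label: dist(point, centroids[label]))

# --- Unsupervised: the categories emerge from the data --------------------
# A minimal k-means: start from naive guesses, repeatedly assign points to
# the nearest centre and recompute the centres.
def k_means(points, k=2, iterations=10):
    centres = points[:k]  # naive initialisation
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centres[i]))
            clusters[nearest].append(p)
        centres = [centroid(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

print(classify((1.1, 0.9)))              # -> A
centres, clusters = k_means(group_a + group_b)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```

Note that the supervised classifier is only as good as its labels, while the unsupervised one offers no guarantee that the groupings it finds correspond to any category we care about; both weaknesses reappear, with far higher stakes, in the targeting scenarios discussed here.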

Take a supervised training regime, where the AWS would be provided with a list of criteria that would authorise the use of force, such as a list of proscribed organisations and their uniforms to compare against, or a dataset of enemy combatants’ faces to perform facial recognition on. The applications of lethal force would be only as good as the intel, and the experience of US no-fly lists shows just how much faith one should have in that. If the model is insufficiently precise (e.g. ‘apply lethal force if target is holding a weapon’), then all of a sudden a child with a toy gun is treated as an attacking Jihadi, much to the consternation of its former parents. In an effort to avoid these false-positives, one may be tempted to go too far the other way, handicapping the rapid analytical and decision-making powers that are often cited as an advantage of AWSes with over-restrictive classifiers. If a potential threat emerges that does not fit into any preordained model, such as a non-uniformed combatant, it will be ignored—a false-negative.
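The trade-off between these two failure modes can be sketched in a few lines. Everything here is hypothetical: we assume some upstream classifier emits a single ‘threat score’ between 0 and 1 for each contact, and the only remaining design choice is the engagement threshold. Moving the threshold merely trades one kind of error for the other; it cannot eliminate both.

```python
# Hypothetical threat scores with ground-truth labels we would only
# know after the fact (True = genuinely hostile).
observations = [
    (0.95, True),   # uniformed combatant
    (0.80, True),   # uniformed combatant
    (0.60, False),  # child with a toy gun
    (0.40, True),   # non-uniformed combatant
    (0.10, False),  # civilian
]

def confusion(threshold):
    """Count (false positives, false negatives) at a given threshold."""
    fp = sum(1 for score, hostile in observations
             if score >= threshold and not hostile)
    fn = sum(1 for score, hostile in observations
             if score < threshold and hostile)
    return fp, fn

# A permissive threshold engages the child (a false positive)...
print(confusion(0.5))  # -> (1, 1)
# ...while an over-restrictive one also ignores the non-uniformed
# combatant (two false negatives).
print(confusion(0.9))  # -> (0, 2)
```

In ordinary machine-learning applications this trade-off is tuned against a cost function; the difficulty here is that both error costs are measured in lives.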

An unsupervised training regime would be just as dangerous, if not more so. As Shaw points out in his discussion of ‘dividuals’, this would represent a sea change in the legal norms governing force. Not only would decisions be made based solely on the aggregate behaviour of a target, without oversight or appreciation of wider context, but we would be offloading a moral responsibility to display the reasoning behind such actions to opaque algorithms. Unsupervised training is also prone to misclassification (consider the work of Samim Winiger) and intentional manipulation (as in the case of Microsoft’s Tay chatbot, which was reduced to a Holocaust-denying Trump supporter within a day of being released onto Twitter). Perhaps in the future we can all look forward to a new Prevent strategy aimed at countering the growing threat of AI radicalisation.

Ben Goldsworthy, Lancaster University

What do you think?

Leveringhaus – Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict

This week we are considering Distance, weapons technology and humanity in armed conflict from the Autonomous Weapons mini-series over on the Humanitarian Law & Policy blog from the International Committee of the Red Cross. In it, the author discusses how distance can affect moral accountability, with particular focus on drones and autonomous weapons. Please take a look yourself, and let us know what you think in the comments below.


This blog offers interesting insight into concepts of ‘distance’ in warfare. In it, the author distinguishes between geographical distance and psychological distance, and also then brings in concepts of causal and temporal distance to show the complex inter-relations between the various categories.

One of the key questions raised in the article is: ‘how can one say that wars are fought as a contest between military powers if killing a large number of members of another State merely requires pushing a button?’ The implication here, to me at least (as I have also suggested in my comments in other blogs), is a need to reimagine or reconstruct the concept of ‘warfare’ in the public consciousness. We seem stuck currently in a position whereby memories of the two world wars linger, and the public conceive of war as being fought on designated battlefields with easily recognisable sides.

While I agree with much of what the author says, where this article falls down, I think, is in the conclusion that ‘the cosmopolitan ideal of a shared humanity is a good starting point for a wider ethical debate on distance, technology, and the future of armed conflict.’ While I agree with the author’s stance in principle, his argument relies on both sides in any given conflict sharing the same ethical framework. As we have seen already with suicide bombings and other acts of terrorism, this is no longer an ‘even’ battlefield – nor indeed is it a battle fought between two clearly delineated sides. While such disparities exist, I find it hard to believe any sort of balance can be struck.

Mike Ryder, Lancaster University



I found this piece, and its discussion of different types of distance, both interesting and illuminating. I’ve spoken with a number of students recently about distance, and how it affects their feelings regarding their own decision-making and its consequences. I found it really interesting that a large proportion of students were quite accepting of the idea that moral distance makes one feel less responsible for something that happens. But many of the same students also wanted people held responsible for their actions regardless of that moral distance. So this gives us a strange situation in which people who feel no responsibility should nonetheless be held responsible. I don’t think this position is unusual. In fact, I think most people around the world would agree with it, despite it being rather paradoxical.

It is clear that from a moral perspective, an accountability gap could be created. But, as ethics and morals are flexible and subjective, one could also argue that there is no moral accountability gap. Fortunately, law is more concrete. We do have legal rules on responsibility. We’ve seen that a number of autonomous vehicle manufacturers are going to take responsibility for their vehicles in self-driving modes. However, it is yet to be seen if autonomous weapon system manufacturers will follow this lead.

Joshua Hughes, Lancaster University

Update added 25/02/2019, written earlier

This short article explores the impact of the introduction of autonomous weapon systems on various forms of distance, be that geographical, psychological, causal or temporal. Contemporary drone warfare is given as an example of the way in which a new technology allows war to be conducted at an increased geographical distance, but the incidence of PTSD amongst drone pilots shows that the same is not true of the psychological distance. Leveringhaus focuses on the issues posed by the increase of causal distance in assigning blame for breaches of international humanitarian law. We are unlikely to see drones in the dock at The Hague any time soon, but who will be brought before the courts in the event of an AWS-committed war crime? The programmer of the software? This poses a challenge to the entire ethical framework of respect for individual rights, part of which is the promise ‘to hold those who violate these rights responsible for their deeds.’

Ben Goldsworthy, Lancaster University

Let us know what you think

Do previous instances of weapons regulation offer any useful concepts for governing lethal autonomous weapon systems?

Here is our first question on lethal autonomous weapon systems this month. If you have any thoughts about answers, let us know in the comments.

The question for me, at least, is whether or not we can draw parallels between regulation of the human and regulation of the machine. The problem here is that there are no clear and simple ways of holding a machine to account, so the questions of responsibility, and therefore regulation, become problematic. We can hold a soldier to account for misusing a gun; we cannot do the same for a machine. For one thing, machines do not know, and cannot experience, the concept of human death, so how can we hold them to the same level of accountability when they cannot even understand the framework on which modern human ethics is built?

Mike Ryder, Lancaster University 


Recently, I read Steven Pinker’s The Better Angels of our Nature, in which he considers why violence has declined over the centuries. One part looks at weapons of mass destruction. For Pinker, the main reason chemical, biological and nuclear weapons are not used regularly is not international law concerns around high levels of collateral damage, but the fact that using them would break a taboo. Pinker suggests that the taboo is so powerful that weapons of mass destruction are not even in the minds of military planners when considering war plans. Autonomous weapons have the potential to be as impactful as weapons of mass destruction, but without the horrendous collateral damage concerns. Would this create an equal taboo, based on human unease at delegating lethal decision-making? I think a taboo would be created, but the likely reduction in collateral damage would make it weaker. Therefore a taboo is unlikely to restrict any future use of autonomous weapons.

In terms of treaty-based regulation, having been at the meetings of experts on lethal autonomous weapon systems at the UN, I think any meaningful ban on these weapons is unlikely. However, in recent years a number of informal expert manuals have been created on air and missile warfare, naval warfare, and cyber warfare. They have generally been well received, and their recommendations followed. I could imagine a situation in the future where similar ‘road rules’ are developed for autonomous weapons, interpreting the requirements of the law of armed conflict and international human rights law for such systems. This could result in more detailed regulation, as there is less watering down of provisions by states who want to score political points rather than progress talks. We will have to wait and see if this will happen. 

Joshua Hughes, Lancaster University 


Let us know what you think

Haas and Fischer – The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order

This week we begin our discussions of autonomous weapon systems. Following on from the discussions of the Group of Governmental Experts at the UN last November, more talks are taking place in February and April this year. For those not aware, an autonomous weapon system is one that can select and engage targets without human intervention – think a drone with the brain of The Terminator.

First, we are looking at ‘The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order’ by Michael Carl Haas and Sophie-Charlotte Fischer from Contemporary Security Policy, 38:2 (2017), 281–306. Feel free to check the article out and let us know what you think in the comments below.

Here’s what we thought:


I enjoyed this article, and the ways in which it seeks to engage with the future applications of AWS in what we might describe as ‘conventional’ wars, where targeted killings or ‘assassinations’ by drone are likely to become more common.

From my own research perspective I am particularly interested in the authors’ approach to autonomy and autonomous thinking in machines (see 284 onwards). I agree with the authors that ‘the concept of “autonomy” remains poorly understood’ (285), but suggest that perhaps here the academic community has become too caught up in machinic autonomy. If we can’t first understand human autonomy, how can we hope to apply a human framework to our understanding of machines? This question to me, seems to be one that has been under-represented in academic thinking in this area, and is one I may well have to write a paper on!

Finally, I’d like to briefly mention the question of human vs machinic command and control. I was interested to see that the authors suggest AWS might not become ubiquitous in ‘conventional’ conflicts when we consider the advantages and disadvantages of their use for military commanders (297). To me, there is a question here of at what point machinic intelligence or machine-thinking ‘trumps’ the human. Certainly our technology as it stands still puts the human as superior in many types of thinking, yet I can’t believe it will be long before computers start to outsmart humans so totally that this even remains a question. There is also the question of ‘training cost’. In a drawn-out conflict, what will be easier and cheaper to produce: a robot fighter that comes pre-programmed with its training, or the human soldier, who requires an investment of time and resources, and who may never quite take on his or her ‘programming’ to the same level as the machine? Something to think about, certainly…

Mike Ryder, Lancaster University


I quite liked this piece, as it is common to hear fellow researchers of autonomous weapons say that such systems will change warfare, but then provide no discussion of how this will happen. Fortunately, this paper does just that. I particularly liked the idea that the use of autonomous systems for ‘decapitation’ strikes against senior military, political, or terrorist leaders/influencers could reduce not only collateral damage overall and the number of friendly deaths, but also the level of destruction a conflict could cause in general. Indeed, I’ve heard a number of people suggest that present-day drones offer a chance at ‘perfect’ distinction, in that they are so precise that the person aimed at is almost always the person who dies, often with little collateral damage. It is usually poor intelligence analysis, resulting in the wrong person being targeted in the first place, that is responsible for the unfortunately high number of civilian deaths in the ‘drone wars’. Use of AI could rectify this; autonomous weapons could also reduce the need for substantial intelligence analysis if they were one day capable of identifying the combatant status of ordinary fighters, and of identifying specific high-level personalities through facial or iris recognition. If this becomes possible, autonomous weapons could have the strategic impact of a nuclear bomb against enemy fighters, without causing much collateral damage.

Joshua Hughes, Lancaster University

UPDATE: added 18th March 2019, written earlier

This article presents predictions on the impact of autonomous weapons on the future of conflict. Building on a ‘functional view’ of autonomy that distinguishes degrees of autonomy across different functional areas, such as ‘health management’, ‘battlefield intelligence’ and ‘the use of force’, the authors discuss the issues and incentives of applying different degrees to different functions. They also detail the US’s ongoing drone campaigns before extrapolating the trends seen there into a future of greater weapon autonomy. First, they see an increased focus on ‘leadership targeting’, believing that ‘autonomous weapons would be a preferred means of executing counter-leadership strikes, including targeted killings.’ Secondly, they propose such tactics as a necessary response to the resurgence of ‘hybrid warfare’, with ‘[a]ttacking leadership targets in-theatre…be[ing] perceived as a viable and effective alternative to an expansion of the conflict into the heartland of an aggressive state opponent’. The authors conclude with their belief that advanced Western military forces’ ‘command philosophies’ will militate against the employment of autonomous weapons, which require surrendering human control, in some types of targeted killing scenarios.

I found the article to have a rather unexpected utopian takeaway. Where a previous author proposed that a shift to swarm warfare would make ‘mass once again…a decisive factor on the battlefield’, this paper predicts the development of a more scalpel-like approach of targeted leadership killings. The thought of generals and politicians being made immediately responsible for their military adventures, rather than however many other citizens (and auxiliaries) they can place between themselves and their enemies, seems like a rather egalitarian development of statecraft. It reminded me, of all things, of the scene in Fahrenheit 9/11 in which the director asks pro-war congressmen to enlist their own children in the Army and is met with refusal. It is easier to command others to fight and die on your and your government’s behalf, but the advent of the nuclear age marked the first time the generals had just as much ‘skin in the game’ as everyone else, and nukes remain unused. Perhaps this future of leadership targeting by tiny drones can achieve the same result, without taking the rest of us along for the apocalyptic ride. The risk of a small quadcopter loaded with explosives flying through one’s office window seems like a strong incentive for peacemaking, a potentially welcome by-product of the reduction of the ‘tyranny of distance’ (or, rather, the obviation of insulation) that the earlier author had discussed.

Ben Goldsworthy, Lancaster University

Let us know what you think in the comments below

Singer – Corporate Warriors: The Rise and Ramifications of the Privatized Military Industry

This week, we consider Peter Singer’s keystone piece in the study of private military contractors (PMCs). It is important to distinguish PMCs from mercenaries: mercenaries are usually individuals employed to fight, whereas a PMC usually has a corporate business structure and is employed to provide a whole host of military-related services, from intelligence gathering and analysis to combat support and the provision of security.

Although this is the first post in a series of comments to pieces on the theme of ‘Industry and Security’, this will be the final post of 2017. We are taking a break over Christmas, and will be back posting in January. We would like to take this opportunity to thank you all for reading and contributing to posts, and we look forward to more fascinating discussion in the new year.

On to what we think of the article…

This excellent article explores some of the many issues surrounding ‘privatized military firms’ (PMFs) operating within warzones across the globe. According to the author, these PMFs represent the ‘new business form of war’, in which market forces play an increasingly important role in the global military and political landscape. Indeed, it could be argued they change the landscape completely, for with the rise of PMFs, State accountability takes a back seat, and war loses any remaining ideological motivation it may previously have had.

One particularly interesting question for me is the recruitment and retention of soldiers / operators / employees (call them what you will!) within these PMFs. While the author raises the question of responsibility and the problematic of balancing ‘getting things done’ with maintaining a good human rights record, there is also the issue of responsibility when it comes to the actual training of these troops in the first place.

Here in the UK, the National Health Service pays to train doctors and nurses, and yet once they have been trained, these same doctors and nurses are effectively free to go and work wherever they choose. The same problematic would seem to arise with modern-day PMFs. If a militarily advanced Western State invests hundreds of thousands, if not millions, of pounds in training high-quality soldiers, what happens when those same soldiers decide to work for a PMF? What can States do to stop these expensive soldiers one day coming back and fighting for the ‘enemy’ further down the line? Where does responsibility for these soldiers begin and end? And how on Earth can you hope to hold a PMF, and its ‘employees’, to account?

Mike Ryder, Lancaster University

The article is a little older, so the question is of course what relevance it still has for the present-day situation. It came at a time when much of the debate on global conflicts centred on warlords and civil wars in the Global South, while current analyses of warfare have a very different focus. However, it is one of the earliest articles describing the role of private military companies in security practices, and as such has been very important for that field. For readers interested in more articles on this subject, I would recommend the work of Anna Leander.

I appreciate how the author points out the larger historical trends towards the privatisation of government services, the transformation of the types of conflict, and the effects of the end of the Cold War on military systems. It creates a clear picture of how the private military industry has developed. I think a larger discussion of the influence of globalisation would also have fit well into this picture, as would an analysis of the changing structure of the international arms trade in the 1990s. Considering it was written in 2002, however, it is remarkable how many of the implications and problems mentioned – such as a lack of oversight, imperialism by invitation, or human rights violations – also occurred during the US invasion of Iraq, which saw a high number of private military contractors too.

I find the concept of the Private Military Industry as used by the author very slippery, however. Singer purposefully talks about “industry” instead of corporations to include actors offering other types of military services, as well as the overall industry instead of subsections. But what exactly falls under this? When does something become corporate – only when it is registered as an official company? That is a very Western view of what falls under the private sector, as it ignores the informal economy, and thus does not necessarily apply worldwide. Does it include all actors involved in war working for a profit? Smuggling is a huge component of many conflicts, and it often cannot be precisely determined whether resource extraction and sales are the cause of a conflict or just a means to finance it – so when do the profit-oriented motives of warlords turn them into private military actors? Defence companies have always played a large role in warfare, from supplying weapons to maintaining and sometimes operating them (as the author rightly points out), but then what are the new developments? Where does the military-entertainment complex fit in here? Of course, the lack of an airtight framework is one of the lacunae that Singer points out, but he does not really criticise the concept or define it more closely. If everything is private military industry, the concept is analytically meaningless.

Maaike Verbruggen, Vrije Universiteit Brussels

The idea of the corporate warrior fascinates me no end. It is a clear example of the state monopoly on violence crumbling, and of the increasing capabilities which powerful individuals can have at their fingertips.

Most of the discussion about private military companies focusses on a corporate-industrial-military complex; however, a couple of years ago there were discussions about the potential for their use for humanitarian reasons. During the lightning ISIS advance, a large group of Yezidis were trapped on Mount Sinjar. At the time, it was politically difficult for any state to deploy military forces for a humanitarian intervention. However, financially powerful individuals could have organised their own humanitarian intervention through the use of a PMC.

This would have been a unique moment. The fact that it is even possible shows that the state monopoly on violence is long gone. The ramifications could be that financially powerful individuals use PMC power not just for their own security, but to realise their political ambitions as well. Potentially, this could result in a larger number of civil wars, rebellions, or even annexations and fiefdoms, all created using PMC power, purely because somebody with enough money and desire wanted it. What then will become of the international system when some big players do not play by the rules of traditional statehood? Or when quasi-states are created purely as toys for the rich and powerful?

Joshua Hughes, Lancaster University

Let us know what you think in the comments below.

War and technology influence each other. Which has the greater influence?

After considering significant changes since WWII last week, this week we are looking at the relationship between war and technology. The two, of course, have been interlinked for years. Military research funding has contributed to many technologies we rely upon today, including the internet on which you are reading this!

Here are our thoughts:


War has typically been the biggest instigator of technological progress over the years, particularly in the fields of medicine and computing. It is true that certain technologies can potentially influence wars, or how they are fought (e.g. the nuclear warhead, the tank, the bomber, the submarine), but typically these technologies arise as a result of war, and not the other way round. Of course, stockpiling masses of these technologies can potentially start a war, but having access to technology is not necessarily the same as putting technology to use.

Mike Ryder, Lancaster University


I think, perhaps, up until now war (or the military-industrial complex, at least) has had the greater influence on technology. Possibly the biggest technological changes of the past century have come out of war, or military funding: nuclear power, and the internet. Indeed, DARPA has played a role in initial research seed funding for many important technologies.

But we now see technology companies moving ahead of government-funded research. Companies like Apple, Google and Tesla only need to worry about technology, and have made so much money that they can fund enormous research projects beyond the capabilities of military-funded research programmes. I think we are now at a point where militaries will be more influenced by technologies than by any influence they can exert on the companies themselves. However, I would think this will only be in relation to how forces communicate and operate. I doubt the influence will extend to military, or even strategic, decision-making. As I’ve written previously, I think the recent open letters written by AI company heads will have little impact on military thinkers.

Joshua Hughes, Lancaster University


War certainly influences technology to a great extent. They say that necessity is the mother of invention, and defending territory or protecting national interests is often perceived as one of the greatest necessities there is. Military research has led to a number of important inventions, such as the internet, radar, GPS, encryption, advanced computing, key breakthroughs in artificial intelligence, nuclear energy, and spaceflight. However, their development, adoption and use is not the result of war alone; many other factors, such as economic interests and civilian inventions, also play a key role. The military did a lot to advance communication technology, but it was not the only actor to do so. Furthermore, a lot of technology has always been invented on the civilian side, which is especially true in the 21st century.

Therefore, I would personally say that technology affects war more than the reverse. Technology has the power to fundamentally change how wars are fought, which in turn can change how societies are structured. The Hittites were the first known army to have used the chariot, with which they conquered vast sections of the Middle East, leading to the fall of entire kingdoms. The stirrup (with which you can fight standing up) is not to be underestimated, and it has been argued that it was the most important factor in the development of a feudal society in Western Europe, as it established the importance of horses and armour, which were only affordable to the nobility. The invention of the longbow in turn empowered the infantry, and shifted the balance back to the lower and middle classes. In the future, due to technologies such as PGMs and potentially autonomy, the importance of having actual soldiers on the battlefield might decrease, which could alter the risk-benefit calculation of war and affect militaristic attitudes in society.

Nonetheless, it is important to remember that this is an interplay: the histories of technology and war are interwoven, but they are also affected by a million other key variables, such as economic factors, civilian inventions, political governance, and societal attitudes.


Maaike Verbruggen, Vrije Universiteit Brussels

The relationship of influence between war and technology is intrinsically synergistic on many levels. War can instigate technological innovation out of battlefield necessity, and can repurpose or even redefine certain technologies; in doing so it can alter, challenge and broaden our perspectives and understandings of technology itself. Similarly, technology can have the same level of influence on our perspectives and understandings of war: it can spur new or alternative modes and visions of warfare, enable war, productively or disruptively influence strategy, and influence the very course of a conflict itself.

The reactive, almost self-perpetuating relationship between war and technology is so intricately entangled that it seems impossible to delineate which might have the greatest influence on the other. I think the influencing relationship between the two is context-dependent and therefore very changeable. However, I am inclined to reason that technology may (at present) be having the greater influence in the seemingly reciprocal relationship between the two. Technology has long exerted influence in war, however, I think what we are seeing today is a set of new, rapidly shifting contexts (and a wider array of domains) in which this influence has the margin to play out. The sheer number of ways in which certain technologies are opening up new avenues for war (or aspects of it), may itself be indicative of the level of influence being exerted.

I think that one of the most prominent ways in which we are seeing this influence play out is through communication technology. Not only are communication technologies such as social media providing new platforms through which conflictual situations might be influenced, but as these virtual spaces/technologies are increasingly harnessed to wage a multitude of wars – of influence, perception, narrative, ideology, propaganda, (mis)information – they are not only potentially influencing war, they are bleeding into it by becoming hosts to certain elements of it. I think in this sense, the uncertain borderland between technology and war is quite fascinating, but it makes it all the more challenging to decide which might hold greater influence over the other.

Anna Dyson, Lancaster University 

What do you think? Let us know in the comments below

Zeitzoff – How Social Media Is Changing Conflict

This week, we are looking at social media. Considering that Facebook and Twitter have changed the world in less than ten years, there was obviously going to be some impact on our areas of study.

The work we are looking at is “How Social Media Is Changing Conflict” by Thomas Zeitzoff, Journal of Conflict Resolution 2017, Vol. 61(9), 1970-1991. The article is available here.

Without further ado, here’s what we thought:

In this article, the author aims to frame social media within a context of modern conflict, citing examples of how social media has influenced world events such as the rise of ISIS, the Russian annexation of Crimea, and the election of Donald Trump in the U.S.

While this article is certainly useful in drawing attention to the role of social media in conflict, it does seem to confuse cause and effect, and the relationship between technology and use. In one section, the author claims:

‘Communication technology advances do not happen in a vacuum. Rather, they are correlated with advances in military technology and changes in the economy more generally.’  (1973)

The suggestion here seems to be that ‘communication technology’ (i.e. social media) is in some way correlated with military technology, and is being proactively developed and ‘weaponised’ by military forces. Yet this seems to miss the point. The world’s largest social network, Facebook, is an American creation, yet, as the author points out, it is being put to use in ways that run strictly counter to American interests. Here, the author is confusing technology with use, suggesting that Facebook and its ilk have developed in line with advances in military technology. However, the relationship is far more complex than that, and to claim a clear correlation is to misunderstand technological development and ‘progress’, and the way that these technologies are put to use. Quite simply, you cannot talk about Facebook in the same way you talk about a gun. While they can both certainly be ‘weaponised’, they work, and are used, in completely different ways.

Mike Ryder, Lancaster University

I found this article interesting. I particularly liked the examination of state actors, or non-state actors paid by a state, using social media in a way which benefits a particular state. The author notes the Chinese ’50 cent army’, who are paid a nominal amount for each pro-Chinese message posted on a microblogging site, allowing the Chinese state to direct conversations online.

Another interesting aspect, which is missing from this article (possibly due to publishing deadlines), is the influence of companies such as Cambridge Analytica on recent elections and referenda. This is not so much to do with conflict, yet. It is not unimaginable that, in much the same way that revolutionary political votes have been won recently, revolutionary movements could be encouraged through similar methods on social media. Indeed, it is not beyond the realms of possibility that such movements could also be encouraged to become violent.

Something which struck me whilst reading the article was that the major international players Russia and China both use people to influence online conversations in the wider world, but what of Western powers? The UK does have the secretive 77th Brigade, which appears to be more focussed on stabilisation than its peers in the US and Israel. That obviously raises the question of what the US is doing (Israeli actions are covered in the article), if it is not focussed on stabilisation. Whatever it is doing, it doesn’t seem to have done anything to loosen the grip of authoritarian adversaries in Moscow or Beijing.

Joshua Hughes, Lancaster University

I found it very difficult to review this article, as I absolutely do not understand what point the author is trying to make, besides that social media affects politics (he calls this conflict, but in practice mainly describes elections and protests). He tries to present a framework, but its ultimate purpose is still unclear to me, as it combines both characteristics of social media and recommendations on how to study social media. Furthermore, it is completely filled with platitudes, such as that social media reduces the cost of spreading information. How novel.

The article itself is all over the place, with no clear red thread or narrative. It is filled with random and unsystematic anecdotes, historical examples and mentions of studies, and in each section I struggle to see what point the author is trying to make. Worse, though, is how the author completely misrepresents the debate on the subject. He claims the issue is understudied, but the subject of social media is extremely popular in both academic and popular literature. The article mentions the January 2017 Women’s March, so it must have been submitted after that – the false framing of the subject as understudied therefore cannot be blamed on long submission cycles, as the subject was popular long before then. Even worse is his complete neglect of substantive bodies of literature on social media and conflict. There is a large body of literature out there on subjects such as the participation of militaries on social media, the military-entertainment complex, military portrayal in video games, the effect of social media on the framing of war to the home front, how embedded journalism affects the portrayal of war, the large spectrum of cyber operations from social engineering on social media to hacking power plants, the extent to which social media lets the victims of war be heard in Western media, the effect of social media on public attitudes to war, how the possibility of immediate public backlash via social media affects military tactics, the role of communication technology in warfare, etc. How can you claim the subject is understudied and present a framework to analyse the relationship between social media and conflict if you in practice base it on a tiny subset of conflict: political protests and elections? Nor does the author explicitly state that he only covers those two facets of conflict, as he mentions other aspects here and there, such as the social media strategies of Israel and Palestine during the 2012 War in the Gaza Strip.

The author furthermore consistently makes the exact mistakes he warns against. First, he consistently misrepresents or even misunderstands different media dynamics: by conflating different categories of social media (for instance when he talks about alternative, non-traditional and social media interchangeably, in contrast to mainstream, traditional and non-interactive media) or different research questions, subjects and modes of analysis (see table 1 or his takeaways), by presenting complete platitudes as novel insights (see the framework), and by lacking a critical analysis and distinction of the different groups at play in media (for example, there exists a huge range of groups between elite and mass). The author often does not truly seem to grasp what he is talking about: for example, how can he distinguish between military and communications technology as if they were two separate things? Communications technology has been THE military revolution of the past 30 years. Second, he warns not to focus too much on whether social media favours activists or governments, but goes into this in depth in the article, while the general debate has long moved on from this subject to ask more nuanced and in-depth questions. Third, he warns not to treat any data source as an unlimited, unrepresentative firehose. Meanwhile, the author calls the election of Trump, the Crimean annexation and the rise of ISIS some of the most significant geopolitical events of the 21st century. ISIS is almost defeated by now, we will have to see what the annexation will mean long term, and Trump so far seems to be a lame duck, showing the resilience of US institutions. These points can all be argued, but it is clear that we do not know what the long-term effects will be, and his claims are therefore overly strong, resting on no data whatsoever.

All in all, this article ignores a gigantic part of the literature, develops a framework full of platitudes with no use whatsoever, and does not truly seem to understand the matter it discusses. The author has written earlier articles on more specific topics within the subject matter, and social media and political violence is his expertise, so he obviously knows a lot about it. I do not understand how this then leads to an article which simply lacks nuanced analysis and useful takeaways. But maybe I am just fundamentally misunderstanding something here?


Maaike Verbruggen, Vrije Universiteit Brussels

I found this article interesting and definitely worthwhile as an introduction to the role of social media in conflict. The length of the article and the references to historical examples made it an easy and enjoyable read. Yet, as Maaike notes, I think the title “How Social Media is Changing Conflict” is strange considering the article seems more focused on how social media is affecting politics rather than conflict. Furthermore, some of the claims in the article are hardly groundbreaking. For example, it is self-evident that social media enables the rapid sharing of information, and anyone with any experience of social media can identify this. Yet, I believe this article is deliberately written as an introductory overview of some of the issues concerning social media and conflict/politics, and therefore I am not overly critical of the specifics.

The author alluded to several ways in which social media has an impact on politics/conflict, and the reader will naturally be drawn to the issues that relate to their research focus. Personally, I find the enhanced role of social media in recruiting fighters for non-state actors particularly intriguing. Currently, the ICRC regards a person who undertakes recruitment and propaganda activities as not performing a continuous combat function or directly participating in hostilities. Consequently, that individual is not regarded as a lawful target under IHL. Yet, if social media continues to play such a crucial role in recruitment for armed conflicts, would this necessitate an adaptation of the rules of targeting to reflect this importance? The recent drone strike on the UK citizen and ISIS member Sally Jones, and the discussions regarding the legality of the strike, is interesting in this respect.

Liam Halewood, Liverpool John Moores University 

Thomas Zeitzoff’s article ‘How Social Media is Changing Conflict’ sets out to provide scholars with a theoretical framework for understanding social media and its influence on conflict. Overall, I found the article to be generally informative but rather simplistic in approach. The article comes across as a ‘primer’ for those new to the subject area; something that is echoed throughout the work as the author proposes questions for the concerned scholar to take into consideration when embarking on a study of this particular topic. However, the framework the author forms conjured up some interesting thoughts for me regarding how we think about the relationship between communication technology and conflict. The framework identifies four ‘effects’ of communication technologies that can have an influence on conflict:

  • Lowered barriers to communication
  • Increased speed of information
  • Strategic dynamics and adaptation
  • New data and information

This caught my attention due to the commonalities between this understanding of communication technologies like social media, and how other forms of technology are understood in relation to conflict – specifically, certain tools and weapons used in warfare. Take the example of drone technology: drones are often referred to in relation to conflict in similar terms: potentially lowering barriers to conflict; providing increased speed of information; requiring/instigating strategic or tactical adaptation; providing new data/information. This similarity of ‘effects’ is not really surprising, as drones are often used in communicative, data-gathering roles; they are a type of communication technology themselves. However, it got me thinking about the overarching implication of this commonality between the ways of seeing/understanding these two (very different) types of technology: increasingly, we are moving towards a landscape in which it is becoming necessary to view communication technologies such as social media almost as collective ‘systems’ that require similar levels of strategic assessment and understanding as other, more ‘tangible’ weapons of war.

As Zeitzoff notes, the future is likely to bring an increase in social media being harnessed for campaigning, political targeting, the amplification of narratives and an increased coordination between social media and cyber conflict – all of which will be potentially complicated as advances in artificial intelligence make the manipulation of social media easier and more pervasive (p. 1984). This raises some interesting thoughts in relation to our every-day lives and the civilian use of social communication technologies in particular. Specifically, how should we go about understanding and traversing this emerging world in which the social media spaces we inhabit double as conflictual battlegrounds, virtual ‘kill zones’ of political violence in which the general user becomes the prime target during wars of disinformation, perception and narrative? To an extent, this is already very much a reality…but it is crucial to consider how the convergence of AI and cyber in the social media space might give this reality an entirely new dimension.

 Anna Dyson, Lancaster University

Let us know what you think in the comments!


Wirtz – Life in the “Gray Zone”: observations for contemporary strategists

This month, we have moved on from considering The City and urban warfare. We are now looking at the changing character of war. A number of people have been talking about this recently, and how the 21st century has brought a sea-change along with it. It isn’t clear whether this is a resurgence of behaviours we have not seen for a long time, or a whole new change. This month, we hope to find out!


Our first article is “Life in the “Gray Zone”: observations for contemporary strategists” by James J. Wirtz in Defense & Security Analysis, 33:2, 106-114. Available here.

The article covers a number of different short-of-war strategies which have been termed ‘Gray Zone’ conflicts (or ‘Grey Zone’, if you use UK English). It is a great overview of different types of irregular warfare waged below the thresholds of armed conflict, and of some options for countering these types of non-conflicts.


Here’s what we thought:


This article investigates the concept of the ‘Gray Zone’ (GZ) – a zone of indeterminacy between peace and war. The author asks whether the GZ is new, and what it really is, before taking the discussion back to the more fundamental question of why these GZ operations are taking place in the first place.

The author suggests that one factor is that there is an increasing number of actors who believe the world can handle a ‘little conflict’ (113). This stems in part (it is implied) from the fact that major actors are increasingly reticent about committing to full-scale military action. Clearly, this is cause for major concern: while the general public may not be willing to accept military action, this reticence leaves the world in a position where GZ actions become more and more likely. The question from my perspective then is: why are the likes of NATO not doing more? How much longer can we continue to be permissive of so-called ‘minor’ incidents on the global stage?

Mike Ryder, Lancaster University


I, like a lot of people, became really interested in grey-zone conflicts and ‘hybrid warfare’ after the Russian intervention in Crimea and Eastern Ukraine. But then, like a lot of other people, I realised it is just another form of irregular conflict. On one level, all of these grey-zone tactics are just recycled from previous conflicts. But there does seem to be something different about them. As Wirtz notes, the world is now multi-polar, with China, Russia, and Iran prepared to act on the world stage with less fear of a formerly hegemonic US response. Also, as Wirtz alludes to, there are vast numbers of individuals and small groups with high levels of technological power and know-how who have the ability to inflict harm on their adversaries, whether those be other individuals they dislike, or corporations they believe to be unethical. When it comes to these sorts of attacks on states, it raises the question of whether states will suffer defeat by ‘a thousand cuts’.

It would seem that it is easier to survive a war of attrition if you know that your adversary is also suffering. But when there are many adversaries, each with an unknown and potentially minimal level of suffering, it becomes more difficult. Wirtz does suggest counter-strategies for these types of conflicts. They all seem to require states to do more, and work harder. I wonder whether the defence cuts engulfing the UK armed forces will prevent the Ministry of Defence from being able to cover these areas.

Overall, Wirtz gives a really good overview of the grey-zone. I just wish I could have read it 5 years ago!

Joshua Hughes, Lancaster University


This article was very useful for learning more about strategy. It was very well structured and easy to read, which is a great accomplishment given the dense material. I do not know a lot about strategy but have recently started delving more into strategic theory, so this was an excellent addition. However, for someone not versed in strategic theory, not everything was easy to follow. The intended audience of the article is strategists, so the reader is assumed to know more than I do myself. What I still struggle with is why the actions described are placed into a separate category of “short-of-war” strategies, as all these manoeuvres also seem to be found in full-blown war. Proxy warfare might not lead to superpowers taking direct military action, but it can very well lead to an extremely violent war in the country in which it takes place. The author claims that multipolarity has led to decreased international management, and therefore less control over allies. However, was proxy warfare not a key feature of the bipolar Cold War? The fait accompli also seems to be a tactic that can be found in full-blown war – taking action so quickly that the opponent cannot respond (like the 1999 India-Pakistan Kargil War). I struggle with understanding this logic due to my limited knowledge of strategic theory. Nonetheless, it was very informative.

What I found less satisfying were the recommendations to counter Gray Zone strategies. The author recommends accelerating bureaucratic processes, strengthening alliances, and developing tactics to strengthen deterrence against short-of-war tactics. It is not that I disagree, but the whole problem is what to develop and how to execute this. There are few people who do not want to streamline bureaucratic processes (besides perhaps paper salesmen), but how should this be done exactly, and what should be cut and altered? How this process is changed dramatically affects the outcome. Of course it is great to develop initiatives to strengthen deterrence, but what should be developed exactly and how? The recommendations lack substance, and are therefore not very useful practically.

The author finishes off the article by mentioning the possibility of increasing the likelihood of conflict (by redrawing red lines, or actually executing deterrent threats sooner) when international decorum is insulted. The idea is that the risk of actual war is lowered if the threshold of war is lowered too. This is a very thought-provoking suggestion, but, as the author clearly states, a very dangerous one. It is a gamble that the likelihood of war will actually decrease if you employ this doctrine, and escalation risks increase substantially. Furthermore, it also means that opponents might respond more harshly to YOUR short-of-war tactics too. The article is written from a US perspective and mostly describes actions taken by China and Russia, but the USA fully embraces this type of warfare too, with drone strikes, black ops missions, cyber operations, etc. That should also be included when calculating the bigger risk picture. Still, if WW II showed so clearly how horrible war is, and that it must be avoided at all costs, that this led to peace, increased cooperation and prosperity in Europe, did that make it worth it? The fact that WW II followed WW I shows that this is no guarantee, and statistically, prior conflict is one of the key factors in predicting future conflict. Nonetheless, it is interesting to think about.

Maaike Verbruggen, Vrije Universiteit Brussels

“Life in the Gray Zone” presents a clear account of the “short of war” strategies characteristic of Grey Zone conflicts and provides the reader with an understanding of why such strategies may be on the rise. Something that stood out for me throughout this article was the author’s recurring reference to “enablers”, or facilitating factors, that are seemingly incentivising short of war strategies.

Wirtz notes that the very fabric of deterrence strategies has an enabling effect on short of war strategies by providing adversaries with an opportunity to exploit the “victim’s desire” to avoid hostilities (p. 107). In addition, it is highlighted that globalisation, the information revolution and the pace of technological change also act as enablers for those seeking to alter the status quo. It is, however, a point made towards the end of the article, as Wirtz considers counter measures to Grey Zone strategies, that stands out the most for me in this regard. The author pertinently highlights that the problem of enabling factors incentivising short of war strategies runs much deeper than those factors previously mentioned – to the bureaucratic processes, slow procurement cycles and drawn-out strategic planning timelines within military establishments – and the fact that these are simply not keeping pace with the rapidity of today’s political, technological and social change (p. 112). This asymmetry of pace creates an exploitable gap; a gateway for short of war strategies to be used effectively.

Although it is clear that policies and strategies must be reimagined to align with today’s short of war reality, I wonder how feasible it might be to implement “continuous reform and reinvention” of deterrence strategies, force structures and doctrines, as Wirtz implies (p. 112). This would entail a very reactive approach that is likely to be in constant flux and therefore potentially unsustainable in the long run. It seems almost counterproductive given the uncertainty inherent to how short of war strategies will evolve as technological, political and social changes continue to accelerate. The more useful way forward seems to lie in defining red lines and identifying coherent ways to respond to short of war strategies, as Wirtz later suggests. Defining red lines is surely the logical first step and the one requiring the highest priority in order to begin to counter Grey Zone activities effectively. Until this happens, thresholds of tolerance will remain just as ambiguous as the short of war methods being used to erode them, further incentivising these approaches to be embraced.


Anna Dyson, Lancaster University 

As a novice to the topic, I found the article interesting, clear and informative. The author was able to succinctly define “Gray-Zone” conflicts and elaborate on the three strategies used by those that wish to alter the status quo (fait accompli, proxy warfare, and the exploitation of ambiguous deterrence situations).  I also found that the examples used to present the short-of-war strategies supplemented the theoretical discussion well.

As can be seen clearly in Crimea, “Gray-Zone” conflicts are a serious threat to international peace and security. Therefore, countering “Gray-Zones” is an important consideration for military strategists. Wirtz refers to some courses of action that can deal with the daunting challenges of “Gray-Zones”, but only very briefly and without great substance. I would have enjoyed reading a more detailed discussion of what can be done to mitigate the challenges posed by “Gray-Zone” conflicts. Additionally, it would have been beneficial for Wirtz to acknowledge how realistic it is for his suggestions to be adopted by those seeking to counter short-of-war strategies. If the suggestions are unlikely to be utilised, then why is this the case? Why are the relevant actors not already implementing actions to counter short-of-war strategies? Perhaps actions have been taken, or are in the process of being implemented. Considering the seriousness of “Gray-Zone” conflicts, I would have assumed the article would have focused more on countering them. Perhaps this will be the focus of future research.

Liam Halewood, Liverpool John Moores University

Let us know what you think below