Morgan – The State of Deterrence in International Politics Today

This week we are considering ‘The State of Deterrence in International Politics Today’ by Patrick M. Morgan (2012, Contemporary Security Policy, 33:1, 85-107). It considers how deterrence and deterrence theory have changed since the Cold War, and how deterrence might be revived in some ways to deter future conflicts.

Here’s what we thought:


In this long and detailed article published in 2012, the author asks ‘What does deterrence, in theory and practice, look like now?’ As there is just so much content in this article, I thought I’d highlight one particular passage that interests me. In it, the author suggests:

‘there is an alliance among democracies, whether explicit or not, involving a semi-automatic extended deterrence. Numerous adjustments in thinking about security are required to encompass the complications this entails.’ (94)

Naturally, there are several issues with ‘Collective Actor Deterrence’, and the author does explore them. But I wonder, what does everyone think about this notion? Does the concept hold water in 2018? Especially given there seems to be a political reluctance to sufficiently enforce sanctions and threats, leading to a credibility gap between what international State actors say, and what they do.

Mike Ryder, Lancaster University


The article reviews the state of deterrence (anno 2012), in both the academic and policy worlds, and discusses to what extent it has changed since the Cold War. The fact that it was written in 2012 must be kept in mind, as the article is a little dated: the security environment has shifted substantially back towards a more “traditional”, state-oriented one since then. I would be curious to see an updated version of the article, and what these developments would mean for the author’s conceptualisation of deterrence. I appreciate how the author views the subject of deterrence not merely through realist glasses, as most of the literature does. This allowed for a broad conceptualisation of deterrence and its influences. For instance, his inclusion of non-realist determinants of the ‘national interest’ was a welcome contribution to the literature on deterrence.

I would have appreciated it if the article were a bit more systematic, though. Perhaps the article was too short (it seems like an attempt to condense the author’s 2009 book on deterrence into an article), but I missed a sharp definition of what deterrence is; a systematic method for analysing historical changes; and a structural distinction between academic research, foreign policy, and the meta-level analysis beyond both policy and academic work on deterrence. Instead, the article is an unstructured narrative, which makes the analysis seem ad hoc, mentioning different characteristics of the contemporary security environment but staying at such a surface level that nothing really new or meaningful is said.

Because of the non-systematic analysis, the concept of deterrence gets stretched significantly. While I understand that the author’s point is to explain how the nature of deterrence has changed, I feel that if you want to call military intervention to halt human rights violations ‘deterrence’, you really need to justify your choices about what deterrence is, why you chose that definition, and why certain behaviour falls under it. Otherwise you risk rendering the concept of deterrence meaningless. Furthermore, if the point is to describe how deterrence has developed over time and how the concept has now expanded, the article needs a more in-depth consideration of the historical nature of non-nuclear forms of deterrence, non-superpower deterrence and pre-Cold War deterrence policies. Are the contemporary forms of deterrence truly unique, or have they always been here, and was the Anglo-Saxon IR literature perhaps preoccupied with nuclear weapons and superpowers, with little eye for other forms of deterrence?

The literature on deterrence is so interesting to me, both because of the subject of deterrence itself and on a meta-level, as the academic literature has played such a pivotal role in foreign policy (e.g. Thomas Schelling), and the political views of the authors (from various camps) shine through in their analysis. The author did not really touch on the interplay between academia and policy, nor debate where the changing attitudes towards deterrence come from. This is a shame, especially as deterrence is all about perception, belief, and conventional narratives. It is about convincing an adversary that you are willing to strike back in such a way that it would be foolish for them to attack. But this follows a certain logic, and if an adversary does not believe in that logic, it makes your policy less powerful. So what makes actors believe in that logic or not? What made this paradigm fall out of fashion? And what has changed such that that logic is no longer as prevalent, in either academia or policy (anno 2012)? I would have loved to see such meta-level reflections from the author in this paper: not only how deterrence has changed, but a bit more critical reflection on why it has changed, beyond changes in the security environment. However, it is possible that the author expands on this in his book.

Maaike Verbruggen, Vrije Universiteit Brussel


What does deterrence look like today in both theory and practice? This is the fundamental question Morgan sets out to address over the course of this paper. The author draws some useful parallels between pre- and post-Cold War deterrence thinking whilst also highlighting key divergences. Morgan underlines contextual shifts that are shaping contemporary deterrence, such as expanding normative constraints on the use of force, the shifting nature of threats and continuous technological change. But contrary to common assertions that such contextual shifts render deterrence inadequate for addressing contemporary security challenges, Morgan sees this as a flawed outlook and highlights that deterrence, rather than becoming inadequate, has become more complex but remains relevant. An important point made here is that deterrence in international politics must be adjusted to accommodate major shifts in the regional and global international systems – but doing so is fraught with challenges. As Morgan puts it: “We are reshaping an important recourse for maintaining international order even as that order is itself being refashioned; we are altering our tools while we build on the run” (p. 86). For me, this echoes the type of dilemmas we are seeing across the board in relation to defence and security issues, where this element of not being able to keep up with the pace of change somewhat cripples our ability to make meaningful progress in tackling certain challenges.

An interesting point Morgan touches on in this regard is the ability of opponents to design around traditional modes of deterrence (p. 86). The idea of designing around deterrence in order to eschew it seems particularly relevant to today’s security environment, as we see rapidly evolving threats, blurred thresholds of tolerance and hostile grey-zone activity by increasingly assertive state actors. Whilst these issues do indeed make deterrence more complex, they also highlight again the vulnerabilities and potential inadequacies of current approaches to deterrence. The rapidity of technological innovation, in unison with these types of challenges, necessitates fresh thinking on deterrence to bridge vulnerability gaps and limit the ability of actors to ‘design around’ deterrence strategies. Deciding what, when and how to deter is constantly becoming more complex as new challenges – often underpinned by technological innovation – emerge. In this sense, it seems as though deterrence thinking and strategies must themselves become more multifaceted, adaptive and innovative – even hybrid (not dissimilar traits to the threats they seek to deter) – in order to be credible in today’s security environment. I think this is an enormous challenge, not only in terms of understanding, recognising and deciding which of the multidimensional threats we face today would be responsive to deterrence, but also in terms of confronting the remaining inertia surrounding Cold War deterrence thinking in order to move firmly away from a ‘one size fits all’ approach.

Anna Dyson, Lancaster University


I liked Morgan’s paper, and thought it really interesting. I have been thinking about deterrence from an international law perspective for a little while. We usually think of the international law rules on the use of force as a deterrent, since no state really wants to be seen breaking them and be labelled an aggressor. But we have seen a lot of breaches of those rules since 1945, without much damage to any state aggressors. So perhaps public international law doesn’t have a strong deterrent facet.

However, as a large number of recent conflicts have involved non-state actors, and their wrongful acts are usually dealt with under international criminal law (ICL), I have been wondering whether ICL could have a deterrent effect – not in the same terms as domestic criminal law hopefully deterring criminality, but more in terms of deterring large-scale violence and insurgency. If ICL can deter this, it can essentially deter violent conflicts with non-state actors. Although the threat of prison can deter criminals, violent non-state actors are willing to die for their cause, and so the threat of prison may not affect them quite so much. Hopefully, I’ll get round to carrying out this research.

Joshua Hughes, Lancaster University.


UPDATE: Added 8th April 2019, written earlier

This analysis of contemporary deterrence begins, as one might expect, with its history. The author argues that deterrence is ‘an old practice’—cf. balance of power politics—but took on a new life in the first half of the 20th century, ‘stimulated by rising apprehension about the growing potential lethality and destructiveness of warfare’, as epitomised by the atom bomb. ‘[W]ith nuclear deterrence as the heart of the major nations’ national security strategies…the need to have it work became overwhelming. We bet our lives, our societies, our civilization (and those of everyone else) on it’, they write. Whilst granting that deterrence was ‘much less successful’ in preventing lesser conflicts, the fact that we’re still here suggests it worked on a macro level, although it’s obviously quite hard to test the hypothesis that nuclear annihilation would have been avoided with or without deterrence. Since the Cold War, ‘[p]olitical relations among leading states have remained relatively moderate and significantly cooperative, remarkably free of profound security concerns’, a statement that possibly betrays the pre-Trump, pre-Belt and Road, pre-cyberwar vantage point of the article’s publication.

The author writes that whilst ‘[g]reat power conventional forces have…declined considerably’, the US is an exception. Stating that, ‘whatever they may say, many governments count on the United States to provide’ international security management, he describes a situation of global dependence to which Trump, with his NATO criticisms and recent decision to withdraw from Syria, is a not-unreasonable response. As for deterrence, the author writes that in a world of ‘weak states, rogue states [and] non-states’, deterrence is now ‘more of a tactical resource…than a security strategy’, and one that is ‘often being sought or practised against the West’. The focus of deterrence today has shifted away from ‘retaliatory threats’ in favour of ‘enhanced defences’.

The paper suffers from a slight fixation on the deterrence of ‘kinetic’ weapons (e.g., nuclear), rather than cyber-deterrents. When arguing for the continued relevance of extended deterrence, the author writes that it can ‘also involve projecting deterrence to keep threats geographically far away’, which does not appear to be a particularly timely concern in a world of interconnected networks to attack and home-grown attackers radicalised through social media. The author even seems to be aware of this towards the end, declaring ‘[w]e didn’t see how to readily deter unconventional attacks before and we don’t now’. Six years have apparently produced little progress in that respect.

Ben Goldsworthy, Lancaster University


Let us know what you think in the comments below.

War and technology influence each other. Which has the greater influence?

After considering significant changes since WWII last week, this week we are looking at the relationship between war and technology. The two, of course, have long been interlinked. Military research funding has contributed to many technologies we rely upon today, including the internet you are reading this on!

Here are our thoughts:

 


War has typically been the biggest instigator of technological progress over the years, in particular with regard to the fields of medicine and computing. It is true that certain technologies can potentially influence wars, or how they are fought (e.g. the nuclear warhead, the tank, the bomber, the submarine), but typically these technologies arise as a result of war, and not the other way round. Of course, stockpiling masses of these technologies can potentially start a war, but having access to technology is not necessarily the same as putting technology to use.

Mike Ryder, Lancaster University


 

I think, perhaps, up until now, war (or the military-industrial complex, at least) has had the greater influence on technology. Possibly the biggest technological changes of the past century have come out of war, or military funding: nuclear power and the internet. Indeed, DARPA has played a role in initial research seed funding for many important technologies.

But we now see technology companies moving ahead of government-funded research. Companies like Apple, Google and Tesla only need to worry about technology, and have made so much money that they can fund enormous research projects beyond the capabilities of military-funded research programmes. I think there is now a shift whereby militaries will be more influenced by technologies than able to exert influence on the companies themselves. However, I would think this will only be in relation to how forces communicate and operate. I doubt the influence will extend to military, or even strategic, decision-making. As I’ve written about previously, I think the recent open letters written by AI company heads will have little impact on military thinkers.

Joshua Hughes, Lancaster University

 


War certainly influences technology to a great extent. They say that necessity is the mother of invention, and defending territory or protecting national interests is often perceived as one of the greatest necessities there is. Military research has led to a number of important inventions, such as the internet, radar, GPS, encryption, advanced computing, key breakthroughs in artificial intelligence, nuclear energy, spaceflight, etc. However, their development, adoption and use are not the result of war alone, and many other factors, such as economic interests and civilian inventions, also play a key role here. The military did a lot to advance communication technology, but it was not the only one to do so. Furthermore, a lot of technology has always been invented on the civilian side, which is especially true in the 21st century.

Therefore, I would personally say that technology affects war more than the reverse. Technology has the power to fundamentally change how wars are fought, which in turn can change how societies are structured. The Hittites were among the earliest armies to master the chariot, with which they conquered vast sections of the Middle East, leading to the fall of entire kingdoms. The stirrup (which allows a rider to fight standing up) is not to be underestimated either: it has been argued that it was the most important factor in the development of feudal society in Western Europe, as it established the importance of horses and armour, which only the nobility could afford. The invention of the longbow in turn empowered the infantry, and shifted the balance back to the lower and middle classes. In the future, due to technologies such as PGMs and potentially autonomy, the importance of having actual soldiers on the battlefield might decrease, which could alter the risk-benefit calculation of war and affect militaristic attitudes in society.

Nonetheless, it is important to remember that this is an interplay: the histories of technology and war are interwoven, but also affected by a million other key variables, such as economic factors, civilian inventions, political governance, and societal attitudes.

 

Maaike Verbruggen, Vrije Universiteit Brussel


The relationship of influence between war and technology is intrinsically synergistic on many levels. War can instigate technological innovation out of battlefield necessity, and can repurpose or even redefine certain technologies; in doing so it can alter/challenge/broaden our perspectives and understandings of technology itself. Similarly, technology can have the same level of influence on our perspectives and understandings of war: it can spur new or alternative modes/visions of warfare, be enabling to war, productively/disruptively influence strategy, and influence the very course of a conflict itself.

The reactive, almost self-perpetuating relationship between war and technology is so intricately entangled that it seems impossible to delineate which might have the greater influence on the other. I think the influencing relationship between the two is context-dependent and therefore very changeable. However, I am inclined to reason that technology may (at present) be having the greater influence in the seemingly reciprocal relationship between the two. Technology has long exerted influence in war; however, I think what we are seeing today is a set of new, rapidly shifting contexts (and a wider array of domains) in which this influence has the margin to play out. The sheer number of ways in which certain technologies are opening up new avenues for war (or aspects of it) may itself be indicative of the level of influence being exerted.

I think that one of the most prominent ways in which we are seeing this influence play out is through communication technology. Not only are communication technologies such as social media providing new platforms through which conflictual situations might be influenced, but as these virtual spaces/technologies are increasingly harnessed to wage a multitude of wars – of influence, perception, narrative, ideology, propaganda, (mis)information – they are not merely influencing war, they are bleeding into it by becoming hosts to certain elements of it. I think in this sense the uncertain borderland between technology and war is quite fascinating, but it makes it all the more challenging to decide which might hold greater influence over the other.

Anna Dyson, Lancaster University 


What do you think? Let us know in the comments below.

What is the most significant change in how wars are fought since WWII? 

Moving on from our discussions about articles this month, we move to our questions to consider. This week we consider what the most significant change in warfighting since WWII has been. We came up with a number of different ideas. What do you think? Let us know in the comments below.

Here’s what we thought:


There have been several key technological developments in warfighting since World War II. The helicopter, the stealth bomber, the drone and the ‘smart bomb’ have all had a significant impact on the way we fight wars, while the move to volunteerism (in the West at least) has changed our relationship with the military and the wider sovereign State.

However, I would suggest that the most significant change is in the upsurge in automation and autonomous systems. By this I don’t just mean computer-controlled processes, but the ability to launch weapons with a minimum of human input.

While there still remains a human element in most autonomous weapons systems, the move towards automation, on many different levels, is heralding an ever-increasing ‘distance’ between the Western (human) warfighter and his or her enemy on the battlefield. How much longer before we don’t fight with humans at all?

Mike Ryder, Lancaster University


 

I think the internet has probably made the most significant impact since 1945 on our entire lives, not just on war. But the internet, not as a weapon but as what the US military would call a ‘force multiplier’, enables so much more to happen on the military side of things. For example, internet communication between high- and low-level personnel, across the globe and across agencies, all at the speed of light, is what enables the network-centric warfare and full-spectrum dominance the US aimed for in the nineties. It also enables non-state actors to communicate among themselves, hidden from the outside world, also at the speed of light. In the Iraq War from 2003, JSOC fought a massive campaign against terrorists and militants who organised their operations via email and instant messaging.

Since writing this question, I have also finished reading Steven Pinker’s The Better Angels of Our Nature, which argues that over the centuries humanity has become less violent, and that the two world wars were temporary reversals. Perhaps, if we want to contribute more to this trend, we should not be thinking about significant changes in fighting wars, but about not fighting wars.

Joshua Hughes, Lancaster University


This is a really large question, which is difficult to answer. The most significant for whom? A change in what? In my eyes, the most significant change for global peace and security is most likely the decline of interstate conflict. This means that war is generally no longer fought between two opposing national armies, but between a government and non-state actors. This leads to a different composition of the army, different strategy, different tactics, different end-goals, different effects on the civilian population, different military technology, different propaganda, etc.

Focusing purely on military technology (considering the topic of this month’s readings and the topic of the reading group), my answer would be the development and proliferation of nuclear weapons (they were only used at the very end of WWII, so that does not really count ;)). The reasons are too many to count, and many books have been written about them. A tiny selection of the arguments:

First, Mutually Assured Destruction (MAD) makes countries less likely to attack each other directly, which, according to some, causes, in a perverted way, international stability under certain circumstances, as in the Cold War (see the debate in “The Spread of Nuclear Weapons: An Enduring Debate” by Scott D. Sagan and Kenneth N. Waltz). However, MAD has also increased the frequency of proxy wars, from which non-Western countries in particular have suffered. Second, risk and escalation are taken more seriously due to the threat that humanity might be wiped out. This also gave rise to the deterrence doctrine, which changed how countries approach war. Third, the relationship between Nuclear-Weapon States and Non-Nuclear-Weapon States becomes fundamentally unequal and alters the strategic and political balance at a fundamental level. It changes the importance of previously key concepts such as geopolitics and mass: you will never have enough soldiers to defend against a nuclear weapon. Hence the interest in nuclear weapons from Israel, the DPRK and Pakistan. Relatedly, it was a major push for the establishment of international organisations and arms control treaties: the first resolution adopted by the UN, in 1946, aimed at developing mechanisms to better control nuclear weapons.

 

Maaike Verbruggen, Vrije Universiteit Brussel


Let us know what you think in the comments below.

How is modern warfare shaping what is required of fighters? Will other requirements be made of them in future conflicts? 

The second of our super-soldiers posts considers the demands modern war places upon today’s soldiers, rather than thinking about human enhancement as such. Regular deployments and increased bureaucracy make it a great change from a few decades ago, when many soldiers never saw any real action, nor had to deal with a myriad of other governmental agencies whilst deployed. Basic soldiering appears to be getting harder, as soldiers themselves may be getting weaker – we have seen applications and pass rates for special forces selection dropping in recent years. As our understanding of the impact of operations expands, are we asking too much of the young men and women we ask to fight for us? And can we guess at how this will play out in the future?

 

Here’s what we thought:


One of the most telling shifts in recent years, in my mind, has been the ever-increasing surveillance surrounding military operations. ‘Kill cams’ and the like have of course been around for some time in the armoured sections of the military, but more specifically here I refer to the way surveillance is now also being applied to the men and women on the ground. For me, this opens up a whole raft of problems in terms of accountability and responsibility in warfare, and strikes me also as a major shift towards the ‘robotisation’ of the armed forces. If a soldier can no longer act free from reprisal (or retrospective reprisal) for even the smallest of actions, then why send a human at all, when a machine will be far more effective?

But robots themselves come with their own problems and associated risks. As the 20th century has taught us, it is not good enough to merely shoot or bomb an ‘enemy’ into submission: we must consider the ‘hearts and minds’ of the populace. And quite simply a robot is not in a position to fulfil this role. I wonder then if, long term, the human takes on more of a humanitarian role, while the fighting is left to the machines.

Mike Ryder, Lancaster University


It would seem that the most prevalent trend in modern warfare is the constant increase in operational tempo. We have seen the near-constant deployment of Western special forces since 9/11. The fact that the US has just re-committed to Afghanistan means that there is no chance of the ‘perpetual war on terror’ abating. Thus, it would seem that modern warfare is going to require fighters to fight on a continual basis, with much faster turn-around times between operations than previously. The days of Western nations waiting for the Eastern Bloc to come crashing through Germany are long gone. Indeed, with a terrorist enemy that is capable of attacking anytime, anywhere, it would seem that Western militaries must also be prepared to fight anytime and anywhere.

The strain on the family life of such fighters must be immense. Indeed, we can see in the autobiographies of former SAS men that many marriages and family relationships simply fall apart when the soldier in the family is deployed to the other side of the world with only a few hours’ notice. So, it would seem that the military will require fighters to be totally committed to the causes they are fighting for, rather than to their families or themselves. This is, of course, totally the opposite of the trend towards providing workers with a greater work-life balance in order to actually make them more productive.

Requirements of future conflicts are likely to ask more of soldiers during operations. We already know that counter-terrorism and counter-insurgency operations in urban environments are some of the most cognitively difficult roles soldiers can be asked to perform. Yet, with the likely rise of city states in the near future, they could be asked to operate in such environments regularly. Distinguishing friend from foe in today’s conflicts is difficult, and we regularly see urban police mistake innocent people for armed criminals (particularly in the US); imagine the difficulty when both of these issues are essentially combined in military operations in a failed city-state. Further difficulty could be added by the reducing size of Western militaries. What if NATO countries transform their militaries into small but highly capable forces, in effect large quasi-special forces? Small teams in failed city-states will likely have to fend for themselves if there is not a large enough force able to save them. Stories like Black Hawk Down may become far more regular for Western publics to have to tolerate.

Joshua Hughes, Lancaster University.


What do you think?

Sawin – Creating Super Soldiers for Warfare: A Look into the Laws of War

This month we’ve decided to embark on some ‘themes’, where the papers and questions we consider (and you are welcome to join in) are on similar topics. The first we are looking at is the issue of super-soldiers: military personnel with enhancements which may be biological, making them super-human, or mechanical, such as putting them in exoskeletons (like Iron Man), to make them stronger, tougher, more resilient, and able to complete missions and tasks more quickly and efficiently.

Our first consideration is ‘Creating super soldiers for warfare: A look into the laws of war’ by Christopher E. Sawin (17 J. High Tech. L. 105 2016). The article considers whether super-soldiers could ever be deployed in compliance with the law of armed conflict.

It’s available here.

Here’s what we thought:


In modern warfare, according to Sawin, there is a focus on abiding by lawful rules and limiting violence. Therefore, soldiers have to show restraint and be more selective in their fulfilment of military objectives. The most common form of contemporary warfare is asymmetric warfare, which makes restraint and selectiveness even more important, as soldiers are often faced with enemies who do not wear distinctive uniforms and are able to blend in and out of civilian life with ease (so-called ‘farmers by day, soldiers by night’). Arguably, one of the most important requirements of modern soldiers is accurate decision-making.

Sawin postulates that future wars will become harsher, and that the use of human enhancement technology to support the capability of soldiers to deal with these harsher demands makes sense. Human enhancement technology has the potential to provide many benefits, such as increased awareness, intelligence and health. These would be beneficial to soldiers in all circumstances, but other benefits of human enhancement are more particular. For example, improving the speed, stamina and strength of soldiers is only likely to be of benefit when the soldiers are in close proximity to their enemy. As technology has advanced and political will for deploying soldiers has decreased, the trend in modern asymmetric warfare is to conduct operations against enemies from afar, such as with drones, which enables the killing of the enemy without the State endangering its own personnel. If this trend continues, then so-called super soldiers may not be determinative of which country has the elite fighting force, as suggested by Sawin.

Liam Halewood, Liverpool University


As a relative outsider to the field of law, I do find it quite astonishing sometimes just how ‘alien’ human law can seem to anyone who has experience working in other academic disciplines that are far more comfortable with future gazing and engaging with existential issues.

As the author here admits, the idea of enhancing the performance of soldiers has been around for a very long time. I find it strange, then, that the author raises the possibility that super soldiers may no longer resemble human beings (117) – as if this were a new problem, when these questions have existed for decades, if not centuries, in other academic disciplines. I wonder then if this is a problem with law both as a discipline and as an institution: its focus is far too insular, for it only considers the law-as-written and thus sees the world from a very distorted perspective.

To return then to some of the issues raised directly in this article, the most eyebrow-raising from my own perspective is the question of whether supersoldiers are ‘inhumane’ weapons. This strikes me as somewhat strange given that asymmetry is essentially the primary aim of warfare: i.e. defeat the enemy as quickly and effectively as possible with minimum harm or damage caused to one’s own side. Is it really ‘inhumane’ to send in super soldiers to fight ‘normal’ soldiers when we already have a whole arsenal of weapons and technologies available to us that the ‘enemy’ doesn’t have access to? (This reminds me of the Second Italo-Ethiopian War, where the Italians sent in tanks against an enemy, many of whom were armed with spears and/or bows and arrows.) This leads to my second question: ‘inhumane’ for whom?

Perhaps more widely here, I think the issue is less about the ‘inhumanity’ of using supersoldiers than the ‘un-humanity’ of using them – the way they represent an overt shift in the nature of ‘human’ warfare to something that goes beyond what we in modern-day parlance have come to understand as being human. And yet again, this is essentially nothing new – though it would appear to be so from a legal perspective. Is it not time, then, that law caught up with the rest of us?

Mike Ryder, Lancaster University


Sawin’s article seems a little odd to me. The whole premise is that super-soldiers with enhanced abilities would be ‘stronger, faster, tougher, better trained, and more durable’, and in being so would become less empathetic, even emotionless. This, apparently, would mean that despite having enhanced abilities, super-soldiers would be less able to recognise civilians, and therefore would put them at greater risk. This seems ridiculous to me, as the law of armed conflict is something all military personnel are required to learn. Why would personnel with enhanced abilities suddenly forget an essential part of their training? They would not. In fact, arguably a super-soldier with enhanced eyesight and quicker cognitive abilities may be able to offer greater protection to civilians. For example, if carrying out a night raid on a known terrorist house, a scared nineteen-year-old private may be unlikely to give anything that moves in the darkness much chance. A super-soldier may be able to see and recognise a civilian presence more quickly, resulting in not discharging their weapon and sparing a life that would otherwise have been collateral damage.

The article also questions whether super-soldiers would be banned under Art. 35(2) of Additional Protocol I, which states: ‘It is prohibited to employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.’ The premise for asking this question is again that somehow enhanced soldiers would have less ability to protect civilians. This is aside from the fact that prohibiting super-soldiers under this provision would require them to be reclassified as weapons, which they would not be. Super-soldiers would be using the same weapons as ordinary soldiers (unless spectacularly heavy ones, for example), and so would not necessarily impart superfluous injury or unnecessary suffering any more than a non-enhanced soldier. Some would argue that better eyesight and heart-rate control could make them more accurate over distance. But that would also mean that current soldiers wearing corrective lenses, or having received laser eye surgery, also impart superfluous injury or unnecessary suffering. The only situation I could think of where super-soldiers would impart more force on a particular individual than conventional soldiers would be hand-to-hand combat. A conventional and a super-soldier firing the same weapon at the same person would impart the same damage, but in hands-on fighting, an enhanced soldier with superhuman strength and endurance could wipe the floor with a conventional enemy. But that ignores the fact that an enhanced soldier could still stop when the enemy has been beaten, and take them prisoner.

The general principles of the law of armed conflict require fighters to protect civilians as much as possible. The fact that future fighters may have enhancements does not mean that these protections would be in danger. In fact, quicker decision-making and enhanced sensory abilities could help fighters recognise a civilian presence sooner, or target munitions more accurately, and so offer greater protection to civilians.

Joshua Hughes, Lancaster University


First, the content of the paper was subpar. The author does not seem to understand what super soldiers or military human enhancement entail, and frequently confuses them with autonomous weapon systems. His vision of military human enhancement seems to be based mainly on science fiction comics, with no investigation into the current state of research or what governments are actually interested in and developing. He does not problematize or define the concepts he discusses, even though human enhancement is very vague and ill-defined, and what falls under it is the subject of intense debate. There are also so many different types of enhancement, some of which would be regulated by the Geneva Conventions and some of which would not, with so many different effects, that you cannot generalize in the manner the author does to determine their legality.

Secondly, I am not a lawyer, but I do not understand his choice to focus exclusively on Article 35 of Additional Protocol I, when there are so many more relevant principles at play, such as the principles of protection or distinction, as well as other legal instruments regulating the use of weapons, such as the UN CCW. He barely problematizes the principles he actually discusses and does not present the multiple ways they can be applied, such as whether the SIrUS principle should be applied to weapons “of a nature to cause” or “calculated to cause” superfluous injury, as different countries interpret this principle differently and this could affect the legality substantially. His knowledge of the use of military technology and its role in warfare seems limited, and he dramatically simplifies concepts, ignoring the substantial discussion around them (e.g. when he says that the concept of informed consent does not apply to soldiers: soldiers do have that right, it is just difficult to say when they are free to consent or not, due to the hierarchical structure of the army).

On a final note, the way the article is written is problematic. He frequently cites conspiracy websites; the tone is heavily sensationalized (e.g. when he describes all unmanned systems in use by the US army between 2002 and 2010 as “enhanced war-fighting machines”, while the majority of these are very simple remote-controlled bomb disposal robots); and he cites references to support his claims that actually argue very different things (e.g. when he says that Lin et al. claim that “military soldiers are the one aspect that can determine the fate of warfare”, while they actually say on that specific page that “as impressive as our weapon systems may be, one of the weakest links—as well as the most valuable—in armed conflicts continues to be warfighters themselves”, which is something very different). Finally, the historical examples he brings up are often factually incorrect, for instance when he describes the Thirty Years’ War, and they have little to no relevance. These aspects do not reinforce trust in the message of the article.

Maaike Verbruggen, Stockholm International Peace Research Institute


Let us know what you think!

Wired for War – Singer

Wired for War by P.W. Singer (2009, Penguin Books) is one of the most important books in security circles of the past 10 years. It is a milestone account of the state of technology up to 2009, and considers so many things that were then at the cutting edge of innovation that many of them still haven’t happened yet.

 

Without further ado, here are our thoughts:


Wired for War is an excellent book, and a comprehensive introduction to the impact of technology, and specifically robotics, in modern warfare. Some questions that arise from the text:

  1. iRobot’s mission statement (27) is somewhat disturbing, given their military links. Do we run the risk here of creating too much of a psychological distance between a product and its function?*
  2. It is interesting to note that books such as Starship Troopers and Ender’s Game appear in so many professional reading programmes and military training courses (see 151 and 156). What are the potential ramifications of reading such books from a military perspective?
  3. What role do fiction and the arts have in the widespread acceptance of technology and military strategy? As with Q1, do we run the risk of ‘dumbing down’ the ethical, moral and social implications of these technologies? Too much emphasis on the rule of cool and ‘crash, bang, wallop’ and not enough intellectual engagement?
  4. The author suggests that robots can potentially reduce the instances of war crimes (393–408). But, with machine learning, will this remain the case? What about robots used on the other side? Will robots place equal value on the lives of friends and enemies alike?

Mike Ryder, Lancaster University

*Interesting related article: Dennis Hayes, ‘The Cloistered Work-Place: Military Electronics Workers Obey and Ignore’ in Cyborg Worlds: The Military Information Society, ed. by Les Levidow and Kevin Robins (London: Free Association Books, 1989).


 

Wired for War is probably one of, if not the, most important books for scholars of military technologies. Because this book was so forward-thinking, and considered things at the very extremes of technological capability when it was written, many of the things that were prophesied as coming soon still haven’t appeared.

I wanted to draw attention to a passage in the book (272-276) which considers how artificial intelligence could be used to predict incidents of terrorist attacks and other crimes. If the preparatory activities for attacks and crimes can be subjected to data pattern analysis, there is a question here as to what to do with this information.

Arresting somebody for conspiracy to commit a crime or attack seems preferable, but the evidence is often difficult to bring together when the intended crime has not yet taken place. Deterring would-be criminals from carrying out the crime is another option. In a similar way to placing police cars outside banks when tipped off about a possible robbery, an increased police presence at the site of a planned crime or attack can have a deterrent effect. Yet we now commonly see determined and highly motivated terrorists carry out their actions despite knowing that they will suffer either arrest or a bullet from an armed response officer. It is certainly a difficult issue.

Recently, UKIP (a far-right UK political party) suggested the internment of terrorist suspects without trial. Yet we know from experience with the IRA and al-Qaeda that such treatment can be used by those groups as a massive recruiting tool. It seems that the other option down this route of taking action before an actual attack happens is killing the potential attacker, as happens with US/Israeli/UK/Russian targeted killings. If we think about the potential future implication of AI systems performing statistical analysis to essentially state that an individual is about to commit a terrorist atrocity, this could then result in a strike from an autonomous drone. We are well into the territory of a worrisome future here. But it is possible, and it certainly raises questions about how much ‘meaningful human control’ society wants in its counterterrorism.

Joshua Hughes, Lancaster University


 

As always, see the contact tab if you want to join the network.