Last month we had our first face-to-face reading group for TTAC21. It was held in the Green Lane Conference Centre at Lancaster University. In attendance, we had Joshua Hughes (Int. Law) and Mike Ryder (Philosophy) from Lancaster University, Jasmin Nessa (Int. Law) from Liverpool University and Liam Halewood (Int. Law) from Liverpool John Moores University. We were also joined on Skype by Milton Meza Rivas from Universitat de Barcelona, and Maaike Verbruggen from the Stockholm International Peace Research Institute.
We discussed a number of papers; please see past posts for the whole list. Below are a number of talking points that we pulled out, and questions that we posed whilst discussing each paper.
The paper is available here.
- Unwilling and unable doctrine seems to be the result of more powerful states dominating the evolution of international law, and it penalises weaker states. As such, the doctrine seems only to be used where it can further policy aims. For example, it has been used to justify military action in Syria, but not in Belgium, where the authorities were also unable to stop terrorist activities. What matters, it seems, is not only the potential for a state to regain control but also the strategic political impact of an intervention – no nation is likely to risk upending the global political system by intervening in Belgium. As Marko Milanovic has said, ‘law without policy is pointless’.
- Unwilling and unable doctrine seems to be another outbreak of American exceptionalism, following on from the actions of self-defence against terrorists in Afghanistan.
- Adding to the idea that the doctrine is a vehicle for American policy, the intervention of Russia in the Syrian civil war would suggest that the Syrian state IS willing, and IS able (with help) to combat the threat of Daesh, and the justifications for US involvement should have evaporated.
- There is also the issue of how quickly a state should act against terrorists to be classified as ‘willing’ and ‘able’. For example, if a state acted straight away when asked by another state to deal with a terrorist group, it would meet both criteria. However, if, due to difficulties with the mobilisation of military forces, the territorial state needed to wait a week or a month, would it still be willing and able if the terrorist threat is going to be realised in three days?
- Although unwilling and unable have been put together as justifications for extraterritorial uses of force, they do not embody similar characteristics of the territorial state, and therefore should be separated with different tests.
- In terms of a state being unable to combat a terrorist threat, what are the criteria? It seems that the territorial state lacking effective control over the area in which terrorists operate is the sole criterion.
- The nature of the unable strand of the doctrine uses an assumption that the state has effective control over its people and territory, and that control is functional and can, therefore, be lost. However, control is culturally dependent. What counts as control in a country with significant areas of wilderness would not count in a largely urban country.
- The justifications given by states often do not use the words ‘unwilling’ and ‘unable’. It seems that whilst states favour the ability to use force for these reasons, they do not want to be seen as endorsing US exceptionalism.
- The answer for reforming unwilling and unable doctrine seems to lie in greater regulation, with the inclusion of small states. It is difficult to argue morally that a state should not be allowed to deal with threats to its people, wherever they are. But, the current usage by Western states devalues this notion to a policy vehicle.
- It seems that whilst the UN Charter scheme on the use of force is clear, and the rulings of the ICJ present a clear framework for when extraterritorial force is allowed, the customary international law additional to these factors is very unclear and creates difficulties not only for understanding extraterritorial action, but also for justifying it.
- There also seems to be an assumption that the loss of control equates to a loss of sovereignty, and that a military action under this doctrine would not affect sovereignty if there is no territorial state control.
- Does the requirement for effective control extend to the citizens of a state in cyberspace? Could an attack under unwilling/unable doctrine be launched in cyberspace?
- The European Convention on Human Rights is the most effective instrument to curtail actions under unwilling/unable doctrine. The International Covenant on Civil and Political Rights is not as effective, as evidenced by US actions whilst being party to it.
- In terms of the extraterritorial applicability of the ECHR to drone strikes, there is an argument put forward by states that if an arrest is impossible, they cannot be in control of that person.
- However, does the adding of a person to a ‘kill list’, or tracking them and keeping them under surveillance for a significant amount of time result in control?
- If drone strikes are security operations, rather than specific military operations, does this mean they are law enforcement? Consequently, does this create a situation whereby the state acting extraterritorially is exercising public powers, therefore resulting in the application of the ECHR?
- This discussion seemed to be summed up well by the adapted adage ‘law is politics by other means.’
The paper is available here.
- Harris suggests that the legal issues associated with autonomous weapon systems are not insurmountable, but a significant issue is whether the law can be ‘translated’ into algorithms that work. For example, it is easy to programme a system with ‘IF civilian, THEN do not fire’. But, the difficulty is getting a system to understand what a civilian is.
- The obsession with individual responsibility seems to be ill-founded. There are other avenues of responsibility that are available, such as state responsibility. The idea that a single individual must always be held accountable seems to be a moral desire, rather than a legal one.
- There could be issues with autonomous weapon systems applying the law as it is, with no moral augmentation. For example, a human soldier could take a sleeping enemy prisoner, but an autonomous weapon system may just kill them because it is legally permitted to do so.
- But, if people don’t abide by the law 100%, why do we expect machines to?
- As the precautionary principle also covers a military’s own forces, it seems strange for militaries not to back these technologies if they allow power to be projected and force to be used without danger to their own personnel.
- The current legal framework seems to be sufficient to deal with present day drone strikes, but falls into difficulty regarding autonomous weapon systems, and biotechnologies.
- 20 years ago, weapons were just ‘tools’. Now, it seems they have become so crucial to current methods of killing that they have become ‘members’ of the armed forces.
- Are these technologies and capabilities leading us down modes of thinking we wouldn’t otherwise consider?
- Are human operators of current drones, and future weapon systems merely a ‘moral alibi’?
- Autonomous weapon systems are likely to be great at killing a defined enemy, but how will they cope in counterinsurgency operations, which are becoming more common these days?
- It seems that autonomous weapon systems will likely be best at specific tasks, rather than as a replacement for general soldiering.
- Will autonomy make the military experts in technology, rather than fighting?
- Deep learning, where even the programmers do not know how a system arrives at its outputs, is very problematic for compliance with the law.
- Systems programmed by machine learning can become biased if trained on biased data.
- Systems would be unable to consider moral actions that have not been included in their programming.
- It seems that having humans ‘in’ or ‘out of’ the loop makes legal compliance simpler, relative to if a human were to be ‘on’ the loop of autonomous weapon system decision-making.
- There is an assumption in anti-autonomous weapon system campaigning that human beings follow the law perfectly.
- Functional autonomy (where a human just blindly follows the suggestion of a machine, thereby creating a de facto autonomous system) is very dangerous. In situations where a human is needed to act as an operator, they do so because the system is incapable of acting legally without them, so to create functional autonomy is to implement an inadequate system.
- The making of snap decisions, with or without machine input, generally gives poor outcomes. Therefore, meaningful human control in decision-making becomes key.
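The ‘translation’ problem discussed above can be made concrete with a toy sketch (the function names and the heuristic are entirely hypothetical, for illustration only): encoding the rule ‘IF civilian, THEN do not fire’ is trivial, but the classification the rule depends on is the genuinely hard part.

```python
def engagement_decision(target_class: str) -> str:
    """The rule itself is trivial to encode."""
    if target_class == "civilian":
        return "do not fire"
    return "fire permitted"


def classify(observation: dict) -> str:
    """Hypothetical stand-in for a perception system.

    A naive heuristic like 'armed means combatant' is easy to write,
    but real combatant/civilian status is context-dependent (armed
    civilians, fighters hors de combat, surrendering soldiers), and
    that judgement is exactly what resists translation into code.
    """
    return "combatant" if observation.get("armed") else "civilian"


print(engagement_decision(classify({"armed": False})))  # do not fire
```

The sketch also hints at the bias point: if `classify` were learned from data in which, say, all armed people were labelled combatants, the system would inherit that skew rather than the legal concept itself.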
The paper is available here.
- There was a general feeling that the article was poorly referenced. Although it may be more to do with the style of the journal, the lack of references and alternative viewpoints could give a reader the impression that this field is under-researched, and that Gregory’s word is, essentially, final.
- Although the ‘borderlands’ concept is interesting, it does not really suggest anything new. Everybody knows war is no longer restricted to battlefields, although readers from other disciplines may not be aware of the term ‘battlespace’ for a multi-dimensional conflict.
- How would the employment of a private military contractor by drug barons affect the ‘Amexicana’ area?
- The linking of drug barons and terrorists seems odd. Drug dealers rely upon civilians for their customers, whereas terrorists use them as targets.
- The use of war language in security (war on terror), law enforcement (war on drugs), health (war on obesity), and other areas of policy (war on want) results in a devaluation of the concept of war. This reduces its exceptional quality, and turns it into the norm. Peace has become the exception.
- Issues with opium farmers in Afghanistan were linked back to autonomous weapon systems. Programming systems with the law would leave them unable to ‘turn a blind eye’ to opium production, as human soldiers can in order to help win hearts and minds.
- Cyberspace was not really seen by the group as a borderland. It is either conceptualised as a battlefield or as a domain; there is no real conception of a mix of the two.
- Cyber-terrorism is a very undefined area. There is a blurred definition of cyber attack, alongside a blurred definition of terrorism.
- Cyber-security is dependent on politics, as the government does not want to spend money on things that are not immediately important. It is also dependent upon good practice becoming embedded in every member of every organisation that is vulnerable to attack.
The paper is available here.
- The terrible situations described in this article are a damning assessment of the time Theresa May spent as Home Secretary with responsibility for this strategy. The politics of the strategy seem to have been poorly thought through.
- Some in the group thought that the title seemed a bit ‘cheap’ to try and encourage readers.
- The current conception of terrorism in the West is based on violent Islamic extremism. But, far-right terrorists are still present, still dangerous, and are likely to be able to recruit more people.
- The profiling used in the Prevent Strategy results in ‘carpet bombing’ for one person. But the expectation that Islamic terrorists will be found creates leeway to allow for discrimination. The discriminatory results create an unfounded fear amongst non-Muslims.
- The targeting of certain communities is understandable but needs to be done in a way that builds trust.
- The Prevent strategy reflects the forgotten history of anti-Irish feeling during the height of IRA activity.
- The conversion of community police officers into intelligence sources doesn’t happen in other areas. For example, a Family Liaison Officer assigned to a victim’s family after a murder is not used as an intelligence source.
- The type of policing required to find and deal with terrorists in the community relies on trust. Yet the Prevent strategy undermines that trust. But the lack of trust cannot be seen in isolation, following on from Hillsborough, the Rochdale grooming scandal, and undercover police forming relationships with those they were supposed to be spying on.
- The use of technology to find terrorists could be useful. But, could also result in pre-crime action, similar to the film Minority Report. This could be seen as linking in with ‘signature strikes’ carried out by drones where an individual is targeted based on presenting ‘signature patterns of life of a terrorist’, rather than an actual criminal act.
- There needs to be a balance struck between what actions are taken and what society deems acceptable. For example, ‘random’ searches at airports are not usually seen as overly intrusive.
- It is important to remember that the ECtHR accepts that intelligence is unlikely to be 100% accurate before an operation or action is launched (as in the Jean Charles de Menezes case).
- An anecdote was shared about a talk given by Charles Clarke (former UK Home Secretary), in which he suggested that targeted killing presented the benefit of being able to eliminate Hitler in the 1930s, but also the negative aspect of a miscarriage of justice by execution. Could this dichotomy also be applied to pre-crime generally, rather than just pre-terror attacks?
- The lack of accountability in using algorithms to analyse potential terrorists, along with an expected lack of transparency about what criteria are assessed, is likely to lead to public dissatisfaction with such a system.
Well, that is what we discussed, and these are most of the thoughts and questions that we posed. If you have comments, please feel free to pop them in the box below.
If you are interested in coming along to the next face-to-face reading group (likely to be in Liverpool), just click on the contact tab and send an email expressing your interest.
Our in-person attendees: Mike Ryder, Joshua Hughes, Liam Halewood and Jasmin Nessa. Milton Meza Rivas and Maaike Verbruggen attended via Skype.