UK MoD – Human-Machine Teaming

This week we begin our triple-bill of Joint Concept Notes from the UK Ministry of Defence. You can see them all here. These are basically the documents which lay out how the UK military will develop in the future, and what its priorities and aims are over the next few years.

The first Note we are looking at is the one focussed upon Human-Machine Teaming, available here. This considers how machines will work alongside people in order to get the best out of both and create the optimum operating environment for military success.

Here’s what we thought:


 

I found this to be a really insightful paper, outlining the MoD’s position on AI and robotics, and in particular, the role of the human in relation to the machine. While there are too many topics covered to address in a short blog response, I found it interesting that the report highlights the potential for technology to shift the balance of power, and to allow minor actors to increasingly punch above their weight. This ties in with the report’s other comments about use, and the need to adapt quickly to changing demands. Using the example of a 2005 chess competition, the paper shows how a team of American amateurs with weak computers beat superior players equipped with more powerful computers, demonstrating the importance of the interface between the human and the machine (39–40). While computer power is certainly important, such power used poorly or by unskilled operators does not guarantee success, and so we should not take success against ‘weaker’ powers for granted.

I was also particularly taken by a segment in Annex A at the end of the report in which the authors address the question of autonomy. Here, the report suggests that for the foreseeable future, no machine possesses ethical or legal autonomy (57), within the scope of the report’s own definition. The report then re-states the MoD’s position from September 2017 that ‘we do not operate, and do not plan to develop, any lethal autonomous weapons systems’ (58), which is an interesting remark, given the MoD’s own definition of autonomy as describing ‘elements with agency and independent decision-making power’ (57).  

Mike Ryder, Lancaster University

 


This concept note is a great overview of the major issues related to the employment of AI-based technologies alongside humans in conflict situations. Something the note mentions which I hadn’t given much thought to is the potential re-evaluation of state power not in terms of GDP, but in terms of human capital and expertise relating to robotics and AI. Whilst in my work I usually consider AI in weapon systems, that mostly relates to tactical rather than strategic advantage; considering the impact of AI in a strategic sense is something I haven’t really thought about. As the note says (para.1.9), Russia and Singapore are nations that, whilst they have modest GDPs in comparison to other states, have a high level of expertise in the underlying sciences fuelling AI and robotics. This has the potential to really change the way the world works, altering the landscape of power that has dominated international affairs since WWII.

Something else which caught my eye was the mention of how manufacturers can limit defence capabilities (para.1.14). By creating systems using certain techniques and methods, manufacturers can lock the military into that system, leaving it closed to analysis or further exploitation by the military. In my research on AI in weapons, this can be problematic if the military, in particular when new systems are being tested, wants to know what the underlying code does and how it works. Not knowing this can have serious impacts on military effectiveness and legal compliance.

Whilst the note is focussed upon human-machine teams, something that stood out to me in paras 2.8-2.14 is the large number of tasks that the MoD intends to automate. To me, this seems to reduce the human role significantly. Perhaps, then, the ultimate goal of human-machine teaming is not to have humans and machines working in symbiotic teams, but to have humans managing large machine teams instead.

What is quite striking about this report is how similar its vision is to papers produced by the US military in the 1990s about network-centric warfare and systems-of-systems approaches to fighting wars. On one level it does seem like the same vision of technological superiority in warfare is just being regurgitated. On another, however, perhaps the vision is in vogue again simply because we are close to having the technologies needed to make it a reality.

Joshua Hughes, Lancaster University 


What do you think?

4 thoughts on “UK MoD – Human-Machine Teaming”

  1. Hi Josh,

    RE: ‘Perhaps, then, the ultimate goal of human-machine teaming is not to have humans and machines working in symbiotic teams, but to have humans managing large machine teams instead.’

    I think I agree here. It seems to be the direction things are heading in. It’s not so much a 1-1 team, human to robot, but rather a 1-10, say, or even a 1-100 team, with a gradual shift towards fewer humans and more machines. I accept that this is a trend that was already foreshadowed by remotely piloted drones, but I wonder what the minimum human input is.

    I should also add here that it can’t simply be a numbers game. After all, 1-1 teaming with a highly complex robot is very different to 1-1 teaming with a simple machine. In this way, it can never be as simple as merely putting a number on it. Do we then set the required minimum human input in inverse proportion to the complexity of the machine? It’s a minefield! Certainly lots to think about!


  2. Hi Mike,

    In terms of minimum human input and the numbers of people involved, it might surprise you to learn that for a regular combat air patrol of 4 drones, it takes 192 people to man everything (a great graphic showing this here: https://www.radicalphilosophy.com/article/drone-geographies). So this emerging drive towards having 1 person in control of multiple systems would reduce that manpower requirement. In terms of a minimum human input based on the numbers of people involved, I think that would depend upon how good the technology is. The more technologically advanced a drone is, the less human input and the fewer people would be needed to control the system.

    However, Thompson Chengeta has started to wonder whether we can articulate a legal ‘minimum level of human control’ over autonomous weapon systems (here: https://www.ejiltalk.org/what-level-of-human-control-over-autonomous-weapon-systems-is-required-by-international-law/). It’s something I’m going to be looking into myself in the coming months. This is based upon asking what actions are legally required, and whether the machine can perform those tasks. If yes, they can be automated. If no, a human must do them. But this is still a technology question, ultimately. Eventually, we may reach a stage whereby all legally required actions could be performed by a machine. At that point (although, hopefully, before it), we would need to ask ourselves what role we want humans to retain in warfare.


  3. Hi Josh, yes, I was aware of the huge amount of human input behind drone operations, but I wonder just how many of that 192 are actually in ‘control’, or can/should be held to account. I mean, in terms of agency and the attribution of responsibility, it’s down to the single pilot, no?


    1. Hi Mike, I see what you’re getting at now. Legally, everyone is responsible for their actions. So if an intelligence analyst was so incompetent that they identified a perfectly peaceful civilian as a terrorist who should be killed, they could be held responsible for their failures. So, not everything falls upon the pilot. But pilots must be aware of all available information that is relevant to their operation and interpret it in good faith. If, for example, a pilot were to receive a poorly reasoned intelligence report identifying person X as a target and then targeted that person based only upon that report, questions would need to be asked about their competency. Whether it constitutes a crime depends on the particular case.

      Morally, it’s a different story. There are a few people who talk of moral diffusion through each of these 192 people, so that nobody really feels responsible (e.g. https://link.springer.com/article/10.1007/s10676-010-9240-8, or our TTAC21 analysis of the same paper, https://ttac21.net/2017/08/31/the-cubicle-warrior-the-marionette-of-digitalized-warfare-royakkers-and-van-est/). The thing is, this is based upon the idea of the ‘playstation mentality’, a concept suggesting that, because the use of a screen and a joystick has some similarity to playing video games, the pilots must feel that they are playing a game and the people they see are just meaningless targets in that game. However, interviews with actual drone pilots show that this is simply untrue. Rather, there is an ongoing moral engagement (e.g., see https://www.bbc.co.uk/programmes/p02wmp15). This deep personal connection between pilot and target is similar to that of snipers; both watch their targets for hours if not days waiting for the perfect shot. They begin to understand the target’s life, see them play with their children, etc. It’s for this reason that drone pilots have very high levels of PTSD (https://www.npr.org/2017/04/24/525413427/for-drone-pilots-warfare-may-be-remote-but-the-trauma-is-real).

      So, in opposition to what a number of moral philosophers and ethicists wrote and thought about 10 years ago, it would seem that accountability in both moral and legal terms is present on a deep level in drone operations. Whether that is acted upon, and people are actually held legally and morally responsible by the authorities, seems to be a different story. Nobody seems to have been held accountable for seemingly egregious errors in target selection and the hundreds of civilians who have been wrongly targeted, at least in public (e.g. https://www.thebureauinvestigates.com/stories/2017-01-17/obamas-covert-drone-war-in-numbers-ten-times-more-strikes-than-bush).

