Landmines that Walk

Greetings from Geneva, where I am attending the first meeting of the Group of Governmental Experts on Lethal Autonomous Weapon Systems (LAWS) of the Convention on Certain Conventional Weapons (CCW). I am not governmental, and you may judge whether I am an expert, but in any case, here I am.

One of the things you hear over and over here is that we need a “working definition” of LAWS, or AWS if you don’t think it would be OK for robots to Taser people at will, or that we could solve war by having killer robots fight each other and just let us know who won.

In our search for a working definition of AWS or just “autonomous weapons,” we might take a cue from the Ottawa Convention, which bans antipersonnel landmines and defines them as follows:

‘Anti-personnel mine’ means a mine designed to be exploded by the presence, proximity or contact of a person and that will incapacitate, injure or kill one or more persons.

In 2005, the Norwegian Ministry of Finance asked its Advisory Council on Ethics to advise on whether two weapon systems then in development by Textron Systems, the Intelligent Munitions System (IMS) and the Spider networked munitions system, would fall under the Ottawa Convention. Both were intended as replacements for traditional landmines; instead of operating solely as isolated munitions, they would be networked and potentially triggerable either by an operator or by a sensor detecting the presence of a target. IMS was never produced, but Spider M7 is currently in use by the US Army—as a “man-in-the-loop” weapon system, without the “battlefield override switch” that would allow it to operate fully autonomously.

In its recommendation, the Advisory Council, first of all, rejected the notion that classifying these systems as something other than landmines would place them outside the Ottawa definition of landmines:

“The Advisory Council finds that all weapons that are designed to explode because of a person’s inadvertent contact, falls within the definition of an antipersonnel mine…. The point with such weapons is to be able to engage the enemy without being in active combat with them or even being present in the area. The mine is activated by the victim, not by the person that emplaced it.”

Second, the Council found that

“…if the weapons systems in question are going to be equipped with “battlefield override” features, or in other ways designed to circumvent the “man-in-the-loop” feature, they will fall within the scope of the prohibition in the Convention.”

Given that the Ottawa Convention defines mines as things that explode, it clearly cannot be read as covering autonomous weapons in general; mines, however, can be considered a subcategory of autonomous weapons. The Convention’s definition of landmines could therefore be generalized. Here is one possible wording:

‘Autonomous weapon’ means a weapon system designed to be triggered by the presence of an object or person or by fulfillment of specified conditions, rather than by a human operator.

Such a framing would require an ancillary definition of “trigger,” but this could be as simple as a time limit. A system that commences violent action within, say, 7 seconds of a human command might be considered operator-triggered. An operator could thus be granted a few seconds to rescind an inadvertent or erroneous trigger. However, if an operator commands that the system act at some unspecified future time, when a target is detected or located or behaves in a certain way, then the action is not triggered by the operator.
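To make the time-limit idea concrete, here is a minimal sketch in Python of how such a rule might be checked. The 7-second window, the function name, and the timestamps are purely illustrative assumptions drawn from the example above, not from the Ottawa Convention or any proposed treaty text.

```python
from datetime import datetime, timedelta

# Illustrative assumption: actions commencing within this window of an explicit
# human command are classed as operator-triggered (7 seconds is the hypothetical
# figure used in the text, not a negotiated value).
OPERATOR_TRIGGER_WINDOW = timedelta(seconds=7)

def is_operator_triggered(command_time: datetime, action_time: datetime) -> bool:
    """Return True if the violent action commenced within the time limit of the
    human command, i.e. the operator, rather than a sensor or a stored condition,
    is the immediate cause of the action."""
    elapsed = action_time - command_time
    return timedelta(0) <= elapsed <= OPERATOR_TRIGGER_WINDOW

# Example: an action 5 seconds after the command counts as operator-triggered;
# an action hours later, whenever a target happens to appear, does not.
cmd = datetime(2017, 11, 13, 12, 0, 0)
print(is_operator_triggered(cmd, cmd + timedelta(seconds=5)))  # True
print(is_operator_triggered(cmd, cmd + timedelta(hours=3)))    # False
```

The window also leaves room for a rescind command: any countermanding order received before the window closes would simply cancel the action.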

The Council’s rejection of the argument that the networked nature of Spider and IMS would make them not landmines stands as a rebuttal to those who argue that networked systems with embedded AI would not be recognizable as autonomous weapons. Regardless of a system’s architecture or geometry, what matters is whether its violent action is triggered by the action of a human operator, or by some other immediate cause.

Likewise, the Council’s finding that Spider and IMS would be exempt from the Convention if they could only be triggered by a human operator and not by the victim, but not if they had an optional autonomous mode, stands as a rebuke to the practice of human-washing autonomous weapons by adding “man-in-the-loop” capabilities while retaining autonomous capabilities.

Starting with the framing of landmines as “victim-activated” and generalizing it to autonomous weapons as “target or condition-triggered” need not mean that everything that meets this definition must be banned. An autonomous weapons convention can provide exemptions for purely defensive anti-munition systems, for some kinds of lock-on-after-launch homing munitions, and for other systems that nations may desire to retain, subject to negotiation in the CCW or another international forum.

Most definitions of autonomous weapons that have been offered reference abstract and unobservable things, such as functions, decisions, and selection of targets. However, arms control usually addresses things that are concrete and observable. I believe that the definition offered here meets that criterion.

New British killer robot policy II: mandatory two-way data links?

The new UK Joint Doctrine Publication (JDP) on Unmanned Aircraft Systems has been covered in The Verge and other media under headlines like “UK government says humans will always be in charge of its robot weapon systems.”

So here we go: what does “in charge of” mean? Actually, this new document repeatedly insists on the precision of its definitions and its own authenticity as a reflection of actual UK policy. So what is it saying so precisely?

The JDP doubles down on the position of its predecessor document, the 2011 Joint Doctrine Note (JDN), now marked on every page with loud announcements of its expired status, that “autonomous” must be distinguished from “automatic” or “automated” and that an “autonomous system” means one “capable of understanding higher level intent and direction.”

Defining “fully autonomous weapon” as something “with the ability to understand,” and with capabilities that stem from this “understanding,” places autonomous weapons firmly in the realm of artificial intelligence of a kind that so far exists only in science fiction. This makes it easy for the JDP to declare that

“The UK does not possess fully autonomous weapon systems and has no intention of developing them.”

However, while defining autonomy in terms of “understanding,” the JDP does not clarify what “understanding” means, other than that it provides a capability:

“From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present.”

We may question what level of artificial intelligence is required in order to begin to provide capabilities that might be described in such language. In fact, existing and emerging missile systems, for example, decide a course of action from a number of alternatives without depending on human oversight and control, which are assumed not to be present. But there is clearly much further to go in AI before equaling or exceeding human capabilities for situational awareness, comprehension and direction of appropriate action in relation to a desired state.

The predecessor document, the 2011 Joint Doctrine Note (JDN), noted that

“…proportionality and distinction would be particularly problematic, as both of these areas are likely to contain elements of ambiguity requiring sophisticated judgement. Such problems are particularly difficult for a machine to solve and would likely require some form of artificial intelligence to be successful. Estimates of when artificial intelligence will be achieved (as opposed to complex and clever automated systems) vary, but the consensus seems to lie between more than 5 years and less than 15 years, with some outliers far later than this.”

Artificial intelligence seems here to have been reified as something at least human-like. If it were as little as “5 years” away (which would be a year ago), one might think it would be something to be concerned about today. However, the JDN stated that “Autonomous systems will, in effect, be self-aware” and

“their response to inputs indistinguishable from, or even superior to, that of a manned aircraft. As such, they must be capable of achieving the same level of situational understanding as a human.”

The point of setting such a high bar for true autonomy was that it excluded everything anyone is actually doing or planning to do:

“This level of technology is not yet achievable and so, by the definition of autonomy in this JDN, none of the currently fielded or in-development unmanned aircraft platforms can be correctly described as autonomous.”

The new JDP drops the words “self-aware” and “same level of situational understanding as a human” but retains the argument that the “technology is not yet achievable,” so nothing the MoD is doing involves autonomous weapons, case closed:

“Fully autonomous weapons systems as we describe them (machines with the ability to understand higher-level intent, being capable of deciding a course of action without depending on human oversight and control) currently do not exist and are unlikely in the near future.”

Rather, the JDP asserts, on the premise that automation is not autonomy, that

“While some companies and research organisations are trying to develop autonomous systems, the UK’s view is that increasing automation, not autonomy, is required to improve capability.”

Yet it also describes how “automation” evolves into “autonomy”:

“For example, a mission may require a remotely piloted aircraft to carry out surveillance or monitoring of a given area, looking for a particular target type, before reporting contacts to a supervisor when found. A human-authorised subsequent attack would be no different to that by a manned aircraft and would be fully compliant with the LOAC, provided the human believed that, based on the information available, the attack met LOAC requirements and extant rules of engagement.”

In text that was cut from the new version, the JDN continued:

“From this position, it would be only a small technical step to enable an unmanned aircraft to fire a weapon based solely on its own sensors, or shared information, and without recourse to higher, human authority. Provided it could be shown that the controlling system appropriately assessed the LOAC principles (military necessity; humanity; distinction and proportionality) and that ROE were satisfied, this would be entirely legal.”

Here the JDN sought to defend the legality of what would obviously be a fully autonomous hunt-identify-kill mission, with machines even doing LOAC assessments. The new JDP drops this defiant stance and limits the scenario to human-authorized but possibly machine-directed attack, calling this “greater assistance to pilots and operators, and in-system survivability in non-permissive, contested and congested battlespace.”

The JDP concludes that

“the UK does not possess armed autonomous aircraft systems and it has no intention to develop them. The UK Government’s policy is clear that the operation of UK weapons will always be under human control as an absolute guarantee of human oversight, authority and accountability. Whilst weapon systems may operate in automatic modes there is always a person involved in setting appropriate parameters.”

However, it has defined “autonomous” as something beyond present technology, something nobody is doing. So its disavowal of autonomous weapons places no restrictions on, and has no effect on, anything the UK is doing or plans to do in “autonomous weapons” as the rest of the world understands the term.

We may be encouraged to know that “there is always a person involved in setting appropriate parameters” but this does not set a high standard for human control.

Nevertheless, the JDP contains some statements which may be encouraging if taken out of context, such as that

“the UK opposes the development of armed autonomous systems.”

and

“The UK does not own, and has no intention of developing, autonomous weapon systems as it wants commanders and politicians to act as the decision makers and to retain responsibility.”

It acknowledges the problem of predictability, and that the acceptance of responsibility by human operators and commanders for using an autonomous weapon system

“has an implicit assumption that a system will continue to behave in a predictable manner after commands are issued; clearly this becomes problematical as systems become more complex and operate for extended periods…. In reality, predictability is likely to be inversely proportional to mission and environmental complexity.”

From this candid observation, the JDP reaches a remarkable conclusion:

“For long-endurance missions engaged in complex scenarios, the authorised entity that holds legal responsibility will be required to exercise some level of supervision throughout. If so, this implies that any fielded system employing weapons will have to maintain a two-way data link between the aircraft and its controlling authority.”

If, as the document repeatedly insists, this represents actual UK policy, the conclusion that maintenance of “a two-way data link” is required for “long-endurance missions engaged in complex scenarios” may come to be of importance, as it would be one concrete standard against which the legality of systems and their use could be judged. However, a footnote weakens the pledge by adding that

“this link may not need to be continuous.”

New British Policy on Killer Robots?

The Guardian reports that the UK will soon issue “a new doctrine designed to calm concerns about the development of killer robots.” While it is beyond doubt that any such “new doctrine” will be designed to calm our concerns, based on the Guardian report it is unclear that anything new has been promised.

The US and UK have been playing the same basic game: posit fully autonomous weapons as a Kantian ideal of human-like intelligence and wanton killing for reasons of their own — something no one ever wanted — and disavow any intention of ever creating such monsters, far beyond current technology anyway. Meanwhile, plow ahead on every front of the robot arms race under the assurance that they are not creating the mythical out-of-control killer robots.

According to the Guardian, the MOD has pledged that “UK policy is that the operation of weapons will always be under control as an absolute guarantee of human oversight, authority and accountability. The UK does not possess fully autonomous weapon systems and has no intention of developing them.”

While placing this in quotes, The Guardian does not explicitly say who it is quoting. The closest match seems to be a letter to Richard Moyes of Article 36 from Paul Larkin of the Counter Proliferation and Arms Control Centre.

That letter reiterates the view that “The UK defines such systems as those capable of understanding higher level intent and direction allied to a sophisticated perception of its environment…” in other words, defining AWS as systems with approximately human-level intelligence, systems everyone will agree don’t exist yet, if ever.

The letter also reiterates that, even with AWS defined as far-off, human-level AI-driven killer robots, “The UK does not support a pre-emptive ban on such systems.” It argues that a ban would “stifle research” into the “legitimate non-lethal advantages to increasingly autonomous technology in the future.” It views “existing International Humanitarian Law [as] sufficient to control and regulate Lethal Autonomous Weapons Systems.”

In any event, if we take the Guardian’s unattributed quote as a statement of policy, there is no indication that it deviates from previous British policy, which, like the essentially equivalent American policy, draws attention to mythical killer robots that nobody wants and that are beyond current technology, while casting a veil over everything the military is actually doing or might potentially want to do, calling it merely “automatic” or “semi-autonomous” and not of concern.

The Babble out of Buffalo

Tero Karppi, Assistant Professor of Media Studies, SUNY Buffalo

I don’t know why “Killer Robots as cultural techniques,” a recent paper in the International Journal of Cultural Studies, got so much attention, other than that the University put out a press release and it was picked up by Phys.org, the Daily Mail, NewsMax and other outlets, generally with the spin put on it by the press release, that “it’s too late” to stop killer robots, according to “researchers.” I have now seen the paper, and I’m not impressed.

The authors engage in a series of trifling, superficial, pseudo-intellectual observations dressed up in babble and references to the works of other academics you’ve never heard of, plus Marx and Foucault. They affect the superior stance of observers uninvolved in political struggle and indifferent to its outcome, but very concerned that the Campaign to Stop Killer Robots is making “epistemological assumptions or claims” about “distinctions between human and machine” that the enlightened authors can see right through:

The campaign’s distinction between human and machine constructs and utilizes historical imaginaries brought to us by the increase of representations of autonomous automation and the cultural techniques that precede these representations. In this way, the human–machine distinction becomes a step in a recursive process through the embeddedness of cultural imaginaries and representations that form the epistemological foundation of the campaign’s category distinction.

I might be old-fashioned, but I still believe distinguishing between humans and machines is useful and meaningful. And what I really care about is not epistemological correctness but stopping this crazy arms race before it leads to nuclear war. The authors have nothing to say about that, of course. They are just keen to criticize the Campaign and position themselves as more cutting-edge, knowing and hipsterish.

In fact, the article frames itself at the outset as a critique of the Campaign, which is cited no less than 25 times in the text. This highlights Mary Wareham’s question in ZDnet: “We’re not sure why the authors of this report chose not contact the Campaign to Stop Killer Robots as they prepared it, but rather draw from our website.” Maybe because they’d found what they needed to dash off a Culture Studies paper and didn’t want anything to complicate their straw-man arguments.

The authors summarize in the last line of the paper:

Thus, while Killer Robots are an obvious manifestation of a historical imaginary of the 21st century, where automation has the potential to become universally destructive for humanity, they are also products of particular cultural techniques which ‘participate in the formation of subjects, as well as constitute ways of knowing and organizing social reality’ (Parikka, 2015: 30).

What I want to shout at them is, YEAH? SO WHAT? DO YOU SUPPORT THE BAN OR NOT? One might infer (weakly) that they do, unless they prefer universal destruction of humanity. However, it seems that this is of at best secondary concern to them. In the real world, the way they posture, and the way it has been reflected in media reports, serve only to undermine and weaken the call for meaningful and effective arms control.