Landmines that Walk

Greetings from Geneva, where I am attending the first meeting of the Group of Governmental Experts on Lethal Autonomous Weapon Systems (LAWS) of the Convention on Certain Conventional Weapons (CCW). I am not governmental, and you may judge whether I am an expert, but in any case, here I am.

One of the things you hear over and over here is that we need a “working definition” of LAWS, or of AWS if you don’t think it would be OK for robots to Taser people at will, or that we could solve war by having killer robots fight each other and just let us know who won.

In our search for a working definition of AWS or just “autonomous weapons,” we might take a cue from the Ottawa Convention, which bans antipersonnel landmines and defines them as follows:

‘Anti-personnel mine’ means a mine designed to be exploded by the presence, proximity or contact of a person and that will incapacitate, injure or kill one or more persons.

In 2005, the Norwegian Ministry of Finance asked its Advisory Council on Ethics to advise on whether two weapon systems then in development by Textron Systems, the Intelligent Munitions System (IMS) and the Spider networked munitions system, would fall under the Ottawa Convention. Both were intended as replacements for traditional landmines; instead of operating solely as isolated munitions, they would be networked and potentially triggerable either by an operator or by a sensor detecting the presence of a target. IMS was never produced, but Spider M7 is currently in use by the US Army—as a “man-in-the-loop” weapon system, without the “battlefield override switch” that would allow it to operate fully autonomously.

In its recommendation, the Advisory Council first rejected the notion that classifying these systems as something other than landmines would place them outside the Ottawa definition:

“The Advisory Council finds that all weapons that are designed to explode because of a person’s inadvertent contact, falls within the definition of an antipersonnel mine…. The point with such weapons is to be able to engage the enemy without being in active combat with them or even being present in the area. The mine is activated by the victim, not by the person that emplaced it.”

Second, the Council found that

“…if the weapons systems in question are going to be equipped with “battlefield override” features, or in other ways designed to circumvent the “man-in-the-loop” feature, they will fall within the scope of the prohibition in the Convention.”

Given that the Ottawa Convention defines mines as things that explode, it clearly cannot cover autonomous weapons in general. But mines can be viewed as a subcategory of autonomous weapons, and the Convention’s definition of landmines can therefore be generalized. Here is one possible wording:

‘Autonomous weapon’ means a weapon system designed to be triggered by the presence of an object or person or by fulfillment of specified conditions, rather than by a human operator.

Such a framing would require an ancillary definition of “trigger,” but this could be as simple as a time limit. A system that commences violent action within, say, 7 seconds of a human command might be considered operator-triggered; the operator would thus have a few seconds to rescind an inadvertent or erroneous command. If, however, the operator commands the system to act at some unspecified future time, whenever a target is detected or located or behaves in a certain way, then the system is not triggered by the operator.
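To make the time-limit test concrete, here is a minimal sketch in Python of how it might be expressed. Everything in it is an illustrative assumption on my part (the 7-second constant, the Engagement fields, the function name); it is not drawn from the Convention or from any proposal on the table.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative assumption: a 7-second window within which violent action
# still counts as triggered by the human command that preceded it.
OPERATOR_TRIGGER_WINDOW_S = 7.0

@dataclass
class Engagement:
    command_time: Optional[float]  # when a human issued a fire command, if any
    action_time: float             # when the system commenced violent action
    rescinded: bool = False        # operator withdrew the command before action

def is_operator_triggered(e: Engagement) -> bool:
    """Count an engagement as operator-triggered only if violent action
    followed an unrescinded human command within the time window;
    otherwise the immediate trigger is the target or a condition."""
    if e.command_time is None or e.rescinded:
        return False
    return (e.action_time - e.command_time) <= OPERATOR_TRIGGER_WINDOW_S

# A direct command executed at once is operator-triggered...
print(is_operator_triggered(Engagement(command_time=0.0, action_time=3.0)))    # True
# ...but "fire when a target appears," acting hours later, is not.
print(is_operator_triggered(Engagement(command_time=0.0, action_time=5400.0))) # False
```

The rescind flag captures the point of the window: what makes the human, rather than the target, the immediate cause of the action is precisely that the operator retains a few seconds in which the command can still be withdrawn.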

The Council’s rejection of the argument that the networked nature of Spider and IMS would make them not landmines stands as a rebuttal to those who argue that networked systems with embedded AI would not be recognizable as autonomous weapons. Regardless of a system’s architecture or geometry, what matters is whether its violent action is triggered by the action of a human operator, or by some other immediate cause.

Likewise, the Council’s finding that Spider and IMS would be exempt from the Convention if they could be triggered only by a human operator and not by the victim, but not if they had an optional autonomous mode, stands as a rebuke to the practice of human-washing autonomous weapons by adding “man-in-the-loop” capabilities while retaining autonomous ones.

Starting with the framing of landmines as “victim-activated” and generalizing it to autonomous weapons as “target- or condition-triggered” need not mean that everything that meets this definition must be banned. An autonomous weapons convention can provide exemptions for purely defensive anti-munition systems, for some kinds of lock-on-after-launch homing munitions, and for other systems that nations may desire to retain, subject to negotiation in the CCW or another international forum.

Most definitions of autonomous weapons that have been offered refer to abstract and unobservable things, such as functions, decisions, and the selection of targets. Arms control, however, usually addresses things that are concrete and observable. I believe the definition offered here meets that criterion.
