The Future of Life Institute has posted on its website an “Open Letter from AI & Robotics Researchers” calling for “a ban on offensive autonomous weapons beyond meaningful human control.” It’s generally an excellent letter. I’ve added my signature, and urge you to add your own to the list, which already numbers in the hundreds and is growing rapidly – you don’t have to identify as an “AI & robotics researcher,” although most of the signatories do.
The wording of the letter does raise some questions that its authors may have found convenient to avoid, but which must be addressed as we move toward the realization of a ban on autonomous weapon systems (AWS). To define the issue it addresses, the letter states:
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.
The first part of this is adapted from the language of Department of Defense Directive 3000.09, which, as I’ve written here and in the Bulletin of the Atomic Scientists, hides the entire issue of what constitutes human control behind the ambiguity of its non-definition of “target selection.” The second part brackets this ambiguity but is somewhat ambiguous itself. Some of the questions that remain unanswered here include:
- Does the category of autonomous weapons always include systems that “search for and eliminate people,” or only if “certain pre-defined criteria” and not other kinds of criteria are used? In particular, if the criterion were identity, i.e., that the person was a known individual whom humans had already decided to kill, and if a robot could identify that person with high reliability, should that be allowed? What if the target was not a person, but a particular object, or any object of a certain kind?
- Does the category of “cruise missiles” include weapons that autonomously hunt targets within some area? Is it allowable to strike military targets, which may have people in or around them, on the basis of “certain pre-defined criteria”?
- What do we mean by “humans make all targeting decisions”?
According to the US policy, as interpreted by its author Paul Scharre and his coauthor Michael Horowitz, if a human has decided that a target should be attacked, it is acceptable to send a machine to hunt, identify and attack that target fully autonomously. The policy calls such weapons “semi-autonomous.” Inevitably, such systems must make their identification decisions by applying certain pre-defined criteria to information derived from sensor data.
What if humans have decided that any tank, ship or plane of a certain type, found within certain geographical boundaries, is an enemy target and should be attacked? Scharre has told me privately that he does not believe the policy allows this, but it explicitly allows “target groups” to be “selected” by humans and their engagement to be delegated to machines. In practice, since particular objects can only be recognized by their physical characteristics, if there is more than one object of a given type in an area – as will inevitably occur in war – the use of such weapons entails their selecting one object or another for engagement.
Thus, it is not enough to distinguish autonomous weapons from those for which “humans make all targeting decisions” without examining in detail what that phrase means. Leaving this question unaddressed risks being presented with an “autonomous weapons ban” or “meaningful human control” law that is effectively meaningless because it allows killer robots to make every kind of tactical targeting and engagement decision, provided that, under some doctrine, the decisions the robots make are the ones humans intended they should make.
To accept this would be in effect to say that killer robots are just fine, as long as they work as intended. To put this another way, the definition of a “semi-autonomous weapon system” appears to be “a fully autonomous weapon system that hasn’t malfunctioned yet.”
One high-ranking activist has told me that it’s not the role of campaigners to stake out the details of an agreement, because the governments will do this and negotiate among themselves. I question this: shouldn’t civil society be alert to the danger of solutions that only paper over the issues we’re concerned about, and shouldn’t we be clear with governments about what we want and what we would find inadequate or unacceptable? And isn’t the time to start being clear about this right at the beginning, when the issue is being framed in people’s minds and in the language that will later be used to discuss solutions?
In a recent article in The National Interest, Scharre and Horowitz pressed ban proponents to surrender on the issue they have defined as one of “advanced precision-guided munitions” or, in the language of DoDD 3000.09, “fire-and-forget or lock-on-after-launch homing munitions,” which the policy classifies as “semi-autonomous”:
Ban proponents should also clarify whether they see automated air and missile defense systems and fire and forget homing munitions, both of which have been used for decades by dozens of militaries, as problematic and targets of a ban.
Since the Campaign to Stop Killer Robots has identified its objective as a ban on “fully autonomous weapons,” it might seem logical for the Campaign to say it’s not concerned with “semi-autonomous” weapon systems. But this would be allowing Scharre and the Pentagon to define the terms used and the boundaries of our concern.
Rather, we should reject any lexicon in which systems that operate and make lethal decisions fully autonomously can be classified as less than fully autonomous weapon systems. And we should clarify that the decision to identify a bunch of sensor data as representing a particular object that is intended as a target is a lethal decision and one for which a human being must be held accountable.
The presumption must be that any systems that make such decisions are to be prohibited unless explicitly permitted (under stringently defined conditions). Permitted systems might include “grandfathered” fire-and-forget missiles of very limited capability, or purely defensive interception systems against uninhabited incoming munitions, provided such systems operate under accountable human supervision. As I’ve written here, this is a level of detail that will have to be negotiated, but civil society needs to be a part of that negotiation, and needs to be alert and clear about not giving too much away.
Again, the presumption should be to ban every kind of lethal decision by machines, and to require accountable human control at every significant decision point. Any exceptions would be concessions made for the sake of getting an agreement. But to concede that machine decisions are always acceptable as long as they correspond with human intentions (some go so far as to suggest human desires, or even human values as interpreted by machines) would be to concede everything of principle. It would reduce the issue to an engineering problem, one whose solubility would imply the acceptability of a future in which conflict and war could be fully automated and entirely out of human control.