Killer Robots and Laser-Guided Bombs: A reply to Horowitz & Scharre

Following the New York Times’s front-page publication of an article by John Markoff about the concerns I and others have raised about existing and emerging missile systems that search for, identify and attack targets fully autonomously, Michael Horowitz and Paul Scharre weighed in with a piece in Politico.com/magazine accusing me (by name) and other unnamed “activists” of “shifting” the issue of “lethal autonomous weapon systems (LAWS)” (the rubric used by the Convention on Certain Conventional Weapons – CCW) so as to include “precision guided munitions” (PGMs) which, they argue, “have saved countless civilian lives over the last generation.”

I have much to say about the rhetoric used by Horowitz and Scharre, but first let me cut to the heart of the matter. The weapon systems at issue here are not the laser-guided, television-guided or GPS-guided bombs and missiles that have been in use for decades. I was specifically quoted in the Times article talking about the Long-Range Anti-Ship Missile (LRASM), currently in advanced development by Lockheed Martin for DARPA and the US Navy. Other systems mentioned both in Markoff’s article and in Horowitz and Scharre’s piece include Kongsberg’s Joint Strike Missile (Norway), a still-in-development variant of its Naval Strike Missile, which is just entering service, and MBDA’s Brimstone missile (UK), first introduced in 2005.

What these missiles have in common is that they are launched in the general direction of their intended targets and told what those targets should look like to their various sensors, but in most cases the targets are out of sensor range at the time of launch, so the missiles fly without being “locked on.” Although in some cases the missiles may receive updates in flight, and Brimstone, for example, can be laser-guided to the target by a (presumably human) spotter, all these systems are also capable of autonomously searching for targets whose sensor signatures match those in their onboard computer databases, deciding that what the sensors are seeing is in fact a legitimate target, and attacking it without further human intervention.
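To make concrete what “lock-on-after-launch” amounts to in software terms, here is a minimal, purely illustrative sketch. Everything in it – the sensor channels, the weights, the threshold – is invented for exposition, not drawn from any real system; the point is only to show where the human’s input ends and the machine’s determination begins.

```python
# A purely illustrative sketch of lock-on-after-launch logic.
# All names, fields, weights and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Contact:
    """One object detected by the seeker; each field is a 0..1 similarity
    of that sensor channel to the signature profile loaded before launch."""
    radar: float
    infrared: float
    geometry: float

MATCH_THRESHOLD = 0.9  # hypothetical confidence cutoff

def signature_score(c: Contact) -> float:
    # Fuse the channels into one match score (weights are invented).
    return 0.4 * c.radar + 0.3 * c.infrared + 0.3 * c.geometry

def fly_out(scans):
    """Launched without a lock: no particular target is known at launch.
    'scans' is the sequence of contact lists the seeker sees en route."""
    for contacts in scans:
        for c in contacts:
            if signature_score(c) >= MATCH_THRESHOLD:
                return c   # the machine's own determination to engage
    return None            # no match found before fuel runs out

# Hypothetical run: nothing matches on the first scan; a match appears later.
print(fly_out([[Contact(0.2, 0.1, 0.3)],
               [Contact(0.95, 0.9, 0.92), Contact(0.3, 0.2, 0.1)]]))
```

The human contribution here is the loaded profile and the launch itself; the if-statement that returns a contact is exactly the step at issue in this debate.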

Horowitz and Scharre call these systems “next-generation precision-guided weapons,” and argue that they are “not the self-targeting ‘killer robots’ of our nightmares.” Yet Department of Defense Directive 3000.09, “Autonomy in Weapon Systems,” — the United States’ de facto policy for killer robots, of which Scharre is reputed to be a principal author — refers to such systems as “’Fire and forget’ or lock-on-after-launch homing munitions”, and classifies them as “semi-autonomous weapon systems” which are governed by the Directive. As well they should be, since the definitions given in the Directive are sufficiently broad that “semi-autonomous” weapons could include anything from the Brimstone missile to — as I’ll explain below — The Terminator himself (or, itself).

And what is the policy? According to Directive 3000.09, provided semi-autonomous weapon systems meet certain unremarkable criteria, which one might expect would apply to any weapon system (thorough testing, security measures, appropriate doctrines for use, etc. – plus, clearance by the Pentagon’s lawyers), they are fully green-lighted for immediate development, acquisition and use.

Note that the category of fire-and-forget or lock-on-after-launch homing munitions does not include laser-guided or television-guided munitions that home on a target held continuously by a human operator, nor does it include GPS-guided munitions, which simply go to a given location and explode. Those weapons are not governed by Directive 3000.09. But in response to questions from Markoff, the Navy confirmed that it does classify LRASM as “semi-autonomous” under the Directive.

When is a duck?

Equivocation is the ancient rhetorical strategy of exploiting ambiguities in the meaning of words and terms – or, equivalently, vagueness in the drawing of boundaries around categories – in order to confuse and confound thinking. Horowitz and Scharre accuse me of “Defining the precision-guided munitions used by countries around the world as the same thing as the Terminator-style robots of the movies.” But I am doing no such thing. Rather, it is they who are equivocating: conflating “semi-autonomous weapon systems,” as defined (in Scharre’s own work) by the Pentagon – and thus implicitly recognized as a departure into autonomy – with legacy PGMs that are not generally described as in any way autonomous.

According to Horowitz and Scharre, we learn little from the fact that the makers of these emerging and future missiles describe them as autonomous, since “defense contractors love to pepper the descriptions of their weapons with the word ‘autonomous,’ emphasizing their advanced features… .” But “autonomous” isn’t just a buzzword; it is commonly understood to mean that a system operates without human or other outside intervention. Moreover, since at least the 2012 release of Directive 3000.09, if a weapon system is described as autonomous, that is generally understood as referring to autonomy in the process of making the kill, not some ancillary function. Interestingly, LockMart has updated its LRASM home page, which used to describe the missile as “autonomous,” so that it now says “semi-autonomous.” Apparently, they believe the words mean something.

But this is indeed the point at which the terms in use, and the Pentagon’s definitions of them, become slippery. Horowitz and Scharre argue that these missiles aren’t truly autonomous, since “a person selects the targets they are engaging.” At issue here is not so much the meaning of “autonomous” as it is the word “select.” (“Engage” is straightforward; it means actually attempt to hit a particular target.)

Directive 3000.09 defines “target selection” as “The determination that an individual target or a specific group of targets is to be engaged.” That’s all it says, and that leaves a great deal unexplained.

Does “the determination” refer to a policy decision, an intelligence finding, a decision at the level of a tactical commander, or is it the action of a weapon system operator in giving commands to a machine? Does the mere loading of certain criteria for target identification fully determine that a particular target, and no other, will be engaged, or does the machine still need to make the decision that it has found a target which matches those criteria?
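A toy example, with invented numbers throughout, makes the second question concrete: suppose the operator loads one set of criteria, and two objects in the search area both satisfy it.

```python
# Self-contained toy example; the contact names and scores are invented.
# One set of loaded criteria, two contacts that both satisfy it: the
# "selection" the operator made does not determine which object is engaged.
MATCH_THRESHOLD = 0.9
scores = {"contact_A": 0.93, "contact_B": 0.95}  # hypothetical match scores

matches = {name: s for name, s in scores.items() if s >= MATCH_THRESHOLD}
target = max(matches, key=matches.get)  # a tie-break rule in software decides
print(target)  # -> contact_B
```

Whatever the operator had in mind, the final determination – contact_B, not contact_A – is made by a line of code.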

What does “specific group of targets” mean? Could we, for example, task a salvo of missiles to seek and destroy any T-62 tanks found within certain boundaries? Or task a robot to kill any of a list of known enemy combatants? Or an individual targeted person?

It isn’t even clear that “target selection” could not be defined in terms of behavioral criteria, such as “the sniper who’s firing on us.” Or how about “any person observed to be firing a weapon toward friendly forces”? Could this be interpreted as constituting “a specific group of targets”? Perhaps not, but Directive 3000.09 does not explicitly exclude this. It matters tremendously, because unless a line is drawn somewhere, it remains unclear how “semi-autonomous weapon systems” differ from anything that we would call fully autonomous.

To put it colorfully, The Terminator, with its mission to find and kill a specific person, could then be considered a semi-autonomous weapon system, and even the mayhem it caused along the way, and the killing of innocent people named Sarah Connor, could be excused as acceptable collateral damage in the service of a paramount military objective (prevent Sarah Connor from giving birth to the Resistance leader). This is pop culture and sci-fi, of course, but the point is, Where will we draw the line, once we have decided to let machines decide on their own that they’ve found valid targets, and attack them?

For Horowitz and Scharre, a weapon that operates autonomously remains “semi-autonomous” as long as “a person selects the targets.” Yet this leaves it to a machine to decide when it has found and identified a target that a person has selected (whatever that means). Does the mere fact that some person has in their mind an intention, that certain targets should be attacked, invalidate all of our concerns about letting machines make tactical decisions, including the decision to engage this particular target now?

Probability of atrocity

Directive 3000.09 attempts to preempt this issue by saying that the “lock-on-after-launch homing munitions” to be considered “semi-autonomous weapon systems” are those that “rely on TTPs to maximize the probability that the only targets within the seeker’s acquisition basket when the seeker activates are those individual targets or specific target groups that have been selected by a human operator.” TTPs are “tactics, techniques and procedures,” micro-doctrines for the use of particular weapon systems. This is saying, in other words, that if a missile’s seeker might not be able to distinguish a military aircraft from a civilian airliner, then whoever decides to launch the missile is supposed to follow procedures that will ensure there won’t be any airliners near where the missile is being sent to look for military targets – or at least to minimize the probability that there will be.
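The arithmetic behind such a TTP is worth a moment, and a toy calculation with invented figures shows why it matters. If the seeker misclassifies a given non-target ship as a target with some probability p, then with N non-target ships left inside the acquisition basket, the chance of at least one misidentification is 1 - (1 - p)^N – which is precisely why the procedures aim to keep N near zero:

```python
# Illustrative arithmetic only; p and N are invented, not measured figures.
p = 0.01  # hypothetical probability the seeker misclassifies one non-target ship
for N in (1, 5, 20):
    prob_any_error = 1 - (1 - p) ** N
    print(f"{N:2d} non-target ships in the basket -> "
          f"{prob_any_error:.1%} chance of at least one misidentification")
```

Even a seeker that is, hypothetically, 99 percent reliable per ship gives roughly an 18 percent chance of error with twenty strangers in the basket. The TTPs, not the seeker, are doing much of the work.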

This is more or less how things are supposed to work today, but as shown by the recent case of Malaysia Airlines Flight 17, and the more distant memory of Iran Air Flight 655, behavior in combat often fails to conform to doctrine. These tragedies were caused by human error, perhaps, but human error in the use of “semi-autonomous weapon systems” where, indeed, engagement of the target was intended by human operators, who made their decisions from long range, with insufficient information, and tasked their robots to carry them out.

This prompts a reality check on claims that PGMs “save lives” – and that killer robots will. The first question that should be asked is, “Compared with what?” Yes, high-altitude bombing of military facilities sited in urban areas is likely to kill many times more people if carried out with unguided bombs than if carried out with precision-guided weapons. But are you more likely to attempt such an attack, and perhaps kill some civilians, if you have PGMs? If you do not have them, are you more likely to judge the attack not worth the risk?

The United States has carried out more than 500 non-battlefield targeted killing drone strikes, killing at least hundreds of civilians (and injuring thousands) in the pursuit of a relatively small number, dozens, of intended targets. Would we do this if we did not have drones? Does anybody believe we would instead be carpet-bombing northern Pakistan, Yemen and Somalia?

What will be the actual impact of autonomous weapons on civilians, as well as soldiers and global security itself? Glib comparisons of the accuracy of guided vs. unguided weapons do not answer these questions.

Horowitz and Scharre note that “Human Rights Watch recently asserted that the use of unguided munitions in populated areas violates international law.” But HRW did not say that populated areas should be attacked with guided weapons. Moreover, apart from the absurdity of the idea that any weapon “saves lives,” we have very good reasons to think that autonomous weapons pose a threat to humanity itself. We are on the cusp of a new global strategic arms race, a quarter century after finding, with much surprise, that we had survived the last one. Banning killer robots is not only a matter of human rights and humanitarian law. It is also a critical objective for arms control.

Join us now, on a journey of discovery and destruction…

Consider what LRASM is supposed to do. Details are classified, but it is believed to use both passive and active radar, plus optical and infrared sensors, and sophisticated signal processing and artificial intelligence technologies to autonomously hunt and kill. A highly entertaining video released by Lockheed Martin portrays a typical engagement sequence.

A “hostile surface action group (SAG)” lies hundreds of kilometers away, and includes both targeted and untargeted ships. The missiles will recognize the targeted ships in part by features of their geometry, which may be reflected in both optical imagery and radar returns (0:35 – 0:55). The SAG has been spotted by satellites (1:00 – 1:10), drones or some other source of intelligence, but by the time the missiles are close enough to see the SAG, it will have moved so far that the missiles will need to search a wide area and discriminate the SAG from other ships.

Launched from surface ships or aircraft, a salvo of LRASMs speeds toward the SAG (1:20 – 2:20). For a while they are able to receive targeting and guidance updates from both the launching ships and satellites, but then they pass into a “communications and GPS denied environment” (which may be due to destruction of the satellites in an ASAT attack, destruction of the launching ships or other command facilities, jamming, the need for stealth, or any combination of these factors) (2:35). From this point forward, the missiles are acting fully autonomously, although they can still talk to each other (their highly versatile radio systems integrate radar, jamming and communications). They are following a planned route, but along the way they may spot unexpected threats, such as another hostile ship or radar signal (in general, you can see active radar before it can see you). They autonomously decide to reroute around the threats (2:50 – 3:20).

As the missiles approach the vicinity of the SAG (3:30), their sensors must scan a wide area of uncertainty (AOU) which will turn out to contain commercial ships as well as the SAG. The missiles proceed through the steps of “AOU reduction,” “target identification” and “terminal routing,” choosing their own routes to the targets identified by “criteria match” (3:30 – 3:50). By this time, the missiles can be seen by the targeted ships’ own active radar, so as they approach, they descend to sea-skimming altitude and apply other “enhanced survivability techniques” (4:00 – 4:20). Presumably, by talking to one another, the salvo of missiles can plan a more effective attack, from multiple directions.

Turning on their own active radar (4:23), the missiles bypass untargeted ships while avoiding any threats they might pose. They scan the targeted ships to determine optimal aim points. At the last moment (4:45), the targeted ships’ close-in weapon systems swing into action and attempt to engage the LRASMs, but it’s too late.

Wham. Thunder. Dark skies. Lockheed Martin.
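Stripped of the cinematics, the sequence the video depicts reduces to a phase-by-phase autonomy pipeline. Here is a hedged sketch of that pipeline as a trivial state machine; the phase names loosely follow the video’s own captions, while the structure and labels are my own reconstruction, not anything published about LRASM:

```python
from enum import Enum, auto

class Phase(Enum):
    ROUTE_FOLLOWING   = auto()  # planned route; comms and GPS still available
    AUTONOMOUS_FLIGHT = auto()  # comms/GPS denied: no further human input
    THREAT_AVOIDANCE  = auto()  # reroute around pop-up threats
    AOU_REDUCTION     = auto()  # narrow the area of uncertainty
    TARGET_ID         = auto()  # "criteria match" against the onboard database
    TERMINAL_ROUTING  = auto()  # sea-skimming approach, aim-point selection
    ENGAGE            = auto()

# Hypothetical happy-path ordering, per the video (roughly 0:35 - 4:45).
PIPELINE = list(Phase)

# The crux of the argument in one loop: every phase from AUTONOMOUS_FLIGHT
# onward runs with no human in the loop.
cutoff = PIPELINE.index(Phase.AUTONOMOUS_FLIGHT)
for i, phase in enumerate(PIPELINE):
    actor = "human/comms available" if i < cutoff else "machine only"
    print(f"{phase.name:18s} [{actor}]")
```

Every interesting decision in this walkthrough – rerouting, identification, aim-point choice – falls on the “machine only” side of the line.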

Markoff slightly misquoted me in the Times article, which actually was largely an outgrowth of my discussion of LRASM with him when we met at the DARPA Robotics Challenge in Miami in December last year. Horowitz and Scharre may have been a bit gleeful in re-quoting the resulting somewhat crazy-sounding statement that LRASM represents “pretty sophisticated stuff that I would call artificial intelligence outside human control.” It should have read “operating outside human control,” but either way, I stand by the statement.

No, I don’t believe LRASM is an evil AI bent on exterminating humanity, but it certainly looks to be state-of-the-art technology, and if its “concept of operations” doesn’t qualify it as a fully autonomous weapon system, what exactly would qualify? That is the question I am posing, and I won’t take “something that actually selects its own targets” as an answer, unless you can explain exactly what “selects” means, and how that differs from “interprets a bunch of sensor data in terms of its own database and algorithms, and on the basis of that comes to its own determination that a particular ‘target’—i.e. collection of pixels and pulses— and not that one or the other one over there, is to be engaged.”

Arms races and arms control talks

Of course, it isn’t only the United States that is developing this technology; as previously noted, the UK and Norway also have systems with some of the same capabilities, and Russia, China and many other nations have such systems both in use and under development. So-called “beyond visual range” missiles such as the American AMRAAM air-to-air missile, loitering ground attack systems such as the Israeli Harpy anti-radar suicide drone and the cancelled American multirole loitering cruise missiles LOCAAS and NLOS-LAM, and many others, must all be considered in this light.

Should the ongoing LAWS negotiations seek to ban all such “fire-and-forget” or “lock-on-after-launch homing munitions,” or perhaps to regulate and draw some boundaries around them? Horowitz and Scharre warn that “An expansion of the agenda to include precision-guided weapons would most likely end CCW discussions.” But again, they are the ones attempting to conflate autonomous weapons with precision human-guided ones.

If banning killer robots really required rolling back military technology to the 1960s, I’d agree that raising this issue would lead to a collapse of the talks. But failure, or refusal, to address and engage the thorny issues raised by existing and emerging “semi-autonomous weapon systems,” as defined by the Pentagon, is equally likely to lead to such a collapse: it would leave us without any coherent way to define boundaries around what we want to prevent, other than distinguishing “existing” from “future” weapons, and in reality the action in autonomous weapons development today is almost entirely in systems that plausibly fit the Pentagon’s definition of “semi-autonomous.” The implicit theory is that tactics, techniques and procedures, combined with the discrimination capabilities of sensors and computers, will suffice to prevent most, if not all, unintended engagements. This is basically how things have been done until now, but it is not an acceptable framework for the future, because it allows the development and use of autonomous weapons to proceed indefinitely without ever bumping into any limit that has actually been written down.

Most of these new weapons are intended for force-on-force combat on and under the high seas, in the air and on battlefields away from civilian populations, to be used at the discretion of commanders who will be tasked to take any residual threats to civilians into account. Nobody has any intention of putting Robocops or Terminators onto urban streets any time in the near future, but the world’s major powers are gearing up in earnest for a new strategic arms race, one in which they will confront each other. LRASM is primarily aimed at China, and probably Russia now that the old Cold War is reputedly back on. The greatest threat to civilians is the same as it’s been for seven decades: nuclear war.

What is at stake in the CCW LAWS talks, and in the entire issue of killer robots (which does not go away if LAWS collapses), is quite simply, everything. That’s big enough stakes to justify a reexamination of trends in military technology that, indeed, have been under way for decades now. Perhaps the solution will be as simple as “grandfathering” everything that already exists. One can lay out broad principles, and then enumerate specific exceptions; it’s always possible to say, “No guns allowed, except for antiques.” Or perhaps, more plausibly, the construction of a robust and enduring arms control regime will require rolling back some things, while allowing others. It wouldn’t be the first time that concluding an arms control agreement has required scrapping some existing weapons.

What I am saying is that this kind of horse trading is going to be unavoidable, and trying to push it aside is a way to steer the discussion toward incoherence, sophistry, and ultimately irrelevance, while the main thrust of the robot arms race surges forward with greater and greater momentum.
