The words “autonomy” and “autonomous” can take on various shades of meaning and lead into fascinating discussions in various contexts, including philosophy, politics, law, sociology, biology and robotics. I propose that we should ignore all of this in the context of autonomous weapon systems (AWS).
We should also not speak of “degrees” or “levels” of autonomy, but seek principles and definitions that point, as directly as possible, to a Yes or No answer to the question “Is this an autonomous weapon?”
In fact, the US Department of Defense has given us a firm foundation for such a definition in its Nov. 2012 Directive on Autonomy in Weapon Systems. According to DoD Directive 3000.09, an autonomous weapon system (AWS) is
A weapon system that, once activated, can select and engage targets without further intervention by a human operator.
I propose that this should be universally accepted as a starting point.
What remains is to clarify the meaning of “select and engage targets” – and also, to negotiate which autonomous weapons might be exempted, and under what conditions, from a general ban on AWS.
Select and Engage
“Engage” is not too hard. Weapons engagement of a target means the commission of an individual violent act, such as the firing of a gun, the dropping of a bomb or the crashing of a missile into a target, with or without the detonation of a warhead.
It is thus potentially a more fine-grained term than “attack” as used in international humanitarian law (IHL), the law of war. An “attack” might encompass many individual engagements, and accordingly an attack decision by a commander may authorize many engagement decisions, yet to be made, by individual soldiers or subsidiary commanders.
As used in DoDD 3000.09, engagement includes both lethal and non-lethal effects, against either personnel or materiel targets.
“Target” is slightly more difficult, if only because the status of an object as the target of a weapon is an attribute of the weapon system or persons controlling and commanding it, not of the object itself. When exactly does an object become a target?
Fortunately for present purposes, the word “target” appears in the Pentagon’s definition only as an object of the verbs “select” and “engage.” Therefore, we can say that an object becomes a target when it is selected; before that it may be a “potential target,” another term which appears in DoDD 3000.09.
Of course, selection must occur prior to engagement; an object harmed without having been selected is called “collateral damage,” be it a house, a garden, or a person.
The word “select” is where the serious issues hide, and where the clarity of DoDD 3000.09 begins to break down.
Please Select a Target
The Directive defines “target selection” thus:
The determination that an individual target or a specific group of targets is to be engaged.
But what constitutes “determination”? Is this an intelligence finding, a command decision given to the operator of a weapon system, or is it the operator’s command given to the system? It could be any or all of these, but if the operator is the final human in the so-called “kill chain” or “loop” (and the one person in whom all of these functions may be combined), then the transaction between the operator and the system is where we should focus our attention.
The classic way for a weapon system operator to determine that a target is to be engaged is to throw a rock or spear toward it, thrust a sword into it, or aim and fire a gun at it. The weapon may go astray, but there is no question that the intended target is selected by the human.
One may imagine that in a remotely operated, non-autonomous weapon system, target selection might mean that humans watch a video feed, and see something that looks, to them, like a valid military objective. A human commander determines (legally and operationally) that it is to be engaged. Using a mouse, joystick or touchscreen, a human operator designates the target to the system. This, plus some other positive action to verify the command, determines – materially – that the target is to be engaged.
However, DoDD 3000.09 admits a far more permissive interpretation of “target selection.” This becomes clear from its definition of “semi-autonomous weapon system (SAWS).”
Two types of SAWS are described. The first are
systems that employ autonomy for engagement-related functions including, but not limited to, acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.
This suggests that “target selection” could mean as little as giving a “Go” to engage after the system places a cursor over potential targets it has acquired, tracked, identified, and cued to human operators. Indeed, it is not at all clear what “human control… over the decision” means in this context.
The Meaning of Human Control
Noel Sharkey has discussed “meaningful human control” in terms of five “levels of control”:
- human deliberates about a target before initiating any attack
- program provides a list of targets and human chooses which to attack
- program selects target and human must approve before attack
- program selects target and human has restricted time to veto
- program selects target and initiates attack without human involvement
Sharkey focuses on deliberation as the thing that fulfills the requirements of IHL and makes “human control” meaningful. In the first level, the human “deliberates” and, like a rifleman, “initiates” the attack. In level 2, the human is more in the position of a senior commander, reviewing a “list of targets” and choosing among them. If no choice is made, there is no attack. However, in view of historical experience and “automation bias,” Sharkey concludes that the meaningfulness of this level of control is contingent. Machine rank-ordering of targets, for example, would tip the scale as “there would be a tendency to accept the top ranked target”. Insufficient information, time pressure, or a distracting environment could also render this level of human control meaningless.
Sharkey rules out level 3 and higher levels of autonomy as not meeting the requirement for human deliberation. However, it is not clear how “program selects target” differs from “program provides a list.” The list would probably not be on paper. Could it be a “gallery” like a dating site? Or a textual list with hyperlinks to intelligence reports, summaries of which open up automatically when the operator hovers over the list item? Could the system provide both a visual display of targets, and a textual list? Wouldn’t the visual display likely convey more information than just a short verbal description?
The best interpretation of the difference between levels 2 and 3 is that in the former, the human “decision maker” is being told “You decide what to target; here are some possibilities.” In the latter, the human is being told “Here are the targets; which ones should be hit, or not?” It’s a subtle difference, and in every technical and operational respect, the two situations might be identical.
Thus, while the focus on human deliberation – and the factors that might encourage and assist, or discourage and hinder it – is important and essential, the effort to enumerate and order a definitive hierarchy of autonomy levels once again leads to further questions.
The desired number of levels is two. We want to know which systems are autonomous.
Similarly, we should seek a definition of “human control” which is free of degrees of meaning. Weapons and conflict must always be under human control, period. Given any configuration of technical systems and human beings, we need to be able to decide whether this principle is satisfied, or not.
Of course, people may have more or less control of a machine, as becomes clear when we think of situations that arise with automobiles or nuclear power plants. But again, it is probably not possible to iron out all the dimensions of complexity to create a definitive hierarchy of control levels. Is it clear that a console providing more data and more knobs equates to more control? Perhaps, in general, but probably not always.
On the other hand, the operators of such machines accept moral and legal responsibilities to maintain control, and are held accountable for the consequences if they fail to do so.
I would propose, similarly, that acceptance of responsibility should be the standard for human control. Thus the principle of human control corresponds with the principle of human responsibility.
Just as commanders are not responsible for crimes committed by soldiers under their command, unless the commander directly ordered the criminal actions, so humans cannot accept responsibility for decisions made by machines. But if humans are to make the decisions, it is their responsibility to maintain control of the machines.
This makes it the responsibility of a human commander to determine whether she has sufficient information to make an attack decision, independent of any machine judgment. It also makes it the responsibility of a human operator to determine whether the system gives him effective control of its actions. Both should be held accountable for any “accidents.” It is not acceptable to say “the trigger was too sensitive” or “the computer said it was a valid target.”
Short-Circuiting Arms Control
The Pentagon’s distinction between semi-autonomous and autonomous weapons would also fail to support meaningful arms control.
The first type of SAWS includes systems which have every technical capability for fully autonomous target selection and engagement, requiring only a one-bit signal, the “Go” command, to initiate violence. A SAWS of this type could be converted to a fully autonomous weapon system (FAWS = AWS in the Pentagon’s lexicon) by a trivial modification or hack. Thus, permitting their development, production and deployment would mean permitting the perfection and proliferation of autonomous weaponry, with only the thinnest, unverifiable restraint remaining on their use as FAWS.
For the second type, there is not even a thin line to be crossed.
Fire and Forget
The second type of SAWS in DoDD 3000.09 comprises
“Fire and forget” or lock-on-after-launch homing munitions that rely on TTPs to maximize the probability that the only targets within the seeker’s acquisition basket when the seeker activates are those individual targets or specific target groups that have been selected by a human operator.
This means that SAWS, as defined by the Pentagon, would include a “homing munition” that, at some point after its launch, without further human intervention, acquires (i.e. detects the presence of, and roughly locates, via its seeker) a potential target or targets, thereafter tracks it or them, identifies it or them (either as particular objects or members of targeted classes, based on some criteria), and determines that one or more of the potential targets it has acquired and is tracking is to be engaged.
However, the definition (thus the policy), does not regard this as “target selection” by the weapon system. Rather, selection is deemed to have occurred when the human operator launched the weapon, relying on tactics, techniques and procedures (TTPs) to “maximize the probability” that there are no “targets [sic]” which the seeker might mistake for the targets “selected” (i.e., intended) by the human.
The “seeker’s acquisition basket” may be regarded as a product of two functions (which may be further decomposable): the seeker’s inherent response to various objects – as viewed from various directions and distances under various conditions – and any constraints in space and time which are applied by the operator.
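This two-function decomposition can be sketched abstractly. The following is a minimal illustration only; the function names, threshold, kill box, and time window are all invented for the sake of the example and correspond to no real seeker:

```python
# Illustrative sketch: the "acquisition basket" as the logical product of
# two functions -- the seeker's inherent response to an object, and the
# operator-applied constraints in space and time. All values are invented.

def seeker_response(signature_match: float) -> bool:
    # Inherent sensor response: does this object's signature resemble
    # the intended target class closely enough for the seeker to react?
    return signature_match >= 0.6  # hypothetical sensitivity

def within_constraints(position_km: float, time_s: float,
                       kill_box_km: tuple, window_s: tuple) -> bool:
    # Operator-imposed spatial and temporal limits (the "kill box" and
    # search window) that shrink the basket.
    return (kill_box_km[0] <= position_km <= kill_box_km[1]
            and window_s[0] <= time_s <= window_s[1])

def in_acquisition_basket(signature_match: float, position_km: float,
                          time_s: float,
                          kill_box_km: tuple = (10.0, 40.0),
                          window_s: tuple = (0.0, 120.0)) -> bool:
    # The basket is the conjunction (product) of the two functions.
    return seeker_response(signature_match) and within_constraints(
        position_km, time_s, kill_box_km, window_s)

# A strong signature match inside the box and window is acquired;
# the same object outside the kill box is not.
print(in_acquisition_basket(0.9, 25.0, 60.0))  # True
print(in_acquisition_basket(0.9, 55.0, 60.0))  # False
```

The point of the decomposition is that only the second function is under the operator’s control at launch; the first is fixed by the seeker’s design.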
For example, a “beyond visual range” (BVR) anti-aircraft missile such as AMRAAM or a “beyond line of sight” (BLOS) anti-ship missile such as LRASM (under development by Lockheed Martin for DARPA) follows a flight path roughly predetermined at the time of its launch, or it may receive updates telling it to follow a slightly different course, on the basis of new information that the target(s) have moved.
At some point, it begins looking for the target(s), that is, “the seeker activates.” This may in fact be a multistage process, as the missile approaches its target(s) and passive radar, active radar, IR and optical sensors successively come within range and tactical appropriateness (e.g., active radar may be avoided for reasons of stealth).
In effect, the seeker’s search area is restricted by the flight plan of the missile, which may also be programmed to deactivate if it fails to locate an appropriate target within a designated area or time frame.
Alternatively, a “loitering missile” or “wide area search munition” may be instructed to search within a designated “kill box.” Examples include Israel Aircraft Industries’ Harpy “suicide drone,” which searches for enemy radars based on the signatures of their signals, and Lockheed Martin’s canceled LOCAAS, which was intended to autonomously recognize and attack a variety of target types, including personnel.
This is the meaning of “rely on TTPs”; the operator is supposed to follow procedures which minimize the probability that objects which the seeker might mistake for an intended target will be found within the area and time in which the missile will be searching.
Thus, if the seeker is not itself capable of distinguishing a cruiser from a cruise ship, a tanker plane from a jumbo jet, or a soldier from a civilian, it is the weapon system operator’s responsibility to follow procedures that at least minimize the probability that the wrong kind of object will be found within the seeker’s acquisition basket.
In practice, seekers will, either explicitly or implicitly, map their sensor data, via some computation, to a Bayesian probability that a potential target is an intended target. Since the missile must either engage that target or not, some threshold will have to be set for the decision to identify a target.
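That implicit computation can be made concrete in a short sketch. Everything here is an assumption for illustration: the prior, the likelihoods, and the 0.95 engagement threshold are invented numbers, not parameters of any actual seeker:

```python
# Illustrative Bayesian identification sketch: map sensor evidence to a
# posterior probability that a contact is the intended target, then apply
# a decision threshold. All probabilities are invented for illustration.

def posterior_is_target(prior: float, p_evidence_given_target: float,
                        p_evidence_given_other: float) -> float:
    # Bayes' rule over two hypotheses: "intended target" vs. "other object".
    numerator = p_evidence_given_target * prior
    denominator = numerator + p_evidence_given_other * (1.0 - prior)
    return numerator / denominator

ENGAGE_THRESHOLD = 0.95  # hypothetical confidence required to engage

def decide(prior: float, p_e_target: float, p_e_other: float) -> str:
    # The missile must either engage or not, so the continuous posterior
    # is forced through a threshold into a binary decision.
    p = posterior_is_target(prior, p_e_target, p_e_other)
    return "engage" if p >= ENGAGE_THRESHOLD else "continue search"

# Strong, distinctive signature match: posterior 0.99, above threshold.
print(decide(prior=0.5, p_e_target=0.99, p_e_other=0.01))  # engage

# Ambiguous return (the cruiser/cruise-ship problem): posterior ~0.55.
print(decide(prior=0.5, p_e_target=0.6, p_e_other=0.5))    # continue search
```

Wherever that threshold is set, the decision it gates is made by the weapon, not by the operator.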
Once the decision to engage a particular target has been made, the missile may be considered to have “locked-on” to the target, which will be continually tracked without ambiguity. No new decision need be made, although complications may arise if the system is programmed to abort engagement on some conditions, such as its collection of further information about the target, or the possible presence of civilians.
Simpler missiles, in contrast, must “lock on” before launch, under direct human control and with a human operator selecting the target. If the target succeeds in evasive countermeasures, such missiles may “break lock,” and generally cannot then re-acquire the target.
The Necessity of Decision
From this discussion, it becomes clear that the “lock-on-after-launch homing munition” does, in fact, make lethal decisions autonomously – it must decide whether to accept particular sensor data as the signature of an intended target and to engage that target, or which one to engage if there is more than one; otherwise it may continue searching, or decide to abort its mission.
It must make these decisions without further intervention by a human operator. The authors of DoDD 3000.09 may have wished to avoid using the word “select” in this context, but particularly in the case that there are multiple objects which might be engaged, it is clearly the weapon that makes the final determination that a particular potential target “is to be engaged.”
It also becomes clear that the system which autonomously decides whether, and which potential targets to engage, is not being controlled by humans when it does so. It is controlled by environmental conditions which may be somewhat unpredictable, and by its programming – that is, by itself. It is an autonomous system. There is no mystery in this; it is simply an operational fact.
Some may argue that the program was written by humans, but this position makes machine autonomy impossible, since machines owe their entire existence to humans. Besides, it is by no means certain that, even today, let alone in the future, software will be written and systems engineered solely by humans. In reality, the complexity of present-day technology already precludes meaningful human control or accountability for aberrant behavior. No one is going to be court-martialed for a bug.
The classification of “fire-and-forget or lock-on-after-launch homing munitions” as semi-autonomous may be convenient for the Pentagon, and some AWS ban proponents may prefer to accept it as well, in order to avoid confronting the fact that many existing missile systems and some underwater systems fall into this category. However, this is an attempt to avoid the unavoidable, because not only do these systems already cross the line of de facto autonomous target selection and engagement, but there is no recess in the domain of fully autonomous weapons which is not accessible by further extension of this paradigm. To put it bluntly and colorfully, this is a loophole The Terminator could walk through.
Inappropriate Levels of Robot Judgment
A missile like LRASM may not fulfill the popular conception of a “killer robot,” but as depicted in a video published by Lockheed Martin, it is designed not only to autonomously maneuver around “pop up threats” posed by enemy radar and antimissile systems, and autonomously recognize the “hostile surface combat group” it was sent to attack, but also to discriminate the various ships in that group from one another, and determine which to engage. Several missiles might be sent in a salvo, and might coordinate with each other to determine an optimum target assignment and plan of attack.
The key fact is that these missiles would make final targeting decisions, based on fairly complicated criteria. DoDD 3000.09 places no upper limit on the sophistication of the sensors and computers or complexity of the algorithms and criteria for “lock-on-after-launch,” nor are “homing munitions” obviously limited to flying objects. If ships may be identified on the basis of signatures in their radar returns, might targeted killing not be carried out on the basis of face recognition and other biometrics?
More generally, might combatants not be identified by signatures of their behavior, dress, or any other cues that human soldiers rely on to distinguish them from civilians? Suppose a robot is sent out that is programmed to fire upon anyone who matches the biometric signatures of certain known hostile persons, or who is determined to be taking specific hostile actions. Is this not a fully autonomous weapon system?
The central conceit of DoDD 3000.09 is that “target selection” will be done only by humans or by autonomous weapon systems which are either anti-materiel weapons with non-kinetic effects, or else have been explicitly approved by senior officials; and in all cases, commanders shall exercise “appropriate levels of human judgment in the use of force.” But is there any limit to the levels of robot judgment that may be appropriately exercised by “semi-autonomous weapon systems”? DoDD 3000.09 provides no answer.
The negotiation of limits on SAWS is thus unavoidable if the project of a ban on FAWS is to succeed. It must be acknowledged that the distinction of the first kind of SAWS from FAWS is so thin as to be meaningless, both from the point of view of meaningful human control and of effective arms control. The distinction of the second kind of SAWS from FAWS is nonexistent.
The Line Drawn
I conclude that the first kind of SAWS must be subject to restrictions and accountability requirements if a ban on FAWS is to be meaningful; and the second type of SAWS must be acknowledged as actually a type of FAWS.
Is it too much to ask for a ban on missiles like AMRAAM and the autonomous Brimstone [see endnote], which already exist and have been used in war, or even missiles like LRASM which are close to reality? Perhaps, but arms control in the past has often entailed the scrapping of existing systems. Israel has already offered a successor to Harpy called Harop, which adds an electro-optical system and two-way control link so that the system is not necessarily fully autonomous.
It is also possible to say “No autonomous weapons, except these….” If it turns out political circumstances dictate that certain things must be grandfathered, the negotiation of explicit allowances and limits is preferable to leaving open a road to the perfection and proliferation of autonomous weapons of every kind. We should not accept a diplomatic fig leaf behind which an autonomous weapons arms race can rage unrestrained.
The point of such negotiations should not be to draw a line defining autonomy; that has already been done. A system is autonomous if it is operating without further human intervention. But a “ban on autonomous weapons” might provide explicit exceptions either for particular systems or particular classes of systems, based on detailed descriptions of their characteristics. Philosophical clarity is not the issue, for we have already achieved that. From this point forward, it’s just old-fashioned horse trading.
Endnote. The Peace Research Institute of Oslo (PRIO) has recently entered this discussion with a briefing paper on “Defining the Scope of Autonomy: Issues for the Campaign to Stop Killer Robots,” authored by PRIO researcher Nicholas Marsh, whose past work has focused on the arms trade and small arms in civil war. In this report, Marsh discusses at length the problem of fire-and-forget missiles such as MBDA’s Brimstone, which has both man-in-the-loop and fully autonomous modes. The latter capability, Marsh points out, has already been used in the Libya campaign.
Marsh also discusses the British Ministry of Defence’s statement about autonomous weapons, contained in its 2011 Joint Doctrine Note on The UK Approach to Unmanned Aircraft Systems. I think he places a bit too much emphasis on the statements on page 2-3 of the report about autonomous systems being “capable of understanding higher level intent and direction” and being “in effect… self-aware” and “capable of achieving the same level of situational awareness as a human.” Whatever the authors of these statements had in mind – before the emergence of this issue into the public sphere – the British government has since issued statements indicating that it understands the issue of autonomous weapons in terms of human control, and that “the MoD currently has no intention to develop systems that operate without human intervention in the weapon command and control chain”. As Article36.org points out, this still leaves a good deal of ambiguity and wiggle room for the MoD, which is, as Marsh demonstrates, actually pursuing weapons and policies similar to those of the United States.
Marsh, like the Campaign itself to a large degree, grounds the issue in humanitarian disarmament, which he states “is designed to reduce human suffering rather than to manage the affairs of great powers.” Yet this prescriptive distinction is evidently problematic, since if humanitarian disarmament is to be effective in reducing human suffering, it must effectively restrict the actions of powers great and small; nor are these restrictions very different, other than in the particular weapons covered, from treaties not framed as ‘humanitarian’ in origin, such as strategic or conventional arms limits and bans.
In fact, most of Marsh’s paper points, whether intentionally or not, to the inadequacy of a model of humanitarian disarmament rooted only in the landmines and cluster munitions bans. These treaties do a great deal to reduce civilian suffering in war and its aftermath, but their restriction on the freedom of action of the major powers is relatively marginal – and those same powers nevertheless have largely refused to sign on, although their actions are to some extent inhibited by the norms that have been established.
In the case of autonomous weapons, we are seeking to call off the next great arms race.
Autonomous weapons are potentially the most important and foundation-shaking development in military and global security affairs in decades. They are not merely a threat to human security from their potential for indiscriminate slaughter, but are even a threat to international peace and security.
Therefore, we must address the issue in its widest scope, and on the basis of the strongest possible foundations: the principles of human control, responsibility, dignity, and sovereignty, and that conflict is human – and must not become autonomous.