Autonomy without Mystery: Where do you draw the line?

The words “autonomy” and “autonomous” can take on various shades of meaning and lead into fascinating discussions in contexts including philosophy, politics, law, sociology, biology and robotics. I propose that we should ignore all of this in the context of autonomous weapon systems (AWS).

We should also not speak of “degrees” or “levels” of autonomy, but seek principles and definitions that point, as directly as possible, to a Yes or No answer to the question “Is this an autonomous weapon?”

In fact, the US Department of Defense has given us a firm foundation for such a definition in its Nov. 2012 Directive on Autonomy in Weapon Systems. According to DoD Directive 3000.09, an autonomous weapon system (AWS) is

A weapon system that, once activated, can select and engage targets without further intervention by a human operator.

I propose that this should be universally accepted as a starting point.

What remains is to clarify the meaning of “select and engage targets” – and also, to negotiate which autonomous weapons might be exempted, and under what conditions, from a general ban on AWS.

Select and Engage

“Engage” is not too hard. Weapons engagement of a target means the commitment of an individual violent action, such as the firing of a gun, the dropping of a bomb or the crashing of a missile into a target, with or without the detonation of a warhead.

It is thus potentially a more fine-grained term than “attack” as used in international humanitarian law (IHL), the law of war. An “attack” might encompass many individual engagements, and accordingly an attack decision by a commander may authorize many engagement decisions, yet to be made, by individual soldiers or subsidiary commanders.

As used in DoDD 3000.09, engagement includes both lethal and non-lethal effects, against either personnel or materiel targets.

“Target” is slightly more difficult, if only because the status of an object as the target of a weapon is an attribute of the weapon system or persons controlling and commanding it, not of the object itself. When exactly does an object become a target?

Fortunately for present purposes, the word “target” appears in the Pentagon’s definition only as an object of the verbs “select” and “engage.” Therefore, we can say that an object becomes a target when it is selected; before that it may be a “potential target,” another term which appears in DoDD 3000.09.

Of course, selection must occur prior to engagement; an object harmed without having been selected is called “collateral damage,” be it a house, a garden, or a person.

The word “select” is where the serious issues hide, and where the clarity of DoDD 3000.09 begins to break down.

Please Select a Target

The Directive defines “target selection” thus:

The determination that an individual target or a specific group of targets is to be engaged.

But what constitutes “determination”? Is this an intelligence finding, a command decision given to the operator of a weapon system, or is it the operator’s command given to the system? It could be any or all of these, but if the operator is the final human in the so-called “kill chain” or “loop” (and the one person in whom all of these functions may be combined), then the transaction between the operator and the system is where we should focus our attention.

The classic way for a weapon system operator to determine that a target is to be engaged is to throw a rock or spear toward it, thrust a sword at it, or aim and fire a gun at it. The weapon may go astray, but there is no question that the intended target is selected by the human.

One may imagine that in a remotely operated, non-autonomous weapon system, target selection might mean that humans watch a video feed, and see something that looks, to them, like a valid military objective. A human commander determines (legally and operationally) that it is to be engaged. Using a mouse, joystick or touchscreen, a human operator designates the target to the system. This, plus some other positive action to verify the command, determines – materially – that the target is to be engaged.

However, DoDD 3000.09 admits a far more permissive interpretation of “target selection.” This becomes clear from its definition of “semi-autonomous weapon system (SAWS).”

Two types of SAWS are described. The first are

systems that employ autonomy for engagement-related functions including, but not limited to, acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.

This suggests that “target selection” could mean as little as giving a “Go” to engage after the system places a cursor over potential targets it has acquired, tracked, identified, and cued to human operators. Indeed, it is not at all clear what “human control… over the decision” means in this context.

The Meaning of Human Control

Noel Sharkey has discussed “meaningful human control” in terms of five “levels of control”:

  1. human deliberates about a target before initiating any attack
  2. program provides a list of targets and human chooses which to attack
  3. program selects target and human must approve before attack
  4. program selects target and human has restricted time to veto
  5. program selects target and initiates attack without human involvement

Sharkey focuses on deliberation as the thing that fulfills the requirements of IHL and makes “human control” meaningful. At level 1, the human “deliberates” and, like a rifleman, “initiates” the attack. At level 2, the human is more in the position of a senior commander, reviewing a “list of targets” and choosing among them. If no choice is made, there is no attack. However, in view of historical experience and “automation bias,” Sharkey concludes that the meaningfulness of this level of control is contingent. Machine rank-ordering of targets, for example, would tip the scale, as “there would be a tendency to accept the top ranked target.” Insufficient information, time pressure, or a distracting environment could also render this level of human control meaningless.

Sharkey rules out level 3 and higher levels of autonomy as not meeting the requirement for human deliberation. However, it is not clear how “program selects target” differs from “program provides a list.” The list would probably not be on paper. Could it be a “gallery” like a dating site? Or a textual list with hyperlinks to intelligence reports, summaries of which open up automatically when the operator hovers over the list item? Could the system provide both a visual display of targets, and a textual list? Wouldn’t the visual display likely convey more information than just a short verbal description?

The best interpretation of the difference between levels 2 and 3 is that in the former, the human “decision maker” is being told “You decide what to target; here are some possibilities.” In the latter, the human is being told “Here are the targets; which ones should be hit, or not?” It’s a subtle difference, and in every technical and operational respect, the two situations might be identical.

Thus, while the focus on human deliberation – and the factors that might encourage and assist, or discourage and hinder it – is important and essential, the effort to enumerate and order a definitive hierarchy of autonomy levels once again leads to further questions.

The desired number of levels is two. We want to know which systems are autonomous.

Similarly, we should seek a definition of “human control” which is free of degrees of meaning. Weapons and conflict must always be under human control, period. Given any configuration of technical systems and human beings, we need to be able to decide whether this principle is satisfied, or not.

Of course, people may have more or less control of a machine, as becomes clear when we think of situations that arise with automobiles or nuclear power plants. But again, it is probably not possible to iron out all the dimensions of complexity to create a definitive hierarchy of control levels. Is it clear that a console providing more data and more knobs equates to more control? Perhaps in general, but probably not always.

On the other hand, the operators of such machines accept moral and legal responsibilities to maintain control, and are held accountable for the consequences if they fail to do so.

I would propose, similarly, that acceptance of responsibility should be the standard for human control. Thus the principle of human control corresponds with the principle of human responsibility.

Just as commanders are not responsible for crimes committed by soldiers under their command, unless the commander directly ordered the criminal actions, so humans cannot accept responsibility for decisions made by machines. But if humans are to make the decisions, it is their responsibility to maintain control of the machines.

This makes it the responsibility of a human commander to determine whether she has sufficient information to make an attack decision, independent of any machine judgment. It also makes it the responsibility of a human operator to determine whether the system gives him effective control of its actions. Both should be held accountable for any “accidents.” It is not acceptable to say “the trigger was too sensitive” or “the computer said it was a valid target.”

This also suggests a system of accountability as a possible basis for the implementation of an AWS ban and verification of compliance with it.

Short-Circuiting Arms Control

The Pentagon’s distinction between semi-autonomous and autonomous weapons would also fail to support meaningful arms control.

The first type of SAWS includes systems which have every technical capability for fully autonomous target selection and engagement, requiring only a one-bit signal, the “Go” command, to initiate violence. A SAWS of this type could be converted to a fully autonomous weapon system (FAWS = AWS in the Pentagon’s lexicon) by a trivial modification or hack. Thus, permitting their development, production and deployment would mean permitting the perfection and proliferation of autonomous weaponry, with only the thinnest, unverifiable restraint remaining on their use as FAWS.
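
To see just how thin that line is, consider a minimal sketch in Python (purely hypothetical; the names engagement_authorized, human_go_signal and fully_autonomous_mode are invented for illustration and describe no actual system). The entire difference between the “semi-autonomous” and fully autonomous configurations can reduce to a single conditional check on a one-bit input:

    def engagement_authorized(human_go_signal: bool, fully_autonomous_mode: bool = False) -> bool:
        """Everything upstream of this gate -- acquiring, tracking, identifying,
        cueing and prioritizing targets -- is already automated. Only this check
        distinguishes the 'semi-autonomous' configuration from the fully
        autonomous one."""
        if fully_autonomous_mode:
            # The "trivial modification or hack": bypass the human's Go signal.
            return True
        return human_go_signal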

For the second type, there is not even a thin line to be crossed.

Fire and Forget

The second type of SAWS in DoDD 3000.09 comprises

“Fire and forget” or lock-on-after-launch homing munitions that rely on TTPs to maximize the probability that the only targets within the seeker’s acquisition basket when the seeker activates are those individual targets or specific target groups that have been selected by a human operator.

This means that SAWS, as defined by the Pentagon, would include a “homing munition” that, at some point after its launch, without further human intervention, acquires (i.e. detects the presence of, and roughly locates, via its seeker) a potential target or targets, thereafter tracks it or them, identifies it or them (either as particular objects or members of targeted classes, based on some criteria), and determines that one or more of the potential targets it has acquired and is tracking is to be engaged.

However, the definition (and thus the policy) does not regard this as “target selection” by the weapon system. Rather, selection is deemed to have occurred when the human operator launched the weapon, relying on tactics, techniques and procedures (TTPs) to “maximize the probability” that there are no “targets [sic]” which the seeker might mistake for the targets “selected” (i.e., intended) by the human.

The “seeker’s acquisition basket” may be regarded as a product of two functions (which may be further decomposable): the seeker’s inherent response to various objects – as viewed from various directions and distances under various conditions – and any constraints in space and time which are applied by the operator.
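
As a rough illustration of that decomposition (a schematic sketch only; the names Contact, OperatorConstraints and in_acquisition_basket, and the threshold value, are invented and describe no real seeker), an object falls within the acquisition basket only if the seeker’s inherent response flags it and it lies inside the operator-imposed limits of space and time:

    from dataclasses import dataclass

    @dataclass
    class Contact:
        signature_strength: float   # how strongly the object matches the seeker's response profile
        position: tuple             # (x, y) location of the object
        time_seen: float            # time at which the seeker observes it

    @dataclass
    class OperatorConstraints:
        # Space-time limits applied by the operator at launch (search area and window).
        x_min: float
        x_max: float
        y_min: float
        y_max: float
        t_start: float
        t_end: float

        def contains(self, position, t):
            x, y = position
            return (self.x_min <= x <= self.x_max and
                    self.y_min <= y <= self.y_max and
                    self.t_start <= t <= self.t_end)

    def in_acquisition_basket(contact, constraints, response_threshold=0.5):
        # The "product of two functions": seeker response AND operator-applied constraints.
        seeker_responds = contact.signature_strength >= response_threshold
        within_limits = constraints.contains(contact.position, contact.time_seen)
        return seeker_responds and within_limits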

For example, a “beyond visual range” (BVR) anti-aircraft missile such as AMRAAM or a “beyond line of sight” (BLOS) anti-ship missile such as LRASM (under development by Lockheed Martin for DARPA) follows a flight path roughly predetermined at the time of its launch, or it may receive updates telling it to follow a slightly different course, on the basis of new information that the target(s) have moved.

At some point, it begins looking for the target(s), that is, “the seeker activates.” This may in fact be a multistage process, as the missile approaches its target(s) and passive radar, active radar, IR and optical sensors successively come within range and tactical appropriateness (e.g., active radar may be avoided for reasons of stealth).

In effect, the seeker’s search area is restricted by the flight plan of the missile, which may also be programmed to deactivate if it fails to locate an appropriate target within a designated area or time frame.

Alternatively, a “loitering missile” or “wide area search munition” may be instructed to search within a designated “kill box.” Examples include Israel Aircraft Industries’ Harpy “suicide drone,” which searches for enemy radars based on the signatures of their signals, and Lockheed Martin’s canceled LOCAAS, which was intended to autonomously recognize and attack a variety of target types, including personnel.

This is the meaning of “rely on TTPs”; the operator is supposed to follow procedures which minimize the probability that objects which the seeker might mistake for an intended target will be found within the area and time in which the missile will be searching.

Thus, if the seeker is not itself capable of distinguishing a cruiser from a cruise ship, a tanker plane from a jumbo jet, or a soldier from a civilian, it is the weapon system operator’s responsibility to follow procedures that at least minimize the probability that the wrong kind of object will be found within the seeker’s acquisition basket.

In practice, seekers will, either explicitly or implicitly, map their sensor data, via some computation, to a Bayesian probability that a potential target is an intended target. Since the missile must either engage that target or not, some threshold will have to be set for the decision to identify a target.
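
For illustration only, here is what such a computation might look like in schematic form (the prior, likelihoods and threshold below are invented numbers, and posterior_intended_target is a hypothetical name, not any real system’s code): Bayes’ rule maps the event “signature matched” to a posterior probability, and a fixed threshold turns that probability into a yes-or-no identification.

    def posterior_intended_target(prior, p_match_given_target, p_match_given_other):
        """Bayes' rule: probability that a contact is an intended target,
        given that its sensor signature matched the seeker's criteria."""
        evidence = (p_match_given_target * prior +
                    p_match_given_other * (1.0 - prior))
        return p_match_given_target * prior / evidence

    # Invented example values: prior chance that a contact in the search area is an
    # intended target, and how often a signature match occurs for targets vs. others.
    p = posterior_intended_target(prior=0.3,
                                  p_match_given_target=0.9,
                                  p_match_given_other=0.05)

    IDENTIFICATION_THRESHOLD = 0.95   # some threshold must be chosen, explicitly or implicitly
    identified_as_target = p >= IDENTIFICATION_THRESHOLD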

Once the decision to engage a particular target has been made, the missile may be considered to have “locked on” to the target, which will be continually tracked without ambiguity. No new decision need be made, although complications may arise if the system is programmed to abort engagement under some conditions, such as its collection of further information about the target, or the possible presence of civilians.

Simpler missiles, in contrast, must “lock on” before launch, under direct human control and with a human operator selecting the target. If the target succeeds in evasive countermeasures, the missile may “break lock,” and then generally cannot re-acquire the target.

The Necessity of Decision

From this discussion, it becomes clear that the “lock-on-after-launch homing munition” does, in fact, make lethal decisions autonomously – it must decide whether to accept particular sensor data as the signature of an intended target and to engage that target, or, if there is more than one candidate, which one to engage; otherwise it may continue searching, or decide to abort its mission.

It must make these decisions without further intervention by a human operator. The authors of DoDD 3000.09 may have wished to avoid using the word “select” in this context, but particularly in the case that there are multiple objects which might be engaged, it is clearly the weapon that makes the final determination that a particular potential target “is to be engaged.”
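
Stripped to its bare bones, the logic such a weapon must implement looks something like the following sketch (hypothetical Python; seeker_decision_cycle and its inputs are invented for the example). The point is that the branch that commits to an engagement – including the choice among multiple candidates – is taken by the machine, with no human in the transaction:

    def seeker_decision_cycle(candidates, threshold, time_remaining):
        """One pass of the munition's autonomous decision logic.
        `candidates` is a list of (contact, match_probability) pairs produced
        by the seeker; returns an action and, if engaging, the chosen contact."""
        if candidates:
            # If more than one object clears the threshold, the weapon itself
            # determines which particular one "is to be engaged."
            best_contact, best_p = max(candidates, key=lambda pair: pair[1])
            if best_p >= threshold:
                return "ENGAGE", best_contact
        if time_remaining <= 0:
            return "ABORT", None        # e.g., deactivate or self-destruct
        return "CONTINUE_SEARCH", None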

It also becomes clear that the system which autonomously decides whether, and which potential targets to engage, is not being controlled by humans when it does so. It is controlled by environmental conditions which may be somewhat unpredictable, and by its programming – that is, by itself. It is an autonomous system. There is no mystery in this; it is simply an operational fact.

Some may argue that the program was written by humans, but this position makes machine autonomy impossible, since machines owe their entire existence to humans. Besides, it is by no means certain that, even today, let alone in the future, software will be written and systems engineered solely by humans. In reality, the complexity of present-day technology already precludes meaningful human control or accountability for aberrant behavior. No one is going to be court-martialed for a bug.

The classification of “fire-and-forget or lock-on-after-launch homing munitions” as semi-autonomous may be convenient for the Pentagon, and some AWS ban proponents may prefer to accept it as well, in order to avoid the fact that many existing missile and some underwater systems fall into this category. However, this is an attempt to avoid the unavoidable, because not only do these systems already cross the line of de facto autonomous target selection and engagement, but there is no recess in the domain of fully autonomous weapons which is not accessible by further extension of this paradigm. To put it bluntly and colorfully, this is a loophole The Terminator could walk through.

Inappropriate Levels of Robot Judgment

A missile like LRASM may not fulfill the popular conception of a “killer robot,” but as depicted in a video published by Lockheed Martin, it is designed not only to autonomously maneuver around “pop-up threats” posed by enemy radar and antimissile systems and to autonomously recognize the “hostile surface combat group” it was sent to attack, but also to discriminate the various ships in that group from one another, and determine which to engage. Several missiles might be sent in a salvo, and might coordinate with each other to determine an optimum target assignment and plan of attack.

The key fact is that these missiles would make final targeting decisions, based on fairly complicated criteria. DoDD 3000.09 places no upper limit on the sophistication of the sensors and computers or complexity of the algorithms and criteria for “lock-on-after-launch,” nor are “homing munitions” obviously limited to flying objects. If ships may be identified on the basis of signatures in their radar returns, might targeted killing not be carried out on the basis of face recognition and other biometrics?

More generally, might combatants not be identified by signatures of their behavior, dress, or any other cues that human soldiers rely on to distinguish them from civilians? Suppose a robot is sent out that is programmed to fire upon anyone who matches the biometric signatures of certain known hostile persons, or who is determined to be taking specific hostile actions. Is this not a fully autonomous weapon system?

The central conceit of DoDD 3000.09 is that “target selection” will be done only by humans or by autonomous weapon systems which are either anti-materiel weapons with non-kinetic effects, or else have been explicitly approved by senior officials; and in all cases, commanders shall exercise “appropriate levels of human judgment in the use of force.” But is there any limit to the levels of robot judgment that may be appropriately exercised by “semi-autonomous weapon systems”? DoDD 3000.09 provides no answer.

The negotiation of limits on SAWS is thus unavoidable if the project of a ban on FAWS is to succeed. It must be acknowledged that the distinction of the first kind of SAWS from FAWS is so thin as to be meaningless, both from the point of view of meaningful human control and of effective arms control. The distinction of the second kind of SAWS from FAWS is nonexistent.

The Line Drawn

I conclude that the first kind of SAWS must be subject to restrictions and accountability requirements if a ban on FAWS is to be meaningful; and the second type of SAWS must be acknowledged as actually a type of FAWS.

Is it too much to ask for a ban on missiles like AMRAAM and the autonomous Brimstone [see endnote], which already exist and have been used in war, or even missiles like LRASM which are close to reality? Perhaps, but arms control in the past has often entailed the scrapping of existing systems. Israel has already offered a successor to Harpy called Harop, which adds an electro-optical system and two-way control link so that the system is not necessarily fully autonomous.

It is also possible to say “No autonomous weapons, except these….” If it turns out political circumstances dictate that certain things must be grandfathered, the negotiation of explicit allowances and limits is preferable to leaving open a road to the perfection and proliferation of autonomous weapons of every kind. We should not accept a diplomatic fig leaf behind which an autonomous weapons arms race can rage unrestrained.

The point of such negotiations should not be to draw a line defining autonomy; that has already been done. A system is autonomous if it is operating without further human intervention. But a “ban on autonomous weapons” might provide explicit exceptions either for particular systems or particular classes of systems, based on detailed descriptions of their characteristics. Philosophical clarity is not the issue, for we have already achieved that. From this point forward, it’s just old-fashioned horse trading.

 


Endnote. The Peace Research Institute Oslo (PRIO) has recently entered this discussion with a briefing paper on “Defining the Scope of Autonomy: Issues for the Campaign to Stop Killer Robots,” authored by PRIO researcher Nicholas Marsh, whose past work has focused on the arms trade and small arms in civil war. In this report, Marsh discusses at length the problem of fire-and-forget missiles such as MBDA’s Brimstone, which has both man-in-the-loop and fully autonomous modes. The latter capability, Marsh points out, has already been used in the Libya campaign.

Marsh also discusses the British Ministry of Defence’s statement about autonomous weapons, contained in its 2011 Joint Doctrine Note on The UK Approach to Unmanned Aircraft Systems. I think he places a bit too much emphasis on the statements on page 2-3 of the report about autonomous systems being “capable of understanding higher level intent and direction” and being “in effect… self-aware” and “capable of achieving the same level of situational awareness as a human.” Whatever the authors of these statements had in mind – before the emergence of this issue into the public sphere – the British government has since issued statements indicating that it understands the issue of autonomous weapons in terms of human control, and that “the MoD currently has no intention to develop systems that operate without human intervention in the weapon command and control chain.” As Article36.org points out, this still leaves a good deal of ambiguity and wiggle room for the MoD, which is, as Marsh demonstrates, actually pursuing weapons and policies similar to those of the United States.

Marsh, like the Campaign itself to a large degree, grounds the issue in humanitarian disarmament, which he states “is designed to reduce human suffering rather than to manage the affairs of great powers.” Yet this prescriptive distinction is evidently problematic, since if humanitarian disarmament is to be effective in reducing human suffering, it must effectively restrict the actions of powers great and small; nor are these restrictions very different, other than in the particular weapons covered, from treaties not framed as ‘humanitarian’ in origin, such as strategic or conventional arms limits and bans.

In fact, most of Marsh’s paper points, whether intentionally or not, to the inadequacy of a model of humanitarian disarmament rooted only in the landmines and cluster munitions bans. These treaties do a great deal to reduce civilian suffering in war and its aftermath, but their restriction on the freedom of action of the major powers is relatively marginal – and those same powers nevertheless have largely refused to sign on, although their actions are to some extent inhibited by the norms that have been established.

In the case of autonomous weapons, we are seeking to call off the next great arms race.

Autonomous weapons are potentially the most important and foundation-shaking development in military and global security affairs in decades. They are not merely a threat to human security, given their potential for indiscriminate slaughter, but a threat to international peace and security itself.

Therefore, we must address the issue in its widest scope, and on the basis of the strongest possible foundations: the principles of human control, responsibility, dignity, and sovereignty, and the principle that conflict is human – and must not become autonomous.

7 thoughts on “Autonomy without Mystery: Where do you draw the line?”

  1. This is an interesting and useful document and clearly I agree with much (but not all) of what you say as we are on the same side.

    However there appears to be some misunderstanding of the purpose of my paper, ‘Towards a principle of human control’ http://bitly.com/QqvUoi [link in post fixed – thanks -MG] which I would like to point out here, as I do not believe that it falls foul of your position. I would like to have the opportunity to discuss this with you face to face but since it is now in the public domain, I will explain what was intended. It is a little more nuanced than the limited space that you were able to give to it here allows.

    We both agree with your point: “Given any configuration of technical systems and human beings, we need to be able to decide whether this principle is satisfied, or not.” And this is precisely what the paper is about.

    The misunderstanding appears to be that the paper is about ‘levels of meaningful control’. That is not the case. On p. 15: ‘Thus the main aim of this article was to pull apart and examine the minimum necessary conditions for the notion of meaningful control.’ Note that I do not say that these are ‘sufficient conditions’.

    The point is not to offer a definitive set of levels but rather to use current military methods of supervisory weapons control to probe the US and UK about what they mean by ‘appropriate levels of control’. We don’t want to simply accept the binary notion of control or no control. It is not just a matter of having someone stuck somewhere in the control loop and calling it human control.

    This is an attempt (wearing my psychologist’s hat) to unravel some of what “meaningful control” might mean by utilizing both the psychological literature on human reasoning and the literature on human supervisory control.

    So the objective was to set up some standards for control and then see how the different levels of weapons supervision used by the military meet those standards (it has nothing to do with levels of meaning).

    As a first step this is the minimum standard that I offer:

    “it is critically important to adhere to strict requirements of deliberative human control as described in the previous section: a human commander (or operator) must have full contextual and situational awareness of the target area at the time of a specific attack and be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack. There must be active cognitive participation in the attack and sufficient time for deliberation on the nature of the target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack. There must also be a means for the rapid suspension or abortion of the attack.”

    I then examine military supervisory control methods against this standard and determine if they are up to scratch. That is the point of the work. Again I will not insist that this standard is absolute – it has been reviewed by a number of people and discussed at several talks. So it will be undergoing refinement and change. However I would argue that it is the minimum necessary standard. You may not agree with it but we have to start discussion somewhere.

    A secondary aim of the paper was to help less technical people come to grips with what is meant by autonomy in weapons control. People at the conferences I attend often get very confused about levels of autonomy (nothing to do with me) in their discussions. This might be the part that you picked up on about levels in the paper.

    Page 5: “There has been considerable confusion over the terms “autonomous” and “semi-autonomous” in discussions about weapons systems. Outside of technical robotics circles it often leads to discussions about self-governing and free will. This confusion is not helped by the many classification systems developed in an attempt to define autonomy and semi-autonomy in terms of different levels. For example the US Navy used three levels while the US Army used ten. This could be very confusing for a military commander having to work with several systems at different levels.

    A reframing of autonomous/semi-autonomous operation is proposed here in terms of levels of human supervisory control. This gets around the jargon and makes clear the important aspect of who is in control and how. This makes the command and control of computerised weapons systems transparent and maintains clear accountability.”

    I would like to suggest that we are both on the same page here, Mark; we are just taking different approaches to the problem.

  2. Thanks for your comments, Noel. Of course we are on the same side and have only slightly different approaches. I think that your minimum standards for human control to be meaningful are valid, and well-grounded in psychology and experience with supervision of automated systems. However, I would prefer to avoid complicating the issue of human control by introducing qualifiers like “meaningful.” If I read some ad copy for MBDA telling me that their interface for Brimstone ensures “meaningful human control,” I would immediately suspect the opposite… i.e. that “meaningful” implies a lower standard than absolute human control.

    Apart from prescriptive standards like the ones you propose, which are useful but always necessarily inexact and subject to further questions and hair-splitting, I proposed that the principle of human control invokes a parallel principle of human responsibility, and that the standard for human control should be the willingness of commanders and operators to accept personal responsibility and accountability.

    This is also the “no-accidents principle” that I have suggested. If the operator does not believe the system gives him control over its actions, it becomes the responsibility of the operator to refuse to operate the system as a weapon. If the commander does not believe she has sufficient information, independent of machine judgment, to make a judgment for the use of force, it is her responsibility to refuse to issue an engagement order. It is not acceptable to blame inadequacies of the machine, or say that the machine indicated a valid target or that the interface was confusing or too sensitive, causing an unintended fire event.

    I think this solution avoids ambiguity in its statement and makes sufficient demands for accountability while still allowing enough flexibility to be implementable in practice.

    • Yes, you are a lot more high-level, which is complementary and useful in its way. But working at the coal face, the military need guidance about what kind of human supervision is necessary to keep within IHL, and we also need to help in legal weapons reviews.

      I like the term meaningful human control in the way that our campaign NGO Article 36 introduced it. It has now become a widespread term among national delegations and in the legal sphere with folks such as Christof Heyns. It implies that we need to look a lot more carefully at statements saying that there is a human in control of a weapon. Is it in a way that allows for appropriate levels of human psychological processing, or is it just someone making fast decisions? These are important factors when you get down to negotiations.

      • You may recall, Noel, that my original title for what became the Berlin statement was “The Principle of Human Control over Weapons and All Technology.” That got stripped off from the statement as adopted by the meeting but I later wrote about the principles of human control and human dignity in my 2012 essay “The Principle of Humanity in Conflict.” I recalled this as I was listening to the closing statements on Friday; one delegation after another spoke of “the principle of human control” although some also used the phrase “meaningful human control.” I did a quick Google search on “principle of human control” restricting the dates to 2013 and earlier. You might try this yourself. It’s also interesting to try “human dignity” + “autonomous weapons”.

  3. Mark,
    I appreciate your thoughtful comments on autonomy and meaningful human control, and it is clear that you have given this significant consideration. I believe that the issue you are addressing is precisely the core of the question — when has a human made a decision to select a particular target for engagement, and when has the machine made that decision?

    However, I think that your analysis of fire-and-forget lock-on-after-launch weapons is based on a mistaken understanding of how the weapons are used. If such weapons were used in a blind fashion — that is, fired over the horizon into an area where enemy forces were suspected, but human operators were not aware of specific forces — then you would be correct. They would be used in an autonomous fashion. The machines would then be selecting their own targets and they would be de facto autonomous weapons. That is not, however, how they are used. In fact, most such weapons have narrow fields of view for their seekers and short times of flight, so if they were used in that manner — blindly launched into the general area of possible targets — they probably would be wasted weapons. There are obviously strong operational reasons for militaries not to do that.

    In fact, homing munitions are used to home in on targets that have been chosen by humans. Homing munitions have been in wide use since World War II. Sometimes these weapons lock on to a target before launch. In such cases, a potential target is identified by a ship’s, aircraft’s, or submarine’s sensors (say, a radar), a person selects the target to be engaged, and then that target data is passed to the munition (a missile or torpedo), which then tracks after that target.

    In other cases, however, the weapon acquires the target after launch. In such cases — lock-on-after-launch homing munitions — the ship’s, aircraft’s, or submarine’s sensors identify a target, a person decides which target he or she wants to engage, and then the person launches the homing munition in the direction of that target. The munition itself does not have data on the specific target at the time of launch. It simply knows to maneuver to a particular area and then activate its seeker. The way that the human operator ensures that the munition engages the correct target and not an incorrect one is by ensuring that the weapon is not launched into an area where there are other, potentially conflicting, targets that the person does not want to engage. This is what is meant by “tactics, techniques, and procedures (TTPs).”

    Let me give an example of how this might work. A pilot might get a radar indicator of a hostile enemy aircraft. (Note that if this is beyond visual range, then this is information conveyed by a machine.) The pilot makes a determination whether it is appropriate to engage this particular aircraft, or group of aircraft. If the pilot decides that it is appropriate to engage them, then the pilot launches an air-to-air homing munition at the enemy aircraft. It is not necessary for the specific targeting data on the enemy aircraft to be passed to the missile. If the pilot knows, based on an understanding of the missile’s functionality, that the missile will fly to a point in space and then activate its seeker and, as a result, will engage the enemy aircraft and not other, undesired, targets, then that is sufficient. The weapon will engage the target, or targets, the pilot has selected.

    When we look at this example, let’s ask: Who made the decision to engage the enemy aircraft? It is clear that the pilot made that decision. The pilot knew which enemy aircraft he or she wanted destroyed, and sent the missile to do that. I would call that a semi-autonomous weapon system, because the decision to select and engage a particular target, or group of targets, was made by a person.

    Let’s contrast that with an example of a weapon that selects its own particular targets, the Israeli Harpy. The Harpy is a loitering wide-area anti-radiation munition. It is launched in the general area where enemy radars are believed to be and it flies a search pattern over a wide area. When it encounters an enemy radar, it then dive-bombs into the radar and destroys it. In this case, the person launching the weapon does not know what specific enemy radars will be engaged. If he or she did, then they could scope the weapon’s field of view more narrowly to target that specific radar. But radars are often mobile, and their operators use clever tactics like turning them on only intermittently, so a system like this is used to cover a wide area where the human believes there probably are enemy radars, but does not have information about specific radars to be targeted.

    In this example, the decision that the human makes in launching the weapon is quite different from the example of the lock-on-after-launch homing munition. In the air-to-air example, the pilot knew which specific enemy aircraft he or she wanted to engage. The homing munition was then used to engage that aircraft. In the case of the Harpy, the human operator does not know which specific radar he or she wants to engage. The human operator merely knows that he or she wants to engage any enemy radars within this general area. The machine is deciding which specific radars to engage.

    What this suggests is that focusing solely on the technology itself without an understanding of how it will be used can lead to misleading conclusions. Which makes sense. If we want to understand what is meaningful human control, we need to look at the role of the human, not just the machine. We must ask, “What decision is a person making if he or she employs this weapon?” The decisions a human makes in employing a semi-autonomous vs. an autonomous weapon are different. When a person employs a semi-autonomous weapon system, he or she knows which particular target or specific group of targets he or she intends to engage. When a person employs an autonomous weapon system, he or she knows that some targets of a certain type will be engaged, but not which particular target or group of targets.

    As we grapple with these terms, I think it’s worth taking a step back to consider what is motivating this debate. I am currently at the UN CCW discussions as I type this. There is a general understanding that concerns about autonomous weapons, or “killer robots,” are motivated by potential future weapons, not what exists today. Now, it is entirely conceivable as we move forward in this discussion that there may be isolated systems that exist today that fall under whatever definition the international community eventually arrives at for an autonomous weapon. As I mentioned above, I think that the Israeli Harpy system today meets that definition, as would the original concept for the LOCAAS. However, if we come up with a definition that captures a large number of weapons that are widely used by militaries around the globe, that have been since World War II, and that have not heretofore been considered objectionable, I think we ought to take a step back and consider whether we have gotten it right. Lock-on-after-launch homing munitions like air-to-air missiles or torpedoes have been in wide use for nearly a century and have not prompted concerns about the legality, ethics, or morality of their use. I think it is worth asking what is different about autonomous weapons, or “killer robots,” as we envision them from these existing weapons. I think there are meaningful differences. I believe that a weapon that selects its own targets for engagement is significantly different than a homing munition that is designed to go after a target that has been chosen by a person, but simply locks on that target after launch. The difference is in the type of decision made by the human, and whether the decision to engage a particular target was made by a human or a machine. And that, I believe, is the essence of the issue.

  4. Paul, thanks for your comments. As I wrote in the post, your work on DoDD 3000.09 produced what I believe is a remarkably clear foundation. So I wonder why you would be unwilling to pursue your own logic to its clear conclusions, as I have done here.

    You now argue that a weapon that is designed to search within a limited space-time domain and to autonomously determine that a certain object is to be engaged, based on a sensor signature, is not an autonomous weapon, even though it makes that determination autonomously. This requires you to argue that there is some clear distinction between a wide-area search munition and one that has a more limited search area. Is the distinction you have in mind the size of the search area? What space-time size limits would you propose? What would be the logic of such a definition?

    Alternatively, you seem to want to argue that if a particular object is the intended target, chosen by a human, and the weapon only seeks and finds that particular target, then it was the human who determined that the target was engaged, and not the machine’s decision that its sensor data matched the signature of the intended target. Is there any limit on the complexity of such a search before it becomes an autonomous weapon? Can it be a robot that searches a city for a particular person, then kills her?

    I propose a simpler approach: Take your definition at face value. Target selection occurs at the final decision point, even if there were earlier ones (such as the receipt of intelligence reports, the compilation of potential target lists, etc.). The class of autonomous weapons is then very large, but importantly, it encompasses everything that we must prohibit. It may also encompass some things that we may prefer not to prohibit. Under a categorical autonomous weapons ban, specific exceptions, guided by certain principles, can be enumerated.

    So, some types of fire-and-forget missiles might be exempted from an autonomous weapons ban. These types can be identified either by specific enumeration or by generic descriptions of their characteristics, or both. This approach has the advantage of conceptual clarity, negotiability, and capture of things not described or enumerated. It allows us to draw a clear line and place a high bar in the path of the arms race.

  5. Pingback: Banning Lethal Autonomous Weapon Systems (LAWS): The way forward | ICRAC
