Humans Making Decisions

The Future of Life Institute has posted on its website an “Open Letter from AI & Robotics Researchers” calling for “a ban on offensive autonomous weapons beyond meaningful human control.” It’s generally an excellent letter. I’ve added my signature, and urge you to add your own to the list, which is already hundreds of names long and growing rapidly – you don’t have to identify as an “AI & robotics researcher,” although most of the signatories do.

The wording of the letter does raise some questions that its authors may have found convenient to avoid, but which must be addressed as we move toward the realization of a ban on autonomous weapon systems (AWS). To define the issue it is addressing, the letter states that:

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The first part of this is adapted from the language of Department of Defense Directive 3000.09, which, as I’ve written here and in the Bulletin of the Atomic Scientists, hides the entire issue of what constitutes human control behind the ambiguity of its non-definition of “target selection.” The second part brackets this ambiguity but is somewhat ambiguous itself. Some of the questions that remain unanswered here include:

  • Does the category of autonomous weapons always include systems that “search for and eliminate people,” or only if “certain pre-defined criteria” and not other kinds of criteria are used? In particular, if the criterion was identity, i.e. that the person was a known individual that humans had already made the decision to kill, and if a robot could identify that person with high reliability, should that be allowed? What if the target was not a person, but a particular object, or any object of a certain kind?
  • Does the category of “cruise missiles” include weapons that autonomously hunt targets within some area? Is it allowable to strike military targets, which may have people in or around them, on the basis of “certain pre-defined criteria”?
  • What do we mean by “humans make all targeting decisions”?

According to the US policy, in the interpretation of its author Paul Scharre and his coauthor Michael Horowitz, if a human has decided that a target should be attacked, it is acceptable to send a machine to hunt, identify and attack that target fully autonomously. The policy calls such weapons “semi-autonomous.” Inevitably, the systems must make their identification decisions on the basis of applying certain pre-defined criteria to information derived from sensor data.
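
To see concretely what “applying certain pre-defined criteria to information derived from sensor data” can amount to, here is a minimal, purely hypothetical sketch in Python. Every name, field, and threshold below is invented for illustration and is not drawn from any real system or from the directive; the point is only that whether a human nominated one particular object or a whole class of objects, the logic the weapon executes after launch is the same criteria-matching over sensor-derived data.

```python
# A minimal, hypothetical sketch -- every name, field, and threshold here is
# invented for illustration and describes no real system or policy language.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetSpec:
    """The pre-launch human 'decision': criteria, not a particular object."""
    object_class: str        # e.g. a vehicle type the sensors can classify
    search_area: tuple       # (lat_min, lat_max, lon_min, lon_max)
    min_confidence: float    # the classification error rate we will tolerate

def matches(spec: TargetSpec, detection: dict) -> bool:
    """The identification decision the weapon makes from sensor data alone."""
    lat_min, lat_max, lon_min, lon_max = spec.search_area
    in_area = (lat_min <= detection["lat"] <= lat_max
               and lon_min <= detection["lon"] <= lon_max)
    looks_right = (detection["class"] == spec.object_class
                   and detection["confidence"] >= spec.min_confidence)
    return in_area and looks_right

def engage_first_match(spec: TargetSpec, detections: list) -> Optional[dict]:
    """After launch, no human is in this loop; the machine picks the object."""
    for d in detections:
        if matches(spec, d):
            return d
    return None
```

Nothing in such code distinguishes a “semi-autonomous” weapon from a “fully autonomous” one; the difference lives entirely in how the specification was written before launch.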

What if humans have decided that any tank, ship or plane of a certain type, found within certain geographical boundaries, is an enemy target and should be attacked? Scharre has told me privately that he does not believe the policy allows this, but it explicitly allows “target groups” to be “selected” by humans and their engagement to be delegated to machines. In practice, since particular objects can only be recognized by their physical characteristics, if there is more than one of a type in an area – as will inevitably occur in war – the use of such weapons inevitably entails their selecting one object or another for engagement.

Thus, it is not enough to distinguish autonomous weapons from those for which “humans make all targeting decisions” without engaging what this means in detail. Leaving this question unaddressed risks being presented with an “autonomous weapons ban” or “meaningful human control” law that is effectively meaningless because it allows killer robots to make every kind of tactical targeting and engagement decision provided that, under some doctrine, the decisions the robots make are the ones humans intended they should make.

To accept this would be in effect to say that killer robots are just fine, as long as they work as intended. To put this another way, the definition of a “semi-autonomous weapon system” appears to be “a fully autonomous weapon system that hasn’t malfunctioned yet.”

One high-ranking activist has told me that it’s not the role of campaigners to stake out the details of an agreement, because the governments will do this and negotiate among themselves. I question this: shouldn’t civil society be alert to the danger of solutions that only paper over the issues we’re concerned about, and shouldn’t we be clear with governments about what we want and what we would find inadequate or unacceptable? And isn’t the time to start being clear about this right at the beginning, when the issue is being framed in people’s minds and in the language that will later be used to discuss solutions?

In a recent article in The National Interest, Scharre and Horowitz pressed for surrender on the issue they have defined as one of “advanced precision-guided munitions” or, in the language of DoDD 3000.09, “fire-and-forget or lock-on-after-launch homing munitions” which the policy classifies as “semi-autonomous”:

Ban proponents should also clarify whether they see automated air and missile defense systems and fire and forget homing munitions, both of which have been used for decades by dozens of militaries, as problematic and targets of a ban.

Since the Campaign to Stop Killer Robots has identified its objective as a ban on “fully autonomous weapons,” it might seem logical for the Campaign to say it’s not concerned with “semi-autonomous” weapon systems. But this would be allowing Scharre and the Pentagon to define the terms used and the boundaries of our concern.

Rather, we should reject any lexicon in which systems that operate and make lethal decisions fully autonomously can be classified as less than fully autonomous weapon systems. And we should clarify that the decision to identify a bunch of sensor data as representing a particular object that is intended as a target is a lethal decision and one for which a human being must be held accountable.

The presumption must be that any systems that make such decisions are to be prohibited unless explicitly permitted (under stringently defined conditions). Permitted systems might include “grandfathered” fire-and-forget missiles of very limited capability, or purely defensive interception systems against uninhabited incoming munitions, provided such systems operate under accountable human supervision. As I’ve written here, this is a level of detail that will have to be negotiated, but civil society needs to be a part of that negotiation, and needs to be alert and clear about not giving too much away.

Again, the presumption should be to ban every kind of lethal decision by machines, and require accountable human control at every significant decision point. Any exceptions would be concessions for the sake of getting an agreement; but to concede that machine decisions are always OK if they correspond with human intentions (some go so far as to suggest human desires or even human values as interpreted by machines) would be to concede everything of principle, and reduce the issue to an engineering problem, one whose solubility would imply the acceptability of a future in which conflict and war could be up to 100% fully automated and entirely out of human control.

ver. 1.2

6 thoughts on “Humans Making Decisions”

  1. [comment edited for civility]
    I am unclear about what your problem is with a weapon hitting the intended target. Surely a weapon that can adjust its course to hit a chosen target is better than one that stays on course and hits an arbitrary target.

    You seem to be erecting a logical fallacy here based on a misunderstanding of the notion of autonomy in robot systems. It is erroneous to suggest that such a weapon is making a lethal “decision,” since the targeting decision has been made prior to launch by humans.

    I am more convinced by the case about lethal decisions being delegated to machines after launch. To say that a semi-autonomous weapon is one that has not malfunctioned yet seems to miss the point that a fully autonomous weapon is not directed to a specific target but allowed free rein to find a target.

    I agree that such weapons should be banned even if they have hit legitimate targets. The only way that these could be termed semi-autonomous would be if the users were lying.

    This is an entirely different problem from the one that you are discussing here.

    From my understanding, nations will devise their own definitions and treaties regardless of what you say or write. It is then up to civil society to push for change in the definitions until they achieve what they want.

    • The problems that you seem not to see are lurking in your own words: “the intended target”… “a chosen target”… “the targeting decision”… “a specific target” vs. “free rein to find a target.”

      I am talking about weapons that are given free rein to hunt and find a target, operating fully autonomously and outside of human supervision or control. Under the US policy directive, such weapons may be considered “semi-autonomous,” and these include a number of systems that already exist as well as others in development in the US and other nations.

      From the point of view of the weapon, or if you prefer, of the designers of the weapon, it is irrelevant whether the object that the weapon is seeking has been chosen by a human. The weapon must operate autonomously, and must make the decision that it has found the correct target, or a correct target, without further human intervention. It will only be able to make this decision on the basis of criteria such as the general location, physical description, and possibly the behavior of the target, as estimated by onboard computers on the basis of sensor data, i.e., all the same kinds of criteria that a “fully autonomous” weapon would use to decide which target(s) to attack.

      I agree that it is hard to object on pure civilian protection grounds to making weapons less likely to stray and hit a wrong target because they can recognize and home on their intended targets. However, if there is a meaningful distinction to be made between such homing munitions and what we would call “fully autonomous” weapons, the US policy fails to make that distinction. Its “semi-autonomous” rubric permits everything that anyone wants to do today and opens the door to everything we are worried about in the future.

      I am calling attention to that and calling for autonomous weapons opponents to be alert and engaged on this issue so that the US and other governments can’t get away with declaring that they’ve banned killer robots or required human control while actually legitimizing killer robots and redefining human control to something that does not stand in their way.

      Some people may be OK with fully autonomous weapons that select their own targets, as long as those targets are the ones we wanted. Since in practice some error rate will be tolerated, this reduces the entire issue to an engineering problem and a question of what error rate we consider tolerable. In the long run, this will lead to full automation and loss of human control in war and possibly the initiation of war.

      I don’t agree that it would be best to ignore these problems until after a treaty has been reached, which would suck the oxygen out of the issue and let people pretend the problem has been solved.

  2. [comment edited for civility]
    Let us get down to what you mean by weapons making a decision to kill.

    If I send you to get some shopping from a mobile grocery cart and it is not at the location that I sent you to, but you notice that it is 5 yards further up the road, do you do what I sent you to do, or are you “deciding” to walk 5 yards further up the road to complete your mission?

    In other words, what do you mean by a weapon deciding? In the case you cite, where a computer program redirects a weapon to the intended target, it is deterministic and no decision has been made except in a very narrow computer science sense.

    It is not selecting its own target. Someone else is selecting its target.

    As an aside, weapons do not have their own targets. They are inanimate objects and don’t own anything. I think that linguistic precision is key to a coherent discussion.

    • Humans make a lot of decisions, big ones and small ones, in the course of every activity. Robots will have to do the same. The question here is which decisions do we want to keep under human control.

      To extend your own metaphor, suppose I don’t find the grocery cart where I was told it would be, so I look around, and see a grocery cart nearby. If it matters whether I buy from one cart or another (surely it does matter whether I blow up one ship or another), I have to decide whether this is the cart I was sent to, which has simply moved, or whether it is another one. If my boss is very particular about which cart has the best goods, I might lose my job for making the wrong judgment.

      I don’t know what you mean by a “narrow computer science sense” of making a decision. There is an actual decision being made. In war, that means a lethal decision.

      Moreover, the use of autonomous weapons in war implies that AWS will not just be identifying unique targets but also classes of targets, since they can only identify anything based on sensor data and many things will be indistinguishable, such as all tanks or planes or ships of a certain type. And if there is more than one, the weapon will have to choose or prioritize.
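
      To make the “choose or prioritize” point concrete: here is a deliberately toy, hypothetical sketch (every name and number is invented, not taken from any real system). Whatever rule breaks the tie between indistinguishable matches, it is the machine, not a human, that applies it.

```python
# Hypothetical sketch (names and numbers invented): when several objects
# satisfy the same pre-defined criteria, some rule in the software must
# pick one. That tie-breaking rule is itself a targeting decision.
def prioritize(candidates):
    """Rank indistinguishable matches, e.g. by classifier confidence and
    then by distance; no human sees or reviews this choice."""
    return min(candidates, key=lambda c: (-c["confidence"], c["distance_m"]))

# Two tanks of the same type detected inside the search area:
detections = [
    {"id": "A", "confidence": 0.93, "distance_m": 1200},
    {"id": "B", "confidence": 0.95, "distance_m": 4000},
]
chosen = prioritize(detections)  # the machine 'selects' detection B
```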

      If we allow this, we are opening the door to every kind of tactical target selection and prioritization by machines. We need to draw a line somewhere, and that requires that attention be paid to the difficulty of this issue. On the other hand, we have people trying to hide that and leave the door open until it’s too late to stop all the things that are already getting out.

      You insist that “someone else is selecting” but you are overlooking the ambiguities involved.

      I agree that we need more precise language, as well as clearer conceptualization of the problem. That’s my whole point.

      • [Edited for civility; replies in context because the commenter insists rather than reflecting on points already made. -MG]

        There are no ambiguities involved if a weapon is sent to hit x and it hits x.

        The ambiguities are:

      • How does the sender specify x?
      • How does the weapon determine that the sensor data it is receiving correspond to the specification of x?
      • What limits are placed on the answers to these questions?

        That fits the bill for your semi-autonomous weapon. If it is sent to hit x and it hits y (chooses a different grocery cart), that is either an error or it is an autonomous weapon.

        As my posts have repeatedly argued, I think we ought to assign the status of “autonomous” based on the operational fact that a weapon can act independently of human intervention.

        A narrow CS sense of decision is a decision tree or a group of conditionals that determine the direction a program takes. This is quite different from a human decision which can involve thought and reflection. When we program a computer we embed it with our pre-made decisions. It is not making decisions by simply following paths in its given decision tree.

        Your insistence on denying the use of the word “decision” in the context of machine decision is, in my view, unjustified. But feel free to suggest another word that you would be more comfortable with.

        The problem is with the complexity and what kind of decisions are being offloaded onto the program.

        I agree. That’s what I’m saying: the problem needs to be addressed.

        An autonomous weapon has choices; your version of semi-autonomous does not. What it strikes is entirely determined.

        One could as well argue that any plausible “fully autonomous” weapon is “entirely determined” and by exactly the same combination of factors: its programming, the environment via sensor data, and its internal state including any special instructions, what has previously happened, “machine learning” etc.

        The only problem here would be if you believe that the weapon is not capable of reacquiring its moved target.

        In other words, if the system does what you want, you are OK with it making lethal decisions fully autonomously. Unless you limit the scope of that, you are opening the door to everything. Limiting the scope is not a terribly hard technical problem, but it needs to be done simultaneously in three different contexts: engineering, diplomacy, and public politics. That’s what’s hard.

        I think your comments illustrate that people have a hard time grasping this issue and that what I am saying needs to be said over and over.

  3. I find the issue very easy to grasp. I think that your arguments need a little revision to prevent confusing two different classes of weapon.

    Whether there is clarity about “two different classes” is the issue in dispute here. I think that it is possible to categorize weapons into an unlimited number of different classes; whether that is useful is another matter. If we are to have a treaty prohibiting “autonomous” or “fully autonomous” weapons, a line does need to be drawn between what is to be allowed and what is to be prohibited. The whole point of this discussion is that the terms used to date fail to draw such a line clearly, and in particular the concept of “semi-autonomous” weapons as defined in the US policy allows everything that anyone wants to do in the near future and can be gradually stretched until it allows anything that anyone thinks they mean by “fully autonomous.”

    Several important chunks have been edited out of my responses to enable you to promote your ideas in a lofty way. This is not a discussion. It is a lecture.

    I invite Mr. Crumhorn to submit any comments of substance regarding the issues discussed in my blog. If he feels I have edited out something substantial and on-topic, I invite him to resubmit. However, I do not welcome empty insults, fatuous dismissals of my work, or lectures on my proper role as Mr. Crumhorn sees it. He’s welcome to do that on his own personal blog.

    I also urge Mr. Crumhorn to sign the FLI letter.
