Don’t Vote for Killer Robots

Don’t vote against them, either. Because actually, you can’t. Autonomous weapons, or killer robots as they are known among the educated, are not an issue in this election.

So it is rather odd that Heather M. Roff, a member of the International Committee for Robot Arms Control, teamed up with noted Future of War pundit Peter W. Singer to write an article that appeared in Wired magazine Sept. 6, arguing that “Hillary Clinton or Donald Trump will need to decide what the US policy on killer robots will be within the first year of their term.”

Neither Hillary Clinton nor Donald Trump (who appears to be toast anyway) [Postscript: Well, you know.] has, to my knowledge, made any clear statement as to what their policy on the use of artificial intelligence in weapons would be.

Clinton is noted for her hawkish posturing and her strong ties to the tech sector, and her administration will likely be as enthusiastic for everything “smart,” from phones to drones, as Obama’s drone-happy smartwarriors are. Trump hopes to “do business” with Vladimir Putin and Xi Jinping, but he also talks about the importance of “rebuilding our military” and hardly seems likely to embrace a ban on autonomous weapons, even if it were at all likely that he’ll make it past the White House fence ever again. [woops]

The US already has a go-go policy on this, fully committed to developing, acquiring and using “autonomous and semi-autonomous” weapons. The policy is enshrined in Department of Defense Directive 3000.09, “Autonomy in Weapon Systems,” issued November 21, 2012. More recently, the Pentagon has embraced the notion of a “Third Offset Strategy” aimed mainly at Russia and China, and the centerpiece of this technology thrust is the use of artificial intelligence in autonomous fighting systems.

But according to Roff and Singer, the 2012 Directive “has a 5-year limit,” and therefore “terminates in 2017,” creating a “forced deadline” for setting a new policy. That makes the issue seem urgent (it is), and possibly one that people still deciding how to vote in this election should think about (they don’t), or that should have been raised in one of the debates (it wasn’t).

There is just one problem with Roff and Singer’s argument: it is entirely false.

The 2012 Directive does not expire next year, although it is supposed to be reviewed by then. According to a report on “Preparing for the Future of Artificial Intelligence” issued by the White House in October, that review is now underway. But if no action is taken, the policy remains in effect until November 2022. Don’t take my word for it; take it from then-Deputy Secretary of Defense Ashton Carter, just above his signature on the document itself: the directive must be reissued, cancelled, or certified current within five years of its publication, and if it is not, it expires effective November 21, 2022.

In the real world, a policy like this can be reissued, cancelled, certified current, or replaced by a new policy whenever the political stars line up. A full-blown policy review might take some time, but the existing policy does not expire until 2022. The “ticking time bomb” Roff and Singer describe won’t go off until then at the earliest — if indeed there are any dire consequences to going without a national killer robot policy for a time.

Roff and Singer call setting America’s direction on autonomous weapons “perhaps the most important decision [the next president] will make for overall human history.” If so, why should Clinton be urged to slog through all the strategic, technical, ethical and legal issues, complications, ambiguities and conflicting agendas at the heart of the killer robots debate within ten months, and reach a decision that’s likely to stay with us through many more years of rapid technological, military and political change? What’s the rush?

More to the point, why should we assume that such a fundamental and consequential decision is up to the winner of the 2016 presidential election, rather than to all Americans, and to people all over the world as well?

What bothers me most about the proposed rush is that any new “government-wide policy” of the kind promised by the Obama administration’s “Preparing” report, if issued within the next year, is unlikely to shift substantially, or in a positive direction, from the position of the 2012 Directive. It is likely to be little more than an affirmation of the existing one.

The American people, although opposed to autonomous weapons according to the only scientific poll that has been taken, are still not significantly engaged with this issue — as indicated by the fact that it has not come up in the election. We need a national as well as a global debate on this, and thus far it has hardly even begun.

Opinion leaders in the national security sphere, by and large, seem to support the current approach, which is strongly committed to the development, acquisition and use of “autonomous and semi-autonomous weapons.” That support persists despite widespread unease, including within the military, about what Air Force Gen. Paul Selva recently called the “Terminator conundrum” of death machines “without a conscience.”

Military leaders routinely offer vaguely worded assurances that humans will somehow remain in control. Secretary of Defense Carter recently asserted that “there will never be true autonomy” and “there’s always going to be human judgment and discretion,” but went on to clarify that what he meant was that humans would “give orders and instructions such that everything that is done, is done in a way that is compatible with the laws of armed conflict,” while the killer robots would, in fact, be operating fully autonomously under those orders and instructions so they can “use the information on site to have the best effect.”

Despite what you may have read elsewhere, Directive 3000.09 does not ban or impose a moratorium on lethal, fully autonomous weapon systems; it merely requires senior-level certification that such systems meet a list of criteria you might hope any weapon system would have to meet anyway. The policy mandates “appropriate levels of human judgment” but makes it clear that this does not exclude autonomous weapons with the ability to choose and attack new targets on their own, including human targets.

Perhaps most significantly, the policy green-lights (no senior-level certification needed) what it calls “semi-autonomous weapon systems.” These include systems that would be given criteria for identifying targets that ostensibly have been “selected” by humans. The robots would then be dispatched to hunt, find, identify and attack such targets fully autonomously. What more could anyone want a killer robot to do?

I fully agree with Roff and Singer that America needs a new killer robot policy, but I’m not sure even they agree about what it should be. I certainly don’t agree with some of their suggestions.

I don’t want the US to “try to build consensus among its partners and allies about what shared policies in this area ought to be,” given what the US’s own current policy is. I would not suggest that “nations should agree on clear policies for [autonomous weapons] development, use and regulation” if they “are indeed going to push forward with this technology.” Rather, I think any new US policy should call for an “outright ban.”

I don’t think it would be okay to “release” killer robots once “a conflict has started.” I don’t think autonomous weapons that target “large-scale platforms like a tank or submarine” are okay; rather, they are probably even more dangerous than any that would target “individual humans” precisely because they are very much “related to nuclear weapons” and to confrontation between the nuclear powers, where indeed, “the risk for escalation is high.”

Roff and Singer are entitled to disagree with me on these points, but not to invent the fiction that the current policy, egregious as it may be, will expire next year and that a rush to write a new policy is therefore required. It isn’t, and that is a fact.

In my opinion, it is better to allow the discussion and debate to continue. A reaffirmation that the United States intends to follow the course set by the 2012 policy would signal to Russia, China and the rest of the world that they must do the same, and would only feed an autonomous weapons arms race that is already dangerously accelerating.

So, don’t vote for or against killer robots, but do speak up about them.
