Humans Making Decisions

The Future of Life Institute has posted on its website an “Open Letter from AI & Robotics Researchers” calling for “a ban on offensive autonomous weapons beyond meaningful human control.” It’s generally an excellent letter. I’ve added my signature, and urge you to add your own to the list that is already hundreds long and growing rapidly – you don’t have to identify as an “AI & robotics researcher” although most of the signatories do.

The wording of the letter does raise some questions that its authors may have found convenient to avoid, but which must be addressed as we move toward the realization of a ban on autonomous weapon systems (AWS). To define the issue it is addressing, the letter states that:

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The first part of this is adapted from the language of Department of Defense Directive 3000.09, which, as I’ve written here and in the Bulletin of the Atomic Scientists, hides the entire issue of what constitutes human control behind the ambiguity of its non-definition of “target selection.” The second part brackets this ambiguity but is somewhat ambiguous itself. Some of the questions that remain unanswered here include:

  • Does the category of autonomous weapons always include systems that “search for and eliminate people,” or only if “certain pre-defined criteria” and not other kinds of criteria are used? In particular, if the criterion were identity – i.e., the person is a known individual whom humans had already decided to kill – and if a robot could identify that person with high reliability, should that be allowed? What if the target was not a person, but a particular object, or any object of a certain kind?
  • Does the category of “cruise missiles” include weapons that autonomously hunt targets within some area? Is it allowable to strike military targets, which may have people in or around them, on the basis of “certain pre-defined criteria”?
  • What do we mean by “humans make all targeting decisions”?

According to US policy, in the interpretation of its author Paul Scharre and his coauthor Michael Horowitz, if a human has decided that a target should be attacked, it is acceptable to send a machine to hunt, identify and attack that target fully autonomously. The policy calls such weapons “semi-autonomous.” Inevitably, such systems must make their identification decisions by applying certain pre-defined criteria to information derived from sensor data.

What if humans have decided that any tank, ship or plane of a certain type, found within certain geographical boundaries, is an enemy target and should be attacked? Scharre has told me privately that he does not believe the policy allows this, but it explicitly allows “target groups” to be “selected” by humans and their engagement to be delegated to machines. In practice, since particular objects can only be recognized by their physical characteristics, if there is more than one object of a given type in an area – as will routinely occur in war – the use of such weapons inevitably entails their selecting one object or another for engagement.
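To make the point concrete, here is a minimal, purely hypothetical sketch (every name, criterion and number is invented for illustration; this is not any real system’s logic). A human “selects” a target group by loading criteria; but when several objects match, it is the machine that determines which particular object is engaged:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A sensor track: what the machine 'sees', not what a human saw."""
    kind: str          # classifier output, e.g. "tank"
    confidence: float  # classifier confidence, 0..1
    lat: float
    lon: float

def select_engagement(tracks, criteria):
    """Hypothetical 'semi-autonomous' targeting logic.

    A human loaded `criteria` (a target *group*); the machine alone
    decides which concrete object, if any, gets engaged.
    """
    matches = [
        t for t in tracks
        if t.kind == criteria["kind"]
        and t.confidence >= criteria["min_confidence"]
        and criteria["min_lat"] <= t.lat <= criteria["max_lat"]
        and criteria["min_lon"] <= t.lon <= criteria["max_lon"]
    ]
    if not matches:
        return None
    # Several objects match the human-defined group; the machine,
    # not the human, picks this one and not that one.
    return max(matches, key=lambda t: t.confidence)

criteria = {"kind": "tank", "min_confidence": 0.9,
            "min_lat": 30.0, "max_lat": 31.0,
            "min_lon": 45.0, "max_lon": 46.0}
tracks = [Track("tank", 0.93, 30.4, 45.2),
          Track("tank", 0.97, 30.6, 45.7),
          Track("truck", 0.99, 30.5, 45.5)]
chosen = select_engagement(tracks, criteria)
```

The human decision lives entirely in the `criteria` dictionary; the final line – which object, of several matching ones, is attacked – is the machine’s determination. That is the whole argument in ten lines.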

Thus, it is not enough to distinguish autonomous weapons from those for which “humans make all targeting decisions” without engaging what this means in detail. Leaving this question unaddressed risks being presented with an “autonomous weapons ban” or “meaningful human control” law that is effectively meaningless because it allows killer robots to make every kind of tactical targeting and engagement decision provided that, under some doctrine, the decisions the robots make are the ones humans intended they should make.

To accept this would be in effect to say that killer robots are just fine, as long as they work as intended. To put this another way, the definition of a “semi-autonomous weapon system” appears to be “a fully autonomous weapon system that hasn’t malfunctioned yet.”

One high-ranking activist has told me that it’s not the role of campaigners to stake out the details of an agreement, because the governments will do this and negotiate among themselves. I question this: shouldn’t civil society be alert to the danger of solutions that only paper over the issues we’re concerned about, and shouldn’t we be clear with governments about what we want and what we would find inadequate or unacceptable? And isn’t the time to start being clear about this right at the beginning, when the issue is being framed in people’s minds and in the language that will later be used to discuss solutions?

In a recent article in The National Interest, Scharre and Horowitz pressed for surrender on the issue they have defined as one of “advanced precision-guided munitions” or, in the language of DoDD 3000.09, “fire-and-forget or lock-on-after-launch homing munitions” which the policy classifies as “semi-autonomous”:

Ban proponents should also clarify whether they see automated air and missile defense systems and fire and forget homing munitions, both of which have been used for decades by dozens of militaries, as problematic and targets of a ban.

Since the Campaign to Stop Killer Robots has identified its objective as a ban on “fully autonomous weapons,” it might seem logical for the Campaign to say it’s not concerned with “semi-autonomous” weapon systems. But this would be allowing Scharre and the Pentagon to define the terms used and the boundaries of our concern.

Rather, we should reject any lexicon in which systems that operate and make lethal decisions fully autonomously can be classified as less than fully autonomous weapon systems. And we should clarify that the decision to identify a bunch of sensor data as representing a particular object that is intended as a target is a lethal decision and one for which a human being must be held accountable.

The presumption must be that any systems that make such decisions are to be prohibited unless explicitly permitted (under stringently defined conditions). Permitted systems might include “grandfathered” fire-and-forget missiles of very limited capability, or purely defensive interception systems against uninhabited incoming munitions, provided such systems operate under accountable human supervision. As I’ve written here, this is a level of detail that will have to be negotiated, but civil society needs to be a part of that negotiation, and needs to be alert and clear about not giving too much away.

Again, the presumption should be to ban every kind of lethal decision by machines, and require accountable human control at every significant decision point. Any exceptions would be concessions for the sake of getting an agreement. But to concede that machine decisions are always OK if they correspond with human intentions (some go so far as to suggest human desires, or even human values as interpreted by machines) would be to concede everything of principle, and to reduce the issue to an engineering problem – one whose solubility would imply the acceptability of a future in which conflict and war could be fully automated and entirely out of human control.


Hypersonic Debate!

Over at the Bulletin of the Atomic Scientists, we are on the second round of a three-way debate about the idea of a hypersonic missile test ban, which I first proposed on this blog almost a year ago. I am joined in this debate by Rajaram Nagappa, a genuine rocket scientist, and Tong Zhao, a Chinese policy analyst, neither of whom estimates the test ban to be a likely proposition but neither of whom has, so far, identified any compelling reason why it wouldn’t be a good thing. So far, I’m scoring that as a win.

After my article on the test ban appeared in the Bulletin last September, another author – a US Air Force Lieutenant Colonel – posted an article the following month in the US military newspaper Stars and Stripes making much the same proposal. Lt. Col. Schreiner was then on leave as a research fellow at the Stimson Center, and he made a strong case:

There is a window for action open right now. The U.S. should weigh the long-term strategic advantages of these weapons against the possible risks that they could destabilize the international system and drive the world into another arms race. We hold the overwhelming advantage across the spectrum of military capability. We are in the unique position to sound the call for halting this technology. We have the chance to lead the world out of an arms race instead of into one.

Curiously, the original article is no longer available at Stars and Stripes online (other articles posted the same day are available) but can be found in its print archive and also archived on the Wayback Machine.

Killer Robots and Laser-Guided Bombs: A reply to Horowitz & Scharre

Following the New York Times’s front-page publication of an article by John Markoff about the concerns I and others have raised about existing and emerging missile systems that search for, identify and attack targets fully autonomously, Michael Horowitz and Paul Scharre weighed in with a piece accusing me (by name) and other unnamed “activists” of “shifting” the issue of “lethal autonomous weapon systems (LAWS)” (the rubric used by the Convention on Certain Conventional Weapons – CCW) so as to include “precision guided munitions” (PGMs) which, they argue, “have saved countless civilian lives over the last generation”.

I have much to say about the rhetoric used by Horowitz and Scharre, but first let me cut to the heart of the matter. The weapon systems at issue here are not the laser-guided, television-guided or GPS-guided bombs and missiles that have been in use for decades. I was specifically quoted in the Times article talking about the Long-Range Anti-Ship Missile (LRASM), currently in advanced development by Lockheed Martin for DARPA and the US Navy. Other systems mentioned both in Markoff’s article and in Horowitz and Scharre’s include Kongsberg’s Joint Strike Missile (Norway), a still-in-development variant of its Naval Strike Missile, which is just entering service, and MBDA’s Brimstone missile (UK), which was first introduced in 2005.

What these missiles have in common is that, while they are launched in the general direction of their intended targets, and told what the targets should look like to their various sensors, in most cases the targets are out of range of the sensors at the time of launch, so the missiles are launched without being “locked on.” Although in some cases the missiles may receive updates in flight, and Brimstone, for example, has the capability to be laser-guided to the target by a (presumably human) spotter, all these systems also have the capability to autonomously search for targets whose sensor signatures match those in the missiles’ onboard computer databases, and for the computers to decide that what the sensors are seeing is in fact a legitimate target, and to attack it without further human intervention.

Horowitz and Scharre call these systems “next-generation precision-guided weapons,” and argue that they are “not the self-targeting ‘killer robots’ of our nightmares.” Yet Department of Defense Directive 3000.09, “Autonomy in Weapon Systems,” — the United States’ de facto policy for killer robots, of which Scharre is reputed to be a principal author — refers to such systems as “’Fire and forget’ or lock-on-after-launch homing munitions”, and classifies them as “semi-autonomous weapon systems” which are governed by the Directive. As well they should be, since the definitions given in the Directive are sufficiently broad that “semi-autonomous” weapons could include anything from the Brimstone missile to — as I’ll explain below — The Terminator himself (or, itself).

And what is the policy? According to Directive 3000.09, provided semi-autonomous weapon systems meet certain unremarkable criteria, which one might expect would apply to any weapon system (thorough testing, security measures, appropriate doctrines for use, etc. – plus, clearance by the Pentagon’s lawyers), they are fully green-lighted for immediate development, acquisition and use.

Note that the category of fire-and-forget or lock-on-after-launch homing munitions does not include laser-guided or television-guided munitions that home on a target held continuously by a human operator, nor does it include GPS-guided munitions, which simply go to a given location and explode. Those weapons are not governed by Directive 3000.09. But in response to questions by Markoff, the Navy confirmed that it does classify LRASM as “semi-autonomous” under the Directive.

When is a duck?

Equivocation is the ancient rhetorical strategy of exploiting ambiguities in the meaning of words and terms – or, equivalently, vagueness in the drawing of boundaries around categories – in order to confuse and confound thinking. Horowitz and Scharre accuse me of “Defining the precision-guided munitions used by countries around the world as the same thing as the Terminator-style robots of the movies”. But I am doing no such thing. Rather, it is they who are equivocating, conflating “semi-autonomous weapon systems” as defined (in Scharre’s own work) by the Pentagon – and thus implicitly recognized as a departure into autonomy – with legacy PGMs that are not generally described as in any way autonomous.

According to Horowitz and Scharre, we learn little from the fact that the makers of these emerging and future missiles describe them as autonomous, since “defense contractors love to pepper the descriptions of their weapons with the word ‘autonomous,’ emphasizing their advanced features… .” But “autonomous” isn’t just a buzzword, it is commonly understood to mean that a system operates without human or other outside intervention; moreover, since at least the 2012 release of Directive 3000.09, if a weapon system is described as autonomous, that is generally understood as referring to autonomy in the process of making the kill, and not some ancillary function. Interestingly, LockMart has updated their LRASM home page, which used to describe it as “autonomous,” so that it now says “semi-autonomous.” Apparently, they believe the words mean something.

But this is indeed the point at which the terms in use, and the Pentagon’s definitions of them, become slippery. Horowitz and Scharre argue that these missiles aren’t truly autonomous, since “a person selects the targets they are engaging.” At issue here is not so much the meaning of “autonomous” as it is the word “select.” (“Engage” is straightforward; it means actually attempt to hit a particular target.)

Directive 3000.09 defines “target selection” as “The determination that an individual target or a specific group of targets is to be engaged.” That’s all it says, and that leaves a great deal unexplained.

Does “the determination” refer to a policy decision, an intelligence finding, a decision at the level of a tactical commander, or is it the action of a weapon system operator in giving commands to a machine? Does the mere loading of certain criteria for target identification fully determine that a particular target, and no other, will be engaged, or does the machine still need to make the decision that it has found a target which matches those criteria?

What does “specific group of targets” mean? Could we, for example, task a salvo of missiles to seek and destroy any T-62 tanks found within certain boundaries? Or task a robot to kill any of a list of known enemy combatants? Or an individual targeted person?

It isn’t even clear that “target selection” could not be defined in terms of behavioral criteria, such as “the sniper who’s firing on us.” Or how about “any person observed to be firing a weapon toward friendly forces”? Could this be interpreted as constituting “a specific group of targets”? Perhaps not, but Directive 3000.09 does not explicitly exclude this. It matters tremendously, because unless a line is drawn somewhere, it remains unclear how “semi-autonomous weapon systems” differ from anything that we would call fully autonomous.

To put it colorfully, The Terminator, with its mission to find and kill a specific person, could then be considered a semi-autonomous weapon system, and even the mayhem it caused along the way, and the killing of innocent people named Sarah Connor, could be excused as acceptable collateral damage in the service of a paramount military objective (prevent Sarah Connor from giving birth to the Resistance leader). This is pop culture and sci-fi, of course, but the point is, Where will we draw the line, once we have decided to let machines decide on their own that they’ve found valid targets, and attack them?

For Horowitz and Scharre, a weapon that operates autonomously remains “semi-autonomous” as long as “a person selects the targets.” Yet this leaves it to a machine to decide when it has found and identified a target that a person has selected (whatever that means). Does the mere fact that some person has in their mind an intention, that certain targets should be attacked, invalidate all of our concerns about letting machines make tactical decisions, including the decision to engage this particular target now?

Probability of atrocity

Directive 3000.09 attempts to preempt this issue by saying that the “lock-on-after-launch homing munitions” to be considered as “semi-autonomous weapon systems” are those that “rely on TTPs to maximize the probability that the only targets within the seeker’s acquisition basket when the seeker activates are those individual targets or specific target groups that have been selected by a human operator.” TTPs are “tactics, techniques and procedures,” micro-doctrines for the use of particular weapon systems. This is saying, in other words, that if a missile’s seeker might not be able to distinguish a military aircraft from a civilian airliner, then whoever decides to launch the missile is supposed to follow procedures that will ensure that there won’t be any airliners near where the missile is being sent to look for military targets – or at least “minimize the probability.”

This is more or less how things are supposed to work today, but as shown by the recent case of Malaysia Airlines Flight 17, and the more distant memory of Iran Air 655, behavior in combat often fails to conform to doctrine. These tragedies were caused by human error, perhaps, but human error in the use of “semi-autonomous weapon systems” where, indeed, engagement of the target was intended by human operators, who made their decisions from long range, with insufficient information, and tasked their robots to carry them out.

This prompts a reality check on claims that PGMs “save lives” – and that killer robots will. The first question that should be asked is, “Compared with what?” Yes, high-altitude bombing of military facilities sited in urban areas is likely to kill many times more people if carried out with unguided bombs than if carried out with precision-guided weapons. But are you more likely to attempt such an attack, and perhaps kill some civilians, if you have PGMs? If you do not have them, are you more likely to judge the attack not worth the risk?

The United States has carried out more than 500 non-battlefield targeted killing drone strikes, killing at least hundreds of civilians (and injuring thousands) in the pursuit of a relatively small number, dozens, of intended targets. Would we do this if we did not have drones? Does anybody believe we would instead be carpet-bombing northern Pakistan, Yemen and Somalia?

What will be the actual impact of autonomous weapons on civilians, as well as soldiers and global security itself? Glib comparisons of the accuracy of guided vs. unguided weapons do not answer these questions.

Horowitz and Scharre note that “Human Rights Watch recently asserted that the use of unguided munitions in populated areas violates international law.” But you know, HRW did not say that populated areas should be attacked with guided weapons. Moreover, apart from the absurdity of the idea that any weapon “saves lives,” we have very good reasons to think that autonomous weapons pose a threat to humanity itself. We are at the cusp of a new global strategic arms race, a quarter century after finding, with much surprise, that we had survived the last one. Banning killer robots is not only a matter of human rights and humanitarian law. It is also a critical objective for arms control.

Join us now, on a journey of discovery and destruction…

Consider what LRASM is supposed to do. Details are classified, but it is believed to use both passive and active radar, plus optical and infrared sensors, and sophisticated signal processing and artificial intelligence technologies to autonomously hunt and kill. A highly entertaining video released by Lockheed Martin portrays a typical engagement sequence.

A “hostile surface action group (SAG)” lies hundreds of kilometers away, and includes both targeted and untargeted ships. The missiles will recognize the targeted ships in part by features of their geometry (which may be reflected both in optical imagery and radar returns) (0:35 – 0:55). The SAG has been spotted by satellites (1:00 – 1:10), drones or some other source of intelligence, but by the time the missiles are close enough to see the SAG, it will have moved so far that the missiles will need to search a wide area and discriminate the SAG from other ships.

Launched from surface ships or aircraft, a salvo of LRASMs speeds toward the SAG (1:20 – 2:20). For a while they are able to receive targeting and guidance updates from both the launching ships and satellites, but then they pass into a “communications and GPS denied environment” (which may be due to destruction of the satellites in an ASAT attack, destruction of the launching ships or other command facilities, jamming, the need for stealth, or any combination of these factors) (2:35). From this point forward, the missiles are acting fully autonomously, although they can still talk to each other (their highly versatile radio systems integrate radar, jamming and communications). They are following a planned route, but along the way, they may spot unexpected threats, such as another hostile ship or radar signal (in general, you can see active radar before it can see you). They autonomously decide to reroute around the threats (2:50 – 3:20).

As the missiles approach the vicinity of the SAG (3:30), their sensors must scan a wide area of uncertainty (AOU) which will turn out to contain commercial ships as well as the SAG. The missiles proceed through the steps of “AOU Reduction,” “target identification” and “terminal routing,” choosing their own routes to the targets identified by “criteria match” (3:30 – 3:50). By this time, the missiles can be seen by the targeted ships’ own active radar, so as they approach, they will descend to sea-skimming altitude and apply other “enhanced survivability techniques” (4:00 – 4:20). Presumably, by talking to one another, the salvo of missiles can plan a more effective attack, from multiple directions.

Turning on their own active radar (4:23), the missiles bypass untargeted ships while avoiding any threats they might pose. They scan the targeted ships to determine optimal aim points. At the last moment (4:45), the targeted ships’ close-in weapon systems swing into action and attempt to engage the LRASMs, but it’s too late.
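The engagement sequence described above reads like a simple autonomous state machine. The sketch below is my own hypothetical reconstruction based solely on the public video – the phase names and the cut-over point are illustrative, not Lockheed Martin’s design – but it makes the key structural fact visible:

```python
# Hypothetical reconstruction of the LRASM engagement sequence as
# portrayed in the promotional video; phases and logic are illustrative only.
PHASES = [
    "launch",            # human decision point: salvo launched toward the SAG
    "midcourse_update",  # receives targeting updates while data links last
    "comms_denied",      # links lost: everything below is machine-only
    "threat_reroute",    # detects and routes around unexpected threats
    "aou_reduction",     # scans the area of uncertainty for candidate ships
    "target_id",         # 'criteria match' against onboard signatures
    "terminal_routing",  # picks its own route and aim point
    "engage",            # attacks without further human intervention
]

def human_in_the_loop(phase):
    """In this sketch, humans influence only the phases before loss of comms."""
    return PHASES.index(phase) < PHASES.index("comms_denied")

autonomous_phases = [p for p in PHASES if not human_in_the_loop(p)]
```

The point of the sketch is where the human decision sits: every phase after loss of communications – including target identification and the engagement itself – is machine-only.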

Wham. Thunder. Dark skies. Lockheed Martin.

Markoff slightly misquoted me in the Times article, which actually was largely an outgrowth of my discussion of LRASM with him when we met at the DARPA Robotics Challenge in Miami in December last year. Horowitz and Scharre may have been a bit gleeful in re-quoting the resulting somewhat crazy-sounding statement that LRASM represents “pretty sophisticated stuff that I would call artificial intelligence outside human control.” It should have read “operating outside human control,” but either way, I stand by the statement.

No, I don’t believe LRASM is an evil AI bent on exterminating humanity, but it certainly looks to be state-of-the-art technology, and if its “concept of operations” doesn’t qualify it as a fully autonomous weapon system, what exactly would qualify? That is the question I am posing, and I won’t take “something that actually selects its own targets” as an answer, unless you can explain exactly what “selects” means, and how that differs from “interprets a bunch of sensor data in terms of its own database and algorithms, and on the basis of that comes to its own determination that a particular ‘target’ – i.e., a collection of pixels and pulses – and not that one or the other one over there, is to be engaged.”

Arms races and arms control talks

Of course, it isn’t only the United States that is developing this technology; as previously noted, the UK and Norway also have systems with some of the same capabilities, and Russia, China and many other nations have such systems both in use and under development. So-called “beyond visual range” missiles such as the American AMRAAM air-to-air missile, loitering ground attack systems such as the Israeli Harpy anti-radar suicide drone, the cancelled American multirole loitering cruise missiles LOCAAS and NLOS-LAM, and many others, must all be considered in this light.

Should the ongoing LAWS negotiations seek to ban all such “fire-and-forget” or “lock-on-after-launch homing munitions,” or perhaps to regulate and draw some boundaries around them? Horowitz and Scharre warn that “An expansion of the agenda to include precision-guided weapons would most likely end CCW discussions.” But again, they are the ones attempting to conflate autonomous weapons with precision human-guided ones.

If banning killer robots really required rolling back military technology to the 1960s, I’d agree that raising this issue would lead to a collapse of the talks. But failure, or refusal, to address and engage the thorny issues raised by existing and emerging “semi-autonomous weapon systems,” as defined by the Pentagon, is equally likely to lead to such a collapse, because it would leave us without any coherent way to define boundaries around what we want to prevent, other than distinguishing “existing” from “future” weapons; and because, in reality, the action in autonomous weapons development today is almost entirely in the area of systems that plausibly fit under the Pentagon’s definition of “semi-autonomous.” The implicit theory is that tactics, techniques and procedures, combined with the discrimination capabilities of sensors and computers, will suffice to prevent most, if not all, unintended engagements. This is more or less how things have been done until now, but it is not an acceptable framework for the future, because it allows the development and use of autonomous weapons to proceed indefinitely, without ever bumping into any limits that have yet been written down.

Most of these new weapons are intended for force-on-force combat on and under the high seas, in the air and on battlefields away from civilian populations, to be used at the discretion of commanders who will be tasked to take any residual threats to civilians into account. Nobody has any intention of putting Robocops or Terminators onto urban streets any time in the near future, but the world’s major powers are gearing up in earnest for a new strategic arms race, one in which they will confront each other. LRASM is primarily targeted at China, and probably Russia now that the old Cold War is reputedly back on. The greatest threat to civilians is the same as it’s been for seven decades: nuclear war.

What is at stake in the CCW LAWS talks, and in the entire issue of killer robots (which does not go away if LAWS collapses), is quite simply, everything. That’s big enough stakes to justify a reexamination of trends in military technology that, indeed, have been under way for decades now. Perhaps the solution will be as simple as “grandfathering” everything that already exists. One can lay out broad principles, and then enumerate specific exceptions; it’s always possible to say, “No guns allowed, except for antiques.” Or perhaps, more plausibly, the construction of a robust and enduring arms control regime will require rolling back some things, while allowing others. It wouldn’t be the first time that concluding an arms control agreement has required scrapping some existing weapons.

What I am saying is that this kind of horse trading is going to be unavoidable, and trying to push it aside is a way to steer the discussion toward incoherence, sophistry, and ultimately irrelevance, while the main thrust of the robot arms race surges forward with greater and greater momentum.

US Should Propose a Hypersonic Missile Test Moratorium & Ban

UPDATE: Bill Gertz reported on Dec. 4 that China had conducted “this week” a third test of its WU-14 hypersonic boost-glide vehicle, according to his unnamed military sources. I expect that a test was conducted, although its outcome remains publicly unknown. Gertz quoted Carnegie expert Lora Saalman as saying that a third test (the quote gave no indication that Saalman had independent information about it) showed that the WU-14 “is a priority program for China”; Gertz also quoted China threat monger Richard Fisher as saying the test shows the need for more funding of rail guns, which “offer great potential for early solutions to maneuvering hypersonic weapons.” [I don’t think that makes even a little bit of sense. The problem with shooting at fast objects with fast guns is that they tend to shoot past each other, and the problem with rail guns is that rocket motors are cheap but guidance systems less so. But I digress.] The main point is that if, in fact, China did conduct another test, successful or not, there is nothing the United States can say about it, since we continue to say nothing about a possible test ban. Indeed, the more likely response is to accelerate US hypersonic programs, as advocated by another of Gertz’s sources.

Note: a souped-up and tangy sauced-down version of this post can now be found at Bulletin of the Atomic Scientists. [Just kidding, JM, just kidding.]

Recent Chinese (Aug. 7) and American (Aug. 25) hypersonic missile development tests have highlighted an otherwise little-noted element of the resurgent technological arms race, an element that now involves at least the United States, China, Russia and India, with France and Britain lurking in the wings and no doubt other nations watching closely. The failures of those tests have highlighted also another fact: hypersonic propulsion and flight are difficult technologies involving extreme airspeeds, temperatures, pressures, stresses and combustion rates, combined with the usual requirements for compact airframe construction and low weight. These technical challenges have frustrated hypersonic development programs for decades, and if their solution may now be within reach, using new materials and high-performance computing to solve the exquisite problems of extreme engineering, it remains inconceivable that hypersonic weapons could be developed, perfected and validated for operational use without actual testing.

Two Evil Birds, One Good Stone

A bit of clarification is in order here: hypersonic missiles actually fall into two distinct categories. In boost-glide, the hypersonic weapon is first “boosted” onto a ballistic trajectory by a conventional rocket. It may cover considerable distance as it flies to high altitude, then falls back to Earth, gaining speed, and finally, at some relatively low altitude, pulls into unpowered, aerodynamic horizontal flight. After that, it glides at hypersonic speed toward its final destination. The second category is powered hypersonic cruise missiles, which typically are launched with a small rocket to high speed, and then drop the rocket and ignite a supersonic-combustion ramjet, aka scramjet, for powered flight at Mach 5 or greater.
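A rough back-of-envelope calculation (my own, using deliberately round numbers – the speed of sound varies with altitude and temperature) shows why even the Mach 5 threshold matters strategically:

```python
# Back-of-envelope: at the Mach 5 threshold, taking a round 300 m/s for the
# local speed of sound at cruise altitude (illustrative, not a precise value),
# a hypersonic missile crosses 1,000 km in roughly eleven minutes.
speed_of_sound_m_s = 300.0
mach = 5
speed_m_s = mach * speed_of_sound_m_s           # 1500 m/s
range_km = 1000.0
time_min = (range_km * 1000.0 / speed_m_s) / 60.0
```

Slower than a ballistic reentry vehicle, but fast enough to compress decision times in any confrontation between major powers to minutes, which is precisely the danger at issue.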

The recent failed Chinese and American tests were of boost-glide systems, while the X-51 WaveRider, which the US successfully tested last year after a string of failures, is an example of the scramjet cruise missile. The boost-glide test failures were probably caused by issues with the booster rockets rather than with the hypersonic gliders, although system integration can also cause problems. In any case, these systems didn’t work, which demonstrates that both boost-glide and powered cruise missiles require testing. Such tests are readily observable by satellites, radar, signals intelligence and old-fashioned spying. A test moratorium would thus throw a huge obstacle in the path of all these programs, and a permanent test ban would make it clear that they aren’t going anywhere. And that would be a good thing, because where they are going is nowhere good.

It’s not often that one can say an entire technology is evil and should be stopped and banned because it has no positive use. Hypersonic missiles present such a case. There is simply nothing they are likely to be useful for outside of war between major, nuclear-armed powers.  LA to Tokyo in an hour? As unlikely as that is to become technologically possible in the near future, let alone economically justifiable in an era of high-cost energy and low-cost video telepresence, if it ever could make sense it would take the form of a large airplane, not a small missile. Low-cost satellite launches? Hypersonic space planes such as DARPA’s planned XS-1, which would lift rockets to high altitude and initial speeds around 3 km/s, might make sense, but again, to achieve economies of scale, they would tend toward large size; when cost is the driver it would make no sense to build them small.
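For a sense of scale behind the “LA to Tokyo in an hour” claim, here is a back-of-envelope calculation. The figures are my own illustrative approximations (roughly 8,800 km for the LA–Tokyo great circle, about 0.3 km/s for the speed of sound at cruise altitude), not any program’s specifications:

```python
# Back-of-envelope flight times (illustrative figures, not program specs):
# roughly 8,800 km LA-Tokyo great circle; speed of sound ~0.3 km/s at altitude.

ROUTE_KM = 8_800
MACH1_KMS = 0.30

for label, speed_kms in [("Mach 5 scramjet", 5 * MACH1_KMS),
                         ("boost-glide, ~3 km/s", 3.0),
                         ("ICBM, ~7 km/s", 7.0)]:
    hours = ROUTE_KM / speed_kms / 3600
    print(f"{label}: {hours:.1f} h")
```

Even sustained Mach 5 takes over an hour and a half on that route; only near-ballistic speeds actually beat the hour.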

What’s all this hype about anyway?

Back in the crazy days after 9/11, hypersonic weapons were sold as a form of “conventional prompt global strike” to fulfill the supposed need for a weapon that could be launched from fortress America and strike Osama bin Laden’s lair on the other side of the world in less than an hour. That was an idea so nutty that, unfortunately, few people took the matter seriously, even when it was later revealed that another killer app for prompt global strike was to destroy Chinese anti-satellite weapons before they could be launched. Of course, when the US military finally did get bin Laden, it was a Special Forces raid launched by helicopter from nearby Afghanistan. And it was never very clear why China would be any less offended by our targeting their strategic weapons and facilities, deep inside the Chinese mainland, with hypersonic cruise missiles rather than somewhat faster ballistic missiles. The vague reasoning is that the Chinese (or Russians, in some other scenarios) might mistake ballistic missiles with conventional warheads for those with nuclear ones. However, hypersonic missiles could also carry nuclear warheads, and more to the point, in an attack on China or Russia the likely targets would include Chinese or Russian nuclear weapons, and other systems of strategic importance in a major war between nuclear-armed powers. To imagine that such exchanges could be kept polite by using a fancier, slower, and hardly any stealthier type of missile is the kind of airy fantasy that survives in political discourse precisely because it obviously isn’t serious. But the race to develop and deploy hypersonic weapons is serious.

Although these weapons are slower than ballistic missiles, they are still very fast, and offer a different attack profile, presenting a qualitatively different threat to adversaries. The idea that they might be used to attack a nuclear power, and even its nuclear weapons and related strategic facilities, because they would be easily distinguished from ballistic missiles, and the enemy might be willing to believe that no hypersonic weapons carried any nuclear warheads, can hardly be expected to be stabilizing. Rather, this theory purports to pose a credible threat of strategic strikes in spite of nuclear deterrence. It must be expected that potential adversaries will seek countermeasures including symmetrical capabilities; indeed, this is apparently just what China and Russia are doing. Hypersonic missiles are not only intended for deep land attack, however. They are also likely to be used at sea, for attacking ships, island bases and shore facilities. Shortening the strike time for naval missile warfare is a recipe for hair-trigger confrontation between major powers contending for regional or global dominance. If there is a way to stop or slow this development, we should take it.

How to stop a speeding hypersonic missile race dead in its tracks

Fortunately, a hypersonic missile test ban would be one of the most rigorously verifiable arms control measures one could think of. It could begin with an informal moratorium, which might be agreed and announced among the major players, and followed up by negotiations for a binding, permanent ban treaty. I would propose a moratorium on tests of any aerodynamic vehicle of less than, say, 10 meters length and 1 meter diameter, traveling in powered or unpowered flight at speeds in excess of 1 km/s over a horizontal distance greater than 100 km. These numbers are somewhat arbitrary and could be fine-tuned or adjusted substantially while preserving the intent of the agreement. However, it is desirable to maintain as wide a margin as possible between what is allowed and what we seek to prevent. The numbers suggested here would just barely permit Russia and India to retain their joint BrahMos 1 supersonic cruise missile, while forcing them to cancel the hypersonic BrahMos 2. The US and China would then be permitted to develop comparable systems, but would have to cancel their hypersonic programs. While an even lower speed limit would be desirable, canceling future programs verifiably via a test ban should be easier to agree on than eliminating existing proven and deployed systems. We should just do it – and later do more.

The United States should take the lead in proposing a hypersonic missile test moratorium and seeking a permanent test ban. Production, stockpiling, deployment, transfer and use should also be prohibited under a permanent treaty, but the test ban is the critical element which makes any such agreement feasible, because it would be reliably verifiable and all but preclude the rest. Nations do not go to war relying on untested weapons, particularly not aggressive war, when they have a choice. Hypersonics are a technology particularly in need of thorough testing both to perfect and to validate weapon systems. Fortunately, it is also a technology, and a new type of weapon, that we are not in need of. Neither are any other nations, but of course we can’t expect that just because the United States proposes a test ban, other nations will line up to join in renouncing hypersonic missiles. That is one reason why I am not proposing that the US impose a unilateral moratorium on itself, although it wouldn’t hurt if we suspended testing for a while to show good faith. What we can reasonably hope is that other nations will see their shared interest in avoiding or slowing a dangerous escalation of the arms race. If not, we will resume our programs, while still advocating that everybody agree to stop. Let the Russians, the Chinese, the Indians or the French play the spoilers, and let’s seize the moral high ground. The fact that others might not join us there is no excuse for not going to the mountaintop and calling them to join us. Indeed, if we are unwilling to do so, others have every reason to be cynical about our real motives and intentions. I’m not sure myself that I know what those are. But I am reasonably sure that hypersonic missiles will not help to make America stronger or more secure, because everybody we might want to target with them will soon enough get their own, and the world will then be a more dangerous place.

version 1.1

An “ASAT Test Ban” is Not Arms Control

Writing in his regular column in Foreign Policy, @armscontrolwonk Jeffrey Lewis has recycled an old proposal that Bruce Macdonald and others have advocated for years: seek a global ban on tests of “debris creating” anti-satellite (ASAT) weapons, particularly hit-to-kill ASATs that impact their targets at thousands of meters per second and create thousands of small pieces of debris which can stay in orbit for years or centuries. The debris creates hazards for satellites and spacecraft, and poses the threat of a “chain reaction” of collisions which might make orbital space unusable.

Lewis summarizes his proposal as “no blowing things up in space that leaves a bunch of space junk.” However, while purporting to ban tests of hit-to-kill ASAT weapons, this rule would permit testing of exactly the same weapons against non-orbital targets such as ballistic missiles. China did just that in 2010, testing the same or an improved version of the weapon they’d tested three years earlier against a satellite, this time calling it a ballistic missile defense (BMD) test. Unlike China’s 2007 test, which earned worldwide condemnation for its massive contribution to orbital debris, the 2010 test was clean because of its low altitude (250 km), and because neither the interceptor nor the target missile was on an orbital trajectory. The US in 2008 tested an SM-3 missile, nominally a BMD weapon, against a satellite, on an orbital trajectory but at the same low altitude, and has conducted at least 15 more tests of SM-3 against missile targets since then.

The Tuesday Test Ban

The crucial point is that testing against missiles or other non-orbital targets is completely sufficient to validate a weapon as an ASAT. Like other “ASAT test ban” proponents, Lewis seems to be pretending that if a weapon is only tested against missiles and not “against live targets in orbit,” then the weapon might have “a latent capability to threaten space assets” but somehow that “latent capability” would not be an actual capability. However, Lewis does not actually claim that not testing against satellites would imply some uncertainty about whether the weapon would work against a satellite. If that were true, it would confer some real arms control value to his proposal. So I am guessing the reason he doesn’t say it is because he knows it isn’t true. It would be like saying that if the US only tests its ICBMs against Kwajalein, that leaves some doubt about whether they could hit Beijing.

Still not convinced? It would be like saying that if missiles are only tested on Thursdays, there would remain some doubt about whether they would function on Tuesdays. Trust me, I’m a physicist. When an interceptor and target approach each other out in space, there are various factors which may make a successful intercept more or less likely. One is the relative speed, aka closing speed, of the interceptor and target. This is not a strong function of any difference between orbital and suborbital targets. It is a strong function of the geometry chosen for the intercept and, in general, especially for satellite intercepts, the closing speed can be chosen to be arbitrarily slow. Also relevant is whether the sun is shining on the target and at what angle, whether the interceptor is approaching the target from below and viewing it against the blackness of space (or whether the Moon or Sun or some other interfering space object is behind the target) or approaching from above and viewing the target against the Earth, or against the limb of the atmosphere, and myriad other factors, none of which is dependent, in any systematic way, on whether the target’s velocity vector (relative to Earth’s center) happens to be one that, at given altitude, constitutes an orbit.
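The geometry dependence is easy to make concrete. A minimal sketch, using the law of cosines on the two velocity vectors (the speeds are illustrative round numbers):

```python
import math

def closing_speed(v_target, v_interceptor, angle_deg):
    """Relative speed of an intercept where the interceptor's velocity
    makes the given angle with the target's velocity (0 = tail chase,
    180 = head-on). Law of cosines on the velocity vectors, km/s."""
    a = math.radians(angle_deg)
    return math.sqrt(v_target**2 + v_interceptor**2
                     - 2 * v_target * v_interceptor * math.cos(a))

# Orbital target at 7.7 km/s, interceptor at 7.5 km/s: the closing speed
# is whatever the chosen geometry makes it, from a crawl to a sprint.
print(f"tail chase: {closing_speed(7.7, 7.5, 0):.1f} km/s")     # 0.2
print(f"head-on:    {closing_speed(7.7, 7.5, 180):.1f} km/s")   # 15.2
# A head-on BMD intercept of a 7.0 km/s suborbital missile by a 3 km/s
# interceptor is faster than the tail-chase satellite intercept above.
print(f"BMD head-on: {closing_speed(7.0, 3.0, 180):.1f} km/s")  # 10.0
```

The orbital target admits the slowest closing speed of the three; nothing about orbital status drives the number.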

To put it another way, all objects in near-Earth space, other than when they are being boosted by a rocket, are always on elliptical trajectories with the Earth’s center at one focus. It’s just that if the ellipse happens to be one that crosses the atmosphere and intersects the Earth, we call it “suborbital.” If it misses the atmosphere, we call it an “orbit.” Missiles and missile targets are generally on suborbital trajectories. But out in space, in terms of the physics of an intercept, whether the target is destined to hit or miss the Earth simply does not matter at all.
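This can be made concrete with standard orbital mechanics: an object’s current altitude, speed and flight-path angle fully determine the ellipse it is on, and “orbit” versus “suborbital” is just the question of whether that ellipse’s perigee clears the Earth. A sketch using the standard energy and angular-momentum relations (the sample numbers are illustrative):

```python
import math

MU = 398_600.0      # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0   # mean Earth radius, km

def perigee_altitude(alt_km, speed_kms, fpa_deg):
    """Perigee altitude (km) of the ellipse defined by the object's
    current altitude, speed, and flight-path angle above horizontal."""
    r = R_EARTH + alt_km
    eps = speed_kms**2 / 2 - MU / r                       # specific orbital energy
    h = r * speed_kms * math.cos(math.radians(fpa_deg))   # specific angular momentum
    e = math.sqrt(max(0.0, 1 + 2 * eps * h**2 / MU**2))   # eccentricity
    a = -MU / (2 * eps)                                   # semi-major axis
    return a * (1 - e) - R_EARTH

# At circular orbital speed, 250 km altitude: perigee stays at 250 km -- an "orbit."
v_circ = math.sqrt(MU / (R_EARTH + 250))
print(round(perigee_altitude(250, v_circ, 0)))   # 250
# Same point, same direction, at 6.5 km/s: perigee is deep inside the
# Earth, so we call it "suborbital" -- but the local physics of an
# intercept at 250 km altitude is unchanged.
print(perigee_altitude(250, 6.5, 0) < 0)         # True
```

The only thing that changed between the two cases is where the trajectory ends up, not what an interceptor at 250 km would see.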

People who really don’t understand this issue, which I assume does not include Lewis, may know that missile trajectories are described as “suborbital” and the speed of an ICBM, relative to Earth and at apogee (highest point) will be a bit less than that of a satellite. They may think, therefore, that hitting a satellite is a bit more demanding. But that is not true, either. As Lewis acknowledges, “China’s so-called missile defense tests represent a big threat to U.S. satellites. While shooting down missiles may be hard, shooting down satellites is easy.” This is mainly a function of target size and visibility, and the absence of countermeasures which might accompany a missile attack. It’s also due to the fact that satellite orbits are known in advance, while a missile defense must respond to a sudden attack with incomplete knowledge of where the missiles may be coming from, where they may be going, and when. It has nothing to do with generic differences in the trajectories of satellites and missiles.

In reality, the range of closing speeds and intercept geometries for ASAT and BMD missions are broadly overlapping, and the capabilities of an interceptor for ASAT use, under the full range of relevant conditions, can be thoroughly explored in tests that do not involve orbital targets and do not leave debris in orbit. Not only is a test against an “orbital” target not needed; it adds zero information that can’t be gained in “suborbital” tests.

The intercepts can even be limited to low altitudes, as the US does and as China did in 2010, in order to ensure that no debris is left in orbit. A system that is intended to intercept satellites in high orbit (LEO, MEO, GEO, etc.) can simply be tested at low altitude to verify its intercept capability, and flown to high altitude to verify its reach; in fact, both exercises could be combined in a single test.

Thus, as arms control, such an “ASAT test ban” would have absolutely no technical meaning, and could at most have political value. It would vaguely suggest a repudiation of using hit-to-kill weapons as ASATs, and it would prevent another highly provocative explicit demonstration of the capability to do so. It might also have some norm-setting value, although the fact that nations continue to develop, test and deploy weapons which have actual (not just “latent”) ASAT capability, and that may be intended for that role, might cast doubt on the value of whatever norm we’re talking about. I think the best one can say for this idea is that it would be space environmental protection. But then, it might only spur the further development of non-debris creating Green ASATs.

Real arms control limits weapons or places material obstacles in the path of their development, testing, production and deployment. This proposal does neither.

A Better Idea

What would have some meaning is a complete ban on any collision tests above the atmosphere. That means no “missile defense” tests involving an actual intercept, either. System tests and flight tests that include “flybys,” where the interceptor is instructed to pass by a target at close range but not hit it, could still be permitted, but the lack of an actual collision would leave some genuine uncertainty about the readiness of a system, whether for ASAT or BMD missions.

American analysts, including arms control proponents who are unwilling to challenge the reigning political assumptions, reflexively reject the notion of any limits on missile defense activities, despite the strategic destabilization these activities cause and despite the ultimate futility of any attempt to provide a meaningful shield against nuclear weapons.

But ironically, it is BMD critics who have historically demanded increased testing, while the successive agencies in charge of the program have resisted such demands. We know that these systems can hit targets; they’ve done it many times. When they fail, it is usually not just a narrow miss, but rather some catastrophic system failure, and until the source of the problem has been located and fixed, there is little point in staging further failures.

Under a collision test ban, testing and debugging of existing BMD systems could continue, without having to worry about embarrassing failures to actually hit something. The hardest (and almost certainly insoluble) problem facing BMD is the discrimination of warheads from decoys, or of decoys that contain warheads from decoys that don’t. Work on that problem can continue and does not require any collision tests.

If BMD has any value at all, it is to plant doubt in the mind of a potential attacker that a (possibly limited) first strike would achieve predictable results. On the other hand, the greatest danger of BMD is that decision makers will take risks because they believe the BMD can be relied on to work if needed. Uncertainty about how well the BMD would work is therefore stabilizing in the same way that uncertainty about how well offensive missiles would work is stabilizing. We don’t want anyone feeling too confident.

An exoatmospheric collision test ban would be stabilizing and would put a real obstacle in the path of the space arms race. An ASAT-only test ban would limit the creation of orbital debris, but would not create any impediment to the further development, deployment and proliferation of ASATs either nominally disguised as BMD or explicitly deployed as ASATs.

The Real Space Arms Race

Another point not mentioned by Lewis is that the latest (non-collision) Chinese test took place just after the US launched GSSAP, an advanced space surveillance system consisting of two satellites which will travel around the geosynchronous belt and photograph and probe everything that is up there. They could be directed to interfere with or crash into other satellites, although this is unlikely, or they could be carrying weapons, although this is also unlikely. What is likely is that when China gets around to launching a system like this, some Americans will make all kinds of accusations.

Lewis rightly warns that in addition to the US and China, Russia, Japan, Israel and India are among the nations “investing in hit-to-kill systems that can be used for missile defense or anti-satellite applications.” We are indeed at the threshold of “A world filled with hit-to-kill anti-satellite missiles” and indeed that “should be very disturbing.” But this is inescapably the result of United States policy, which for the past 3+ decades has rejected space arms control and pursued the technology of space weapons. By now we have deployed a substantial hit-to-kill ASAT arsenal, nominally as missile defense, and are pursuing more advanced and more usable Green ASATs, on top of our considerable capabilities for jamming and electronic/cyber warfare against space systems. You reap what you sow. Maybe arms control proponents should begin to sow some better ideas.

I have a paper covering much of this in greater detail, posted here.

UPDATE: Regrettably, Lewis has declined to respond to this critique, and blocked my comments about it (I can’t comment on FoPo because I don’t subscribe). In a Twitter exchange, he was dismissive and claimed that the arguments above had been “preempted” by his column. I don’t know exactly what he meant, but my best guess is that he admits his proposal is like a Tuesday test ban, but maintains that somehow it would still be meaningful — the vague suggestion, which I wrote about above, that in this case, the lack of testing on a Tuesday really does sow some doubt about whether a weapon will work on Tuesdays. Maybe he meant something else. I’d love to have him explain. Comments are open.

Autonomy without Mystery: Where do you draw the line?

The words “autonomy” and “autonomous” can take on various shades of meaning and lead into fascinating discussions in various contexts, including philosophy, politics, law, sociology, biology and robotics. I propose that we should ignore all of this in the context of autonomous weapon systems (AWS).

We should also not speak of “degrees” or “levels” of autonomy, but seek principles and definitions that point, as directly as possible, to a Yes or No answer to the question “Is this an autonomous weapon?”

In fact, the US Department of Defense has given us a firm foundation for such a definition in its Nov. 2012 Directive on Autonomy in Weapon Systems. According to DoD Directive 3000.09, an autonomous weapon system (AWS) is

A weapon system that, once activated, can select and engage targets without further intervention by a human operator.

I propose that this should be universally accepted as a starting point.

What remains is to clarify the meaning of “select and engage targets” – and also, to negotiate which autonomous weapons might be exempted, and under what conditions, from a general ban on AWS.

Select and Engage

“Engage” is not too hard. Weapons engagement of a target means the commitment of an individual violent action, such as the firing of a gun, the dropping of a bomb or the crashing of a missile into a target, with or without the detonation of a warhead.

It is thus potentially a more fine-grained term than “attack” as used in international humanitarian law (IHL), the law of war. An “attack” might encompass many individual engagements, and accordingly an attack decision by a commander may authorize many engagement decisions, yet to be made, by individual soldiers or subsidiary commanders.

As used in DoDD 3000.09, engagement includes both lethal and non-lethal effects, against either personnel or materiel targets.

“Target” is slightly more difficult, if only because the status of an object as the target of a weapon is an attribute of the weapon system or persons controlling and commanding it, not of the object itself. When exactly does an object become a target?

Fortunately for present purposes, the word “target” appears in the Pentagon’s definition only as an object of the verbs “select” and “engage.” Therefore, we can say that an object becomes a target when it is selected; before that it may be a “potential target,” another term which appears in DoDD 3000.09.

Of course, selection must occur prior to engagement; an object harmed without having been selected is called “collateral damage,” be it a house, a garden, or a person.

The word “select” is where the serious issues hide, and where the clarity of DoDD 3000.09 begins to break down.

Please Select a Target

The Directive defines “target selection” thus:

The determination that an individual target or a specific group of targets is to be engaged.

But what constitutes “determination”? Is this an intelligence finding, a command decision given to the operator of a weapon system, or is it the operator’s command given to the system? It could be any or all of these, but if the operator is the final human in the so-called “kill chain” or “loop,” (and the one person in whom all of these functions may be combined), then the transaction between the operator and the system is where we should focus our attention.

The classic way for a weapon system operator to determine that a target is to be engaged is to throw a rock or spear toward it, thrust a sword, or aim and fire a gun. The weapon may go astray, but there is no question that the intended target is selected by the human.

One may imagine that in a remotely operated, non-autonomous weapon system, target selection might mean that humans watch a video feed, and see something that looks, to them, like a valid military objective. A human commander determines (legally and operationally) that it is to be engaged. Using a mouse, joystick or touchscreen, a human operator designates the target to the system. This, plus some other positive action to verify the command, determines – materially – that the target is to be engaged.

However, DoDD 3000.09 admits a far more permissive interpretation of “target selection.” This becomes clear from its definition of “semi-autonomous weapon system (SAWS).”

Two types of SAWS are described. The first are

systems that employ autonomy for engagement-related functions including, but not limited to, acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.

This suggests that “target selection” could mean as little as giving a “Go” to engage after the system places a cursor over potential targets it has acquired, tracked, identified, and cued to human operators. Indeed, it is not at all clear what “human control… over the decision” means in this context.

The Meaning of Human Control

Noel Sharkey has discussed “meaningful human control” in terms of five “levels of control”:

  1. human deliberates about a target before initiating any attack
  2. program provides a list of targets and human chooses which to attack
  3. program selects target and human must approve before attack
  4. program selects target and human has restricted time to veto
  5. program selects target and initiates attack without human involvement

Sharkey focuses on deliberation as the thing that fulfills the requirements of IHL and makes “human control” meaningful. In the first level, the human “deliberates” and, like a rifleman, “initiates” the attack. In level 2, the human is more in the position of a senior commander, reviewing a “list of targets” and choosing among them. If no choice is made, there is no attack. However, in view of historical experience and “automation bias,” Sharkey concludes that the meaningfulness of this level of control is contingent. Machine rank-ordering of targets, for example, would tip the scale as “there would be a tendency to accept the top ranked target”. Insufficient information, time pressure, or a distracting environment could also render this level of human control meaningless.

Sharkey rules out level 3 and higher levels of autonomy as not meeting the requirement for human deliberation. However, it is not clear how “program selects target” differs from “program provides a list.” The list would probably not be on paper. Could it be a “gallery” like a dating site? Or a textual list with hyperlinks to intelligence reports, summaries of which open up automatically when the operator hovers over the list item? Could the system provide both a visual display of targets, and a textual list? Wouldn’t the visual display likely convey more information than just a short verbal description?

The best interpretation of the difference between levels 2 and 3 is that in the former, the human “decision maker” is being told “You decide what to target; here are some possibilities.” In the latter, the human is being told “Here are the targets; which ones should be hit, or not?” It’s a subtle difference, and in every technical and operational respect, the two situations might be identical.

Thus, while the focus on human deliberation – and the factors that might encourage and assist, or discourage and hinder it – is important and essential, the effort to enumerate and order a definitive hierarchy of autonomy levels once again leads to further questions.

The desired number of levels is two. We want to know which systems are autonomous.

Similarly, we should seek a definition of “human control” which is free of degrees of meaning. Weapons and conflict must always be under human control, period. Given any configuration of technical systems and human beings, we need to be able to decide whether this principle is satisfied, or not.

Of course, people may have more or less control of a machine, as becomes clear when we think of situations that arise with automobiles or nuclear power plants. But again, it is probably not possible to iron out all the dimensions of complexity to create a definitive hierarchy of control levels. Is it clear that a console providing more data and more knobs equates to more control? Perhaps, in general, but probably not always.

On the other hand, the operators of such machines accept moral and legal responsibilities to maintain control, and are held accountable for the consequences if they fail to do so.

I would propose, similarly, that acceptance of responsibility should be the standard for human control. Thus the principle of human control corresponds with the principle of human responsibility.

Just as commanders are not responsible for crimes committed by soldiers under their command, unless the commander directly ordered the criminal actions, so humans cannot accept responsibility for decisions made by machines. But if humans are to make the decisions, it is their responsibility to maintain control of the machines.

This makes it the responsibility of a human commander to determine whether she has sufficient information to make an attack decision, independent of any machine judgment. It also makes it the responsibility of a human operator to determine whether the system gives him effective control of its actions. Both should be held accountable for any “accidents.” It is not acceptable to say “the trigger was too sensitive” or “the computer said it was a valid target.”

This also suggests a system of accountability as a possible basis for the implementation of an AWS ban and verification of compliance with it.

Short-Circuiting Arms Control

The Pentagon’s distinction between semi-autonomous and autonomous weapons would also fail to support meaningful arms control.

The first type of SAWS includes systems which have every technical capability for fully autonomous target selection and engagement, requiring only a one-bit signal, the “Go” command, to initiate violence. A SAWS of this type could be converted to a fully autonomous weapon system (FAWS = AWS in the Pentagon’s lexicon) by a trivial modification or hack. Thus, permitting their development, production and deployment would mean permitting the perfection and proliferation of autonomous weaponry, with only the thinnest, unverifiable restraint remaining on their use as FAWS.
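To see why the remaining restraint is one bit thin, consider a caricature of the control flow this type of SAWS implies. This is hypothetical structure of my own, not any real system’s code; the point is only that the human’s entire role reduces to a single boolean gate:

```python
# A caricature of the engagement flow implied by the Directive's first
# type of SAWS. Hypothetical structure, not any real system's code.

def engagement_loop(acquire, identify, prioritize, fire, human_approves):
    """Acquire, identify and prioritize potential targets autonomously;
    fire on each candidate that the approval gate passes."""
    for candidate in prioritize(identify(acquire())):
        # The entire difference between "semi-autonomous" and "fully
        # autonomous" operation lives in this one-bit gate:
        if human_approves(candidate):
            fire(candidate)

# The trivial modification or hack that converts the SAWS to a FAWS:
# replace the human gate with a constant.
always_go = lambda candidate: True
```

Everything upstream of the gate — acquisition, identification, prioritization — is already autonomous; swapping the gate for `always_go` changes nothing else in the system.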

For the second type, there is not even a thin line to be crossed.

Fire and Forget

The second type of SAWS in DoDD 3000.09 comprises

“Fire and forget” or lock-on-after-launch homing munitions that rely on TTPs to maximize the probability that the only targets within the seeker’s acquisition basket when the seeker activates are those individual targets or specific target groups that have been selected by a human operator.

This means that SAWS, as defined by the Pentagon, would include a “homing munition” that, at some point after its launch, without further human intervention, acquires (i.e. detects the presence of, and roughly locates, via its seeker) a potential target or targets, thereafter tracks it or them, identifies it or them (either as particular objects or members of targeted classes, based on some criteria), and determines that one or more of the potential targets it has acquired and is tracking is to be engaged.

However, the definition (and thus the policy) does not regard this as “target selection” by the weapon system. Rather, selection is deemed to have occurred when the human operator launched the weapon, relying on tactics, techniques and procedures (TTPs) to “maximize the probability” that there are no “targets [sic]” which the seeker might mistake for the targets “selected” (i.e., intended) by the human.

The “seeker’s acquisition basket” may be regarded as a product of two functions (which may be further decomposable): the seeker’s inherent response to various objects – as viewed from various directions and distances under various conditions – and any constraints in space and time which are applied by the operator.

For example, a “beyond visual range” (BVR) anti-aircraft missile such as AMRAAM or a “beyond line of sight” (BLOS) anti-ship missile such as LRASM (under development by Lockheed Martin for DARPA) follows a flight path roughly predetermined at the time of its launch, or it may receive updates telling it to follow a slightly different course, on the basis of new information that the target(s) have moved.

At some point, it begins looking for the target(s), that is, “the seeker activates.” This may in fact be a multistage process, as the missile approaches its target(s) and passive radar, active radar, IR and optical sensors successively come within range and tactical appropriateness (e.g., active radar may be avoided for reasons of stealth).

In effect, the seeker’s search area is restricted by the flight plan of the missile, which may also be programmed to deactivate if it fails to locate an appropriate target within a designated area or time frame.

Alternatively, a “loitering missile” or “wide area search munition” may be instructed to search within a designated “kill box.” Examples include Israel Aircraft Industries’ Harpy “suicide drone,” which searches for enemy radars based on the signatures of their signals, and Lockheed Martin’s canceled LOCAAS, which was intended to autonomously recognize and attack a variety of target types, including personnel.

This is the meaning of “rely on TTPs”; the operator is supposed to follow procedures which minimize the probability that objects which the seeker might mistake for an intended target will be found within the area and time in which the missile will be searching.

Thus, if the seeker is not itself capable of distinguishing a cruiser from a cruise ship, a tanker plane from a jumbo jet, or a soldier from a civilian, it is the weapon system operator’s responsibility to follow procedures that at least minimize the probability that the wrong kind of object will be found within the seeker’s acquisition basket.

In practice, seekers will, either explicitly or implicitly, map their sensor data, via some computation, to a Bayesian probability that a potential target is an intended target. Since the missile must either engage that target or not, some threshold will have to be set for the decision to identify a target.
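The thresholding described here can be made concrete with a minimal, hypothetical illustration – the prior, likelihoods, and threshold value below are invented for the example, not taken from any actual seeker:

```python
# Minimal sketch: sensor evidence is mapped via Bayes' rule to a posterior
# probability that a contact is an intended target; engagement requires the
# posterior to clear a threshold. All numbers are invented.

def posterior(likelihood_target: float, likelihood_other: float,
              prior_target: float) -> float:
    """Bayes' rule: P(target | data)."""
    num = likelihood_target * prior_target
    den = num + likelihood_other * (1.0 - prior_target)
    return num / den

ENGAGE_THRESHOLD = 0.95  # assumed decision threshold

def identify(likelihood_target, likelihood_other, prior_target=0.5):
    p = posterior(likelihood_target, likelihood_other, prior_target)
    return ("engage" if p >= ENGAGE_THRESHOLD else "hold"), p

print(identify(0.9, 0.01))  # strong, discriminating evidence -> engage
print(identify(0.6, 0.4))   # ambiguous evidence -> hold
```

Wherever the threshold is set, the comparison itself is performed by the weapon, not by the operator.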

Once the decision to engage a particular target has been made, the missile may be considered to have “locked-on” to the target, which will be continually tracked without ambiguity. No new decision need be made, although complications may arise if the system is programmed to abort engagement on some conditions, such as its collection of further information about the target, or the possible presence of civilians.

Simpler missiles, in contrast, must “lock on” before launch, under direct human control and with a human operator selecting the target. If the target succeeds in evasive countermeasures, the missile may “break lock,” and then generally cannot re-acquire the target.

The Necessity of Decision

From this discussion, it becomes clear that the “lock-on-after-launch homing munition” does, in fact, make lethal decisions autonomously – it must decide whether to accept particular sensor data as the signature of an intended target and to engage that target, or, if there is more than one candidate, which to engage; otherwise it may continue searching, or decide to abort its mission.

It must make these decisions without further intervention by a human operator. The authors of DoDD 3000.09 may have wished to avoid using the word “select” in this context, but particularly in the case that there are multiple objects which might be engaged, it is clearly the weapon that makes the final determination that a particular potential target “is to be engaged.”
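The final determination described here can be sketched as a decision procedure. Again, this is a hypothetical illustration – the threshold, the scoring, and the timeout are assumptions for the sketch – but it makes the logical point visible: when several contacts qualify, something in the weapon's software performs the selection.

```python
# Hypothetical sketch of the munition's final determination: given several
# acquired contacts with identification scores, pick one to engage, keep
# searching, or abort. Threshold and timeout are invented assumptions.

def decide(contacts, threshold=0.95, time_remaining=120.0):
    """contacts: list of (contact_id, identification_score) pairs."""
    qualified = [(cid, s) for cid, s in contacts if s >= threshold]
    if qualified:
        # Multiple contacts may qualify; the weapon makes the final selection.
        best = max(qualified, key=lambda c: c[1])
        return ("engage", best[0])
    if time_remaining > 0:
        return ("continue_search", None)
    return ("abort", None)

print(decide([("A", 0.97), ("B", 0.99)]))        # ('engage', 'B')
print(decide([("A", 0.40)], time_remaining=0))   # ('abort', None)
```

Whatever word the directive uses, the `max(...)` step is a selection, and no human is in the loop when it executes.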

It also becomes clear that the system which autonomously decides whether, and which potential targets to engage, is not being controlled by humans when it does so. It is controlled by environmental conditions which may be somewhat unpredictable, and by its programming – that is, by itself. It is an autonomous system. There is no mystery in this; it is simply an operational fact.

Some may argue that the program was written by humans, but this position makes machine autonomy impossible, since machines owe their entire existence to humans. Besides, it is by no means certain that, even today, let alone in the future, software will be written and systems engineered solely by humans. In reality, the complexity of present-day technology already precludes meaningful human control or accountability for aberrant behavior. No one is going to be court-martialed for a bug.

The classification of “fire-and-forget or lock-on-after-launch homing munitions” as semi-autonomous may be convenient for the Pentagon, and some AWS ban proponents may prefer to accept it as well, in order to avoid the fact that many existing missile and some underwater systems fall into this category. However, this is an attempt to avoid the unavoidable, because not only do these systems already cross the line of de facto autonomous target selection and engagement, but there is no recess in the domain of fully autonomous weapons which is not accessible by further extension of this paradigm. To put it bluntly and colorfully, this is a loophole The Terminator could walk through.

Inappropriate Levels of Robot Judgment

A missile like LRASM may not fulfill the popular conception of a “killer robot,” but as depicted in a video published by Lockheed Martin, it is designed not only to autonomously maneuver around “pop up threats” posed by enemy radar and antimissile systems, and autonomously recognize the “hostile surface combat group” it was sent to attack, but also to discriminate the various ships in that group from one another, and determine which to engage. Several missiles might be sent in a salvo, and might coordinate with each other to determine an optimum target assignment and plan of attack.

The key fact is that these missiles would make final targeting decisions, based on fairly complicated criteria. DoDD 3000.09 places no upper limit on the sophistication of the sensors and computers or complexity of the algorithms and criteria for “lock-on-after-launch,” nor are “homing munitions” obviously limited to flying objects. If ships may be identified on the basis of signatures in their radar returns, might targeted killing not be carried out on the basis of face recognition and other biometrics?

More generally, might combatants not be identified by signatures of their behavior, dress, or any other cues that human soldiers rely on to distinguish them from civilians? Suppose a robot is sent out that is programmed to fire upon anyone who matches the biometric signatures of certain known hostile persons, or who is determined to be taking specific hostile actions. Is this not a fully autonomous weapon system?

The central conceit of DoDD 3000.09 is that “target selection” will be done only by humans or by autonomous weapon systems which are either anti-materiel weapons with non-kinetic effects, or else have been explicitly approved by senior officials; and in all cases, commanders shall exercise “appropriate levels of human judgment in the use of force.” But is there any limit to the levels of robot judgment that may be appropriately exercised by “semi-autonomous weapon systems”? DoDD 3000.09 provides no answer.

The negotiation of limits on SAWS is thus unavoidable if the project of a ban on FAWS is to succeed. It must be acknowledged that the distinction of the first kind of SAWS from FAWS is so thin as to be meaningless, both from the point of view of meaningful human control and of effective arms control. The distinction of the second kind of SAWS from FAWS is nonexistent.

The Line Drawn

I conclude that the first kind of SAWS must be subject to restrictions and accountability requirements if a ban on FAWS is to be meaningful; and the second type of SAWS must be acknowledged as actually a type of FAWS.

Is it too much to ask for a ban on missiles like AMRAAM and the autonomous Brimstone [see endnote], which already exist and have been used in war, or even missiles like LRASM which are close to reality? Perhaps, but arms control in the past has often entailed the scrapping of existing systems. Israel has already offered a successor to Harpy called Harop, which adds an electro-optical system and two-way control link so that the system is not necessarily fully autonomous.

It is also possible to say “No autonomous weapons, except these….” If it turns out political circumstances dictate that certain things must be grandfathered, the negotiation of explicit allowances and limits is preferable to leaving open a road to the perfection and proliferation of autonomous weapons of every kind. We should not accept a diplomatic fig leaf behind which an autonomous weapons arms race can rage unrestrained.

The point of such negotiations should not be to draw a line defining autonomy; that has already been done. A system is autonomous if it is operating without further human intervention. But a “ban on autonomous weapons” might provide explicit exceptions either for particular systems or particular classes of systems, based on detailed descriptions of their characteristics. Philosophical clarity is not the issue, for we have already achieved that. From this point forward, it’s just old-fashioned horse trading.


Endnote. The Peace Research Institute of Oslo (PRIO) has recently entered this discussion with a briefing paper on “Defining the Scope of Autonomy: Issues for the Campaign to Stop Killer Robots,” authored by PRIO researcher Nicholas Marsh, whose past work has focused on the arms trade and small arms in civil war. In this report, Marsh discusses at length the problem of fire-and-forget missiles such as MBDA’s Brimstone, which has both man-in-the-loop and fully autonomous modes. The latter capability, Marsh points out, has already been used in the Libya campaign.

Marsh also discusses the British Ministry of Defense’s statement about autonomous weapons, contained in its 2011 Joint Doctrine Note on The UK Approach to Unmanned Aircraft Systems. I think he places a bit too much emphasis on the statements on page 2-3 of the report about autonomous systems being “capable of understanding higher level intent and direction” and being “in effect… self-aware” and “capable of achieving the same level of situational awareness as a human.” Whatever the authors of these statements had in mind – before the emergence of this issue into the public sphere – the British government has since issued statements indicating that it understands the issue of autonomous weapons in terms of human control, and that “the MoD currently has no intention to develop systems that operate without human intervention in the weapon command and control chain”. As Marsh points out, this still leaves a good deal of ambiguity and wiggle room for the MoD, which, as he demonstrates, is actually pursuing weapons and policies similar to those of the United States.

Marsh, like the Campaign itself to a large degree, grounds the issue in humanitarian disarmament, which he states “is designed to reduce human suffering rather than to manage the affairs of great powers.” Yet this prescriptive distinction is evidently problematic, since if humanitarian disarmament is to be effective in reducing human suffering, it must effectively restrict the actions of powers great and small; nor are these restrictions very different, other than in the particular weapons covered, from treaties not framed as ‘humanitarian’ in origin, such as strategic or conventional arms limits and bans.

In fact, most of Marsh’s paper points, whether intentionally or not, to the inadequacy of a model of humanitarian disarmament rooted only in the landmines and cluster munitions bans. These treaties do a great deal to reduce civilian suffering in war and its aftermath, but their restriction on the freedom of action of the major powers is relatively marginal – and those same powers nevertheless have largely refused to sign on, although their actions are to some extent inhibited by the norms that have been established.

In the case of autonomous weapons, we are seeking to call off the next great arms race.

Autonomous weapons are potentially the most important and foundation-shaking development in military and global security affairs in decades. They are not merely a threat to human security from their potential for indiscriminate slaughter, but are even a threat to international peace and security.

Therefore, we must address the issue in its widest scope, and on the basis of the strongest possible foundations: the principles of human control, responsibility, dignity, and sovereignty, and that conflict is human – and must not become autonomous.

Is Russia leading the robot arms race? Not really.

Is that an autonomous bear in the woods?

Russia has been getting a lot of bad press lately, much of it richly deserved, IMHO. Since this opinion is widely shared, it might be tempting to try to pin every kind of villainy on Vladimir Putin, especially if the goal is to vilify the villainy by associating it with a known villain, rather than the other way around.

So let me be clear: this isn’t about fairness, and it isn’t any enthusiasm on my part for Russian aggression nor, lord knows, for autonomous weapon systems (AWS). But given the evidence that I have seen, I think it would be a bit premature to credit Russia, as David Hambling did in New Scientist, with “taking the lead in a new robotic arms race” while the “squeamish” West holds back, or to accuse Russia of “norm anti-preneurship” aimed at disrupting “international norm-building efforts to regulate the deployment of fully autonomous weapons,” as UMass Prof. Charli Carpenter did in her blog, citing Hambling’s report. [Note: After I emailed her about the unreliability of Hambling’s spin, Prof. Carpenter updated her post to reflect “uncertainty about the nature of Russia’s actual announcement”. Besides, she’s done great work on this issue.]

No, I’m not out to defend Russia, which if not actually taking the lead, is openly declaring its intent not to be left behind. Last June, Deputy Prime Minister Dmitry Rogozin, in charge of the defense industry, announced the decision to establish a national laboratory for military robotics, presumably similar to the US Navy’s Laboratory for Autonomous Systems Research, which opened in 2012. Rogozin was previously associated with the push to create a Russian version of DARPA, the agency that oversees a great deal of the Pentagon’s cutting-edge robotics research. In December, Rogozin announced plans to spend $640 billion on military modernization through 2020, with an emphasis on robotics. Most disturbingly, in a March 21, 2014 article in Rossiyskaya Gazeta, Rogozin summoned Russian industry to “create robotic systems that are fully integrated in the command and control system, capable not only to gather intelligence and to receive from the other components of the combat system, but also on their own strike [emphasis added].”

The risk here is that tarring Russia as the Mordor of the robot orcs will contribute to the misperception that the US and its allies are any less bullish on AWS than Russia has lately advertised itself as being. This could be harmful in two ways: It could bolster America’s own full-speed-ahead policy for development and use of AWS – which, contrary to popular belief, does not slow the development of fully autonomous weapons – and it could push Russia into a defensive corner in diplomatic discussions, such as those upcoming next week in Geneva (which both Charli and I will be attending). Russia will be a participant in those discussions, and previous Russian statements at the UN have been supportive of emerging public concerns about the issue, though not any particular norms.

I can hear it already: “You want to ban killer robots? Tell that to Putin!” Of course, China also serves this rhetorical purpose, and those wishing to point the finger that way can cite Lijian, a stealth drone that looks a lot like the X47B but as far as we know has not yet landed on an aircraft carrier, or Norinco’s new GS1-AT “smart” cluster munition, which resembles an artillery version of the sensor-fuzed weapon that the US has had in service since the 1990s.

In comparison, despite a history with drones dating to the 1950s, Russia “lags behind other militaries in building unmanned aerial combat vehicles, according to U.S. officials”—and when Bill Gertz reports that, it’s reliable. In June, Defense Minister Sergei Shoigu bemoaned Russia’s “technological backwardness” and “shortage of skilled personnel” and called Russian military robots “inferior to their foreign analogs.” Putting an optimistic spin on things, one Russian official said, “From the point of view of theory, engineering and design ideas, we are not in the last place in the world.” Not satisfied with that, Defense Minister Shoigu called for doubling the pace of “developing the combat robotic equipment.”

Russia is expected to soon begin testing an armed drone comparable to the Reaper, but it is not expected to be deployed before 2016. I have not found any reports of actual Russian unmanned maritime systems, only talk. In unmanned ground vehicles (UGVs, appropriately enough) the Soviets first experimented with “teletanks” in the early 1930s. The present story stems from Hambling’s account in New Scientist of reports that Russia may soon use armed UGVs to help guard its ICBM sites.

Мобильный робототехнический комплекс (Mobile Robotic Complex)

On March 12, Maj. Dmitry Andreev, a spokesman for the Strategic Rocket Forces (RVSN) told RIA Novosti (in Russian) that

“In March, the Strategic Missile Forces began to explore issues of the application of mobile robotic system (military) (MRK BH), created for the protection and defense of the Strategic Missile Forces facilities”

This was part of “retooling” planned for 2014 at five sites “for new types of security systems, including the use of modern technological advance in the development of robotic systems.” RIA’s English version of the story seemed less tentative, stating that “testing” began in March, and the bots “will be deployed at five ballistic missile launch sites.”

TASS (in English) quoted Andreev as saying the MRK BH is designed for

“reconnaissance, spotting and destroying stationary and moving targets, fire support of military units, patrolling and protection of important facilities.”

Furthermore, according to TASS,

It can provide an option to conduct combat actions in the nighttime without de-masking factors and an option of aiming weapons, tracking and hitting targets in automatic and semi-automatic control modes. The advanced combat system is equipped with optical electronic and radar reconnaissance stations.

TASS’s Russian version does not differ significantly in these details. A search in Russian and English did not turn up other independent reports of Andreev’s statement, and in an email to me, Hambling confirmed that he did not have additional details of the statement.

You might want to worry more about the big thing in the background.

A month later, Novosti VPK (literally “Military Industrial Complex News” – Russians apparently use the term without irony) reported that tests of new security equipment, including the MRK BH as well as a larger manned system, would be conducted on April 21 and 22 at Serpukhov Military Institute, a branch of the RVSN Academy. Russian TV news reports posted that day, as well as a YouTube video posted on April 25, show an apparent demo-expo of the new security equipment. A man in a black suit grabs hold of a joystick and presses a red button. The robot’s engine starts and it drives off. A squad of soldiers, guns raised, pad behind the MRK. The MRK leads a convoy of transporter-erector-launchers (which shuttle the nuke-lobbing missiles around to frustrate targeting by you-know-who). The MRK looks up, down, left and right. At a target range, it fires its machine gun.

“Mobile robotechnic complex” or “mobile robotic system” (abbreviated variously as MRC, MRK, MPK or MRTK) is actually generic Russian terminology for mobile robots, mostly unarmed, made by several companies. MRK BH appears to be a variant of the MRK-002-BG-57, the sole robotics product listed on the website of the Izhevsk Radio Plant (IRZ), and demonstrated at the Russia Arms Expo last September. A few minor changes are visible; five small housings at the front and sides, possibly for cameras, have been removed, and replaced by four forward-directed headlight batteries and a swivel-mounted camera just below the gun mount (which has its own optical and infrared sighting cameras). A padded skirt, possibly armored, has been fitted around the chassis. IRZ reports the system weighs approximately 1 ton, and has a “cruising range” of 250 km at up to 35 km/hr., implying it is powered by a gasoline or diesel engine, audible in some of the videos. It also has batteries that run down after 10 hours, or 7 days in “sleep mode.” A Russian Wikipedia page says the chassis measures 1.7 x 3.7 meters.

All of the videos show the remote control unit, which reportedly has 5 km range. There is no doubt that the system can be fully controlled by a human operator. The interesting question is, Does it also have capabilities for lethal autonomy? If so, how do they work, and how might Russia use them? How does this compare with things that the US and its allies are doing?

How scary is that bear?

Kevin Fogarty in Dice, citing the same RIA and TASS reports, reported that the robots “are able to select and fire on targets automatically.” The word “select” here echoes the language of the Pentagon’s policy directive, which defines target selection, rather vaguely, as “The determination that an individual target or a specific group of targets is to be engaged.” Under the US policy, this is the key thing which a robot is forbidden to do – except when it is allowed to do it. Drawing the comparison more sharply, Hambling wrote 

These robots can detect and destroy targets, without human involvement…. Andreyev describes the robots as being able to engage targets in automatic as well as semi-automatic control mode. US policy, on the other hand, says a person has to authorise when weapons are fired. Drones don’t fire missiles on their own, but act as remote launch platforms for human operators.

First, while the statement about drones (today) is accurate, the one about US policy is not. The policy green-lights the development, acquisition and use of weapons which have every technical capability for autonomous target selection and engagement (which includes killing people). Certain general criteria have to be met, similar to criteria set forth for other high-tech weapon systems, and if the system is intended to be autonomously lethal, three senior officials must “ensure” that it meets those criteria. If it does, the system can be approved, and could be turned loose to hunt and kill humans.

In fact the US has a number of systems in use, such as Patriot, THAAD, C-RAM and Aegis, which can make engagement decisions autonomously. These systems are defensive, and do not specifically target humans, but Patriot killed pilots in friendly-fire incidents during the Iraq invasion, and Aegis, operating semi-autonomously, was involved in the Iran Air 655 tragedy. On the offensive side, missile systems are in development which are intended to identify their targets fully autonomously, and would kill people because they would target ships, tanks, planes, or… people. These missiles may be considered “semi-autonomous” under the policy, and thus exempt from senior review. Operators of these systems are supposed to apply tactics, techniques and procedures to ensure that the missiles don’t go after the wrong targets.

Furthermore, if a system is not intended to be used as a fully autonomous lethal weapon system, it can have every technical capability needed to do so, e.g. it can acquire, track, identify, and prioritize potential target groups, and control firing, and still be considered semi-autonomous provided it asks for permission before actually firing. Obviously, a trivial hack would make this a fully autonomous weapon system, and developing such systems means perfecting the technology of fully autonomous weapons.
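The "trivial hack" point can be made painfully concrete. The sketch below is deliberately crude pseudocode-in-Python – every name in it is hypothetical – but it shows how a system that acquires, tracks, identifies, and prioritizes on its own can differ from a fully autonomous weapon by a single bit:

```python
# Crude hypothetical sketch: a "semi-autonomous" system whose only
# difference from a fully autonomous one is a single permission check.

REQUIRE_HUMAN_CONSENT = True  # the "thin line": flip this one bit and the
                              # system becomes a fully autonomous weapon

def fire_control(target_id: str, human_consented: bool) -> str:
    # Acquisition, tracking, identification, and prioritization are assumed
    # to have already happened autonomously, upstream of this check.
    if REQUIRE_HUMAN_CONSENT and not human_consented:
        return f"awaiting authorization for {target_id}"
    return f"engaging {target_id}"

print(fire_control("contact-7", human_consented=False))
print(fire_control("contact-7", human_consented=True))
```

Nothing about the sensors, the algorithms, or the weapon changes when that constant is flipped; only the thin, unverifiable restraint on use is removed.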

OK, OK, but what about the bear?

Both Hambling and Fogarty appear to assume that the Russian statement about “automatic and semi-automatic control mode” means the same thing as the Pentagon’s “autonomous and semi-autonomous weapon systems.” But does it?

According to IRZ, the “Robotic system has an automatic capture and the ability to conduct up to ten goals in motion. The aim is held when moving the turntable by 360 degrees.” Another page states that it can “Automatically capture and manage objectives in motion (target is held while moving the turntable 360 degrees).” Together, these two statements indicate that “automatic mode” involves target tracking and automatic fire control while the robot is in motion, but not necessarily autonomous target acquisition and selection, in the Pentagon’s terms.

Further evidence can be extracted from the previously mentioned Russian TV reports of the April tests. There are actually two segments accessible at this url, which run consecutively. In the first, the suited man, IRZ Deputy Director General Alexei Slugin, explains that “for controlling the combat module, we have a touchscreen installed here, where we can set up to 10 targets with our finger, and the targets are then automatically followed in automatic mode… the targets are captured and held.”

In the second segment, the reporter talks to Slugin, then explains that “This is a touchscreen, where you can choose up to 10 targets, then hit the button allowing fire, and the machine will commence destroying the enemy.” He emphasizes that the machine performs “under strict control of the operator. The final decision about the destruction of the target is made by the human.” A normally-covered toggle switch apparently must be uncovered and thrown for the destruction to commence.

However, following him, RVSN officer Sergei Kotlyar explains that “All automated, automatic systems, especially military systems, are piloted only by a human, only the human makes the decision. Other than that, it is fully robotic, and when performing a narrow task, e.g., if it is known that the enemy is firing or the enemy is present, there it will be using firearms itself, calculating targets and firing at them.” This suggests that the system may, indeed, have some capability to autonomously select and engage targets. But this capability is likely crude and indiscriminate. The Russian military, like the US, is sensitive about this, and maintains that the fully autonomous lethality would be activated only in hot combat.

So does this mean Russia is the bad boy on the block?

Would fielding an armed UGV with a full lethal autonomy capability, even if that capability were normally not activated, be a significant step beyond the policies, practices and plans of the US and its allies? Not if the Pentagon policy directive is given a permissive reading – and note that, since the policy is only an internal DoD directive, the Pentagon is free to read it (or not read it) any way it chooses. It is also quite similar to what South Korea and Israel are doing with stationary sentry robots.

In fact, if there is any emerging international norm, it is towards what US Navy scientist John Canning called “dial-a-level autonomy,” i.e. systems that have both human control consoles and communication links for normal operation, plus full autonomous capability for when things get real. Thus Harpy, Israel’s loitering, fully autonomous, radar-hunting suicide drone is being superseded by Harop, which adds a two-way radio control link and electro-optical system. Similarly, following the cancellation of the fully autonomous LOCAAS loitering missile, Lockheed Martin offered SMACM, with the same multimode sensors capable of autonomous target recognition, but adding a two-way radio control link.

Hambling argues that the US has not actually deployed armed UGVs, autonomous or otherwise, despite their having been in development for “decades.” He cites the 2007 trial of three SWORDS robots in Iraq, which was “cancelled,” according to Hambling, due to “uncommanded or unexpected movements.” However, Popular Mechanics, whose initial report was widely misrepresented as having said the bots were withdrawn after swinging their guns at soldiers or even injuring them, later ran statements from SWORDS manufacturer Foster-Miller/QinetiQ saying that SWORDS was not cancelled; the government had never funded more than three of them. Furthermore, the “unexpected movements” had occurred in earlier, stateside tests, and the problems had been fixed. According to a 2013 report from the National Defense Industrial Association, the SWORDSes were in Iraq for six years and “performed a combat role,” specifically “perimeter defense.” It was claimed that they had “saved lives.”

SWORDS was never deployed as envisioned. They carried M249 light machine guns, but were placed in fixed locations, and did not move, according to reports in 2008. Operational concepts would have had them going around buildings to shoot at snipers or other enemy combatants, without exposing soldiers to deadly fire. However, senior military officials at the time did not feel comfortable using them in that manner, and they were placed behind sandbags.  [NDIA report]

There could be any number of reasons why the Army chose not to drive SWORDS out on Iraqi streets, including the risk of bad PR both locally and globally – especially if they ended up killing someone accidentally – and because their tactical usefulness is probably very limited. They are heavy, easily damaged, and would need to be carried over obstacles. Lacking peripheral vision, directional hearing, or the instinct and ability to react quickly, they would be sitting ducks in urban combat.

And today, MAARS, son of SWORDS, lives on. In recent years, it has been tested by the Marine Corps and by the Army, although no procurement contracts have been announced. We know that QinetiQ has sold at least one to the US government, and the wording of its MAARS homepage and data sheet strongly suggest that US Special Operations Command (SOCOM) is a prime target customer, so it would not be surprising if some larger number have been acquired under the black budget.

In that regard, Hambling also pointed me to yet another Russian YouTube video, this one a promo for FSB Spetsnaz, Russia’s own version of SOCOM. It shows a series of staged vignettes, which don’t even look like serious training exercises, in which shock troops kick the butts of Chechen-looking types. In the first sequence, they storm a house with the help of a small MAARS-like robot, suggesting that such weapons might already be a standard feature of their operational arsenal (although this one may be a movie prop). However, it seems extremely unlikely that gunbots would be firing autonomously in such a scenario, with troopers running in and out and around, nicely choreographed in the video but more likely frantic in real life.

Is there any conclusion?

So now, we have a video which suggests that Russian special ops may have at their disposal small armed UGVs similar to the ones that American special ops may have had since at least 2008, and we have official reports that Russian missile troops are testing armed UGVs in a role similar to the one that American troops tested for six years in Iraq. Does that make Russia a world leader in killer robots?

Arguably it might, if in fact the Russian missile site robots are substantially larger in number or herald an imminent rapid expansion in the use of such systems, or if the RVSN intends to routinely turn the robots loose with full lethal autonomy.

But the latter seems very unlikely, both because the Russians deny it and because, if you did have MRKs patrolling missile bases in fully autonomous lethal mode, they would, with near certainty, kill people. They would kill drunken soldiers, officers whose drivers misread the map, and technicians sent to find out why an MRK was unresponsive. Sooner or later, and probably sooner, somebody would get shot. There is no good reason why the RSVN would do this. It would be insane.

It may also be premature to assume that current testing of the MRK BH will lead to its permanent adoption by the RSVN, let alone a major and rapid expansion of UGV deployment by the Russian military. The aforementioned Russian Wikipedia article (6 May 2014) is openly scornful of the MRK BH, saying it is poorly designed “to work as part of the division tactical units,” that IRZ lacks experience in military robotics, and that the design harks back to Russian and German remotely operated tanks of the 1940s. While this is only anonymous commentary, it suggests that at least some Russian military observers are skeptical and that the MRK BH may have drawbacks that will cause RSVN to reject it.

It does appear that overall, Russian industry is still sadly mired in post-Soviet mediocrity, while Putiniks are now demanding modern technology. IRZ seems to have rushed a crude model into early production, and it has been put on display as a symbol of new progress, but what it mostly reveals is not so much Russian aggression (the military officers interviewed in the videos all seem ambivalent) but rather a grim determination to meet whatever challenges a new day brings. If there is going to be a robotic arms race, Russia will not take last place in the world.

Meanwhile, the MRK BH is already world famous, so it may as well become a source of pride for Russian military robot enthusiasts. For example, the weekly “Military Industrial Courier” reported that “Russia took the leading position in the world in the key area of advanced weapons – ground combat robots.” Its source? David Hambling’s story in “the influential British science weekly” New Scientist.

I am deeply indebted to Tanya Lokot for translation of the Russian videos.

Bursting one tiny bubble of emerging military tech hype

“Two University of Michigan scientists are on the way to developing night vision contact lenses,” read the tweet, posted by a senior fellow for defense policy at a top think-tank, and retweeted by dozens, including a level-headed defense technology journalist famous for her own investigations into federally-funded fringe science and folly. I figured there must be something to this, so I clicked through to the story, whose headline made the even more surprising claim “Scientists Develop Night Vision Contact Lens.”

Below the headline, a photo showed some guy pinching a contact with what looked like printed wiring on it. The photo was my first clue that something was screwy, because I’d seen that photo before; it was the Google lens (google it), a contact lens outfitted with a glucose sensor for diabetics.

The story claimed that the scientists had “developed a prototype contact lens that enhances night vision by placing a thin strip of graphene between layers of glass. The graphene — a form of carbon — reacts to photons, which makes dark images look brighter.” What were they claiming—that the graphene, like a laser, simply amplified the light? That didn’t make any sense, and neither did any other interpretation, because a contact lens is roughly located at the aperture of the eye, not the focal plane. The wearer would see only a blur [Note 1].

I thought this had to be an April Fool’s joke. But a quick googling of “Michigan graphene contact lens” found essentially the same story reported at an indeterminable number of sites around the web. Most of these stories seemed to rely on two sources: The University of Michigan’s press release from March 16, and IEEE Spectrum’s report from the next day, which drew on the press release and also referenced the paper in Nature Nanotechnology by professors T. B. Norris and Zhaohui Zhong, published online on the 16th. Sadly, Spectrum seemed to be the source of the claim that what Norris and Zhong had done was to actually “create their infrared contact lens.” But unsurprisingly, Zhong himself, if accurately quoted by the press release, had been the originator of the “contact lens” hype.

What the researchers actually created was far more mundane: a single-pixel test cell using graphene. The authors state that graphene “is a promising candidate material for ultra-broadband photodetectors, as its absorption spectrum covers the entire ultraviolet to far-infrared range.” It has been studied for this role since at least 2008, according to the references. However, its sensitivity in past experiments has been poor. Norris and Zhong have improved on this with some clever quantum electronics engineering [Note 2]. The result is “room-temperature mid-infrared responsivity comparable with state-of-the-art infrared photodetectors operating at low temperature.” But not quite “superhero vision,” as Discovery News put it.

Norris and Zhong’s research is important because it shows that graphene can potentially be used to create very compact thermal imaging sensors. “We can make the entire design super-thin,” says Zhong in the press release. “It can be stacked on a contact lens or integrated with a cell phone.” But first, they have to make an actual imaging sensor. They think they might be able to do something in a few years. It will require integrating patterned, doped graphene with conventional silicon lithography, a major challenge in the lab, let alone for industrial production.

And while a small camera, even one small enough to fit into a phone, is not an unreasonable goal for perhaps a decade down the road, the idea of putting this into a contact lens is hard science fiction at best. To do this, you would basically have to make the entire camera, or perhaps an array of them, small enough to fit onto the contact lens. Plus, you would need a micro-projector, or again perhaps an array, small enough to fit under the camera(s), which would project the image as visible light onto the retina. Plus electronics and a power source, all small enough to fit on a contact lens. If Norris and Zhong knew how to do all this – let alone how to manufacture such a thing – I would not hesitate to declare them the greatest engineers in history.

But you gotta have vision, and you gotta have funding. These days, that means you gotta have hype. And hype echoes through the media machine, from press release to sloppy science reporting to outlets like Popular Science, CBS News, Independent, Huffington Post, and scads of fleas with smaller fleas on them.

To be fair, not all of these reports are technically inaccurate, but they all serve up the hype, because the hype is the story. The actual news is just an incremental advance in one of many ways of converting infrared radiation to an electrical signal. If some day we have the technology to make a night vision contact lens, it might make use of Norris and Zhong’s graphene trick. But that is not particularly likely, since there are so many other possible schemes. To say that their work opens the way, or is the first step which may some day lead to such a capability, is simply not true. I’m not sure where the line lies where hype is so inflated that it becomes lying. But I feel that in this case it has been crossed.

Why is this worth my time to write, and yours, dear reader, to read? Because it’s a paradigm example of how the emerging tech, especially emerging military tech, hype machine works… and while this little bubble is rather insignificant in itself, it sits atop a churning foam of bubbles big and small, some of them big enough to have real consequences. Things like laser weapons, “Iron Man” body suits, missile defense…

The inability of professional journalists, analysts and bureaucrats to separate hype from realism and sense from nonsense in military technology, and the relative absence of critical technical review, leads not only to massive waste but also to destabilizing suspicions and needless arms races which lead to … what? Nothing good, that’s for sure.

Note 1: In a camera (eye), light arrives at the aperture (iris) from every direction, and the camera geometry sorts out light coming from different directions. A detector located at the aperture, e.g. in a contact lens, would be exposed to light from all directions, and could not form an image. Also, a tiny display screen mounted in a contact would flood the eye with its light and produce a uniform blur. So neither end of this works. What you’d need is an even tinier camera, and a tiny projector (onto the retina). Since fitting them into a contact means they have to be very small, probably you’d want arrays of each in order to collect and project enough light. Physics does not obviously rule this out, provided you’re OK with low resolution. But making such an object is clearly well beyond present capabilities. Just having a suitable candidate for a detector is such a tiny part of the problem that it’s practically irrelevant to whether or when such a technology might be realized.
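The no-image argument can be put in back-of-envelope terms (a sketch of standard radiometry, not anything from the paper). For a bare detector with no imaging optics in front of it, the irradiance at any point is an integral of the scene radiance over the whole incoming hemisphere:

```latex
% Irradiance on a bare (lens-less) detector element at the eye's aperture:
% integrate the scene radiance L over the hemisphere of incoming directions.
E \;=\; \int_{2\pi} L(\theta,\varphi)\,\cos\theta \,\mathrm{d}\Omega
% For a distant scene, L(\theta,\varphi) is the same at every point on the
% detector plane, so E is position-independent: every "pixel" reads the
% same value, and no image can form.
```

This is exactly why an image sensor has to sit at a focal plane behind a lens or pinhole: the optics map incoming direction to position on the sensor, and without them the directional information is integrated away.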

The other thing you might imagine is that the graphene acts as a laser. Light from any direction might then be amplified and keep going in the same direction. So, you put this magic graphene anywhere – in the eye, on top of it, in a pair of glasses or a car windshield – and it just multiplies the light. However, the physics of this fantasy is wrong in fundamental ways, and in fact it has nothing to do with Norris and Zhong’s device.

Note 2: Graphene works as a photodetector much like other semiconductors: photons excite otherwise immobile electrons and create mobile electron-hole pairs. This results in an increase in the electrical conductivity of the material, which can be measured by passing a current through it. Alternatively, a P-N junction can be formed from two layers with different doping, hence a different affinity for the positive holes and negative electrons. The + and – charges are thereby separated, creating a voltage and current (hence, power) source, as in a solar cell. Norris and Zhong’s invention is a two-layer device, but instead of tapping the photogenerated power directly, the upper layer is left isolated, and the conductivity of the lower layer is measured by passing a current through it. Holes then accumulate in the upper layer, and their electrostatic effect on the lower layer, like the gate in a field effect transistor, creates a large change in its conductivity.
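As a hedged sketch of the two schemes just described (symbols are mine, not taken from the Nature Nanotechnology paper), the field-effect readout amounts to what device physicists call photogating:

```latex
% Direct photoconduction (the conventional scheme): photogenerated
% carrier densities \Delta n, \Delta p raise the conductivity directly.
\Delta\sigma = e\,(\mu_n \Delta n + \mu_p \Delta p)
% Photogating (the two-layer scheme): holes trapped in the floating top
% layer, with sheet density p_s, shift the effective gate voltage on the
% bottom channel by
\Delta V_g = \frac{e\,p_s}{C}
% where C is the interlayer capacitance per unit area; the measured
% channel current then changes by
\Delta I = g_m\,\Delta V_g
% with g_m the channel transconductance.
```

The payoff is gain: each trapped hole can modulate many channel carriers during its lifetime, so a small absorbed flux produces a large current change, which is how a weakly absorbing material like graphene can reach competitive responsivity.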

UPDATE: Since I first posted this, WIRED has become the latest (and if not the saddest, then it’s a tie) drinker of the UMich Kool-Aid. Mostly rehashing the same lines from the original press release as the other stories, WIRED also reports that Norris and Zhong say something about “car windshields to enhance nighttime driving,” which makes about zero sense. Additionally, WIRED links to a 3-year-old blurb which suggests that “cat vision” contacts already exist, and were used in the bin Laden raid. The story has so many things wrong with it, I won’t bother to get started. The point, again, is that the tech world, and especially the military tech reporting world, is rife with this stuff, and sadly, so is the world of policy analysis and of actually funded R&D.

UN Presentation

I am posting a .pdf copy of a presentation I gave to the UN Secretary General’s Advisory Board on Disarmament Matters (6 March) in New York. The designated topic was Emerging Technologies, but the presentation was mainly about the most important emerging technology topic of the day, autonomous weapons. It was very well received by the ABDM, which had originally given me a one-hour slot but continued the session for more than an hour beyond that, with extensive discussion.