Autonomy without Mystery: Where do you draw the line?

The words “autonomy” and “autonomous” can take on various shades of meaning and lead into fascinating discussions in various contexts, including philosophy, politics, law, sociology, biology and robotics. I propose that we should ignore all of this in the context of autonomous weapon systems (AWS).

We should also not speak of “degrees” or “levels” of autonomy, but seek principles and definitions that point, as directly as possible, to a Yes or No answer to the question “Is this an autonomous weapon?”

In fact, the US Department of Defense has given us a firm foundation for such a definition in its Nov. 2012 Directive on Autonomy in Weapon Systems. According to DoD Directive 3000.09, an autonomous weapon system (AWS) is

A weapon system that, once activated, can select and engage targets without further intervention by a human operator.

I propose that this should be universally accepted as a starting point.

What remains is to clarify the meaning of “select and engage targets” – and also, to negotiate which autonomous weapons might be exempted, and under what conditions, from a general ban on AWS.

Select and Engage

“Engage” is not too hard. Weapons engagement of a target means the commission of an individual violent act, such as the firing of a gun, the dropping of a bomb, or the crashing of a missile into a target, with or without the detonation of a warhead.

It is thus potentially a more fine-grained term than “attack” as used in international humanitarian law (IHL), the law of war. An “attack” might encompass many individual engagements, and accordingly an attack decision by a commander may authorize many engagement decisions, yet to be made, by individual soldiers or subsidiary commanders.

As used in DoDD 3000.09, engagement includes both lethal and non-lethal effects, against either personnel or materiel targets.

“Target” is slightly more difficult, if only because the status of an object as the target of a weapon is an attribute of the weapon system or persons controlling and commanding it, not of the object itself. When exactly does an object become a target?

Fortunately for present purposes, the word “target” appears in the Pentagon’s definition only as an object of the verbs “select” and “engage.” Therefore, we can say that an object becomes a target when it is selected; before that it may be a “potential target,” another term which appears in DoDD 3000.09.

Of course, selection must occur prior to engagement; an object harmed without having been selected is called “collateral damage,” be it a house, a garden, or a person.

The word “select” is where the serious issues hide, and where the clarity of DoDD 3000.09 begins to break down.

Please Select a Target

The Directive defines “target selection” thus:

The determination that an individual target or a specific group of targets is to be engaged.

But what constitutes “determination”? Is this an intelligence finding, a command decision given to the operator of a weapon system, or is it the operator’s command given to the system? It could be any or all of these, but if the operator is the final human in the so-called “kill chain” or “loop,” (and the one person in whom all of these functions may be combined), then the transaction between the operator and the system is where we should focus our attention.

The classic way for a weapon system operator to determine that a target is to be engaged is to throw a rock or spear toward it, thrust a sword at it, or aim and fire a gun at it. The weapon may go astray, but there is no question that the intended target is selected by the human.

One may imagine that in a remotely operated, non-autonomous weapon system, target selection might mean that humans watch a video feed, and see something that looks, to them, like a valid military objective. A human commander determines (legally and operationally) that it is to be engaged. Using a mouse, joystick or touchscreen, a human operator designates the target to the system. This, plus some other positive action to verify the command, determines – materially – that the target is to be engaged.

However, DoDD 3000.09 admits a far more permissive interpretation of “target selection.” This becomes clear from its definition of “semi-autonomous weapon system (SAWS).”

Two types of SAWS are described. The first are

systems that employ autonomy for engagement-related functions including, but not limited to, acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.

This suggests that “target selection” could mean as little as giving a “Go” to engage after the system places a cursor over potential targets it has acquired, tracked, identified, and cued to human operators. Indeed, it is not at all clear what “human control… over the decision” means in this context.

The Meaning of Human Control

Noel Sharkey has discussed “meaningful human control” in terms of five “levels of control”:

  1. human deliberates about a target before initiating any attack
  2. program provides a list of targets and human chooses which to attack
  3. program selects target and human must approve before attack
  4. program selects target and human has restricted time to veto
  5. program selects target and initiates attack without human involvement

Sharkey focuses on deliberation as the thing that fulfills the requirements of IHL and makes “human control” meaningful. In the first level, the human “deliberates” and, like a rifleman, “initiates” the attack. In level 2, the human is more in the position of a senior commander, reviewing a “list of targets” and choosing among them. If no choice is made, there is no attack. However, in view of historical experience and “automation bias,” Sharkey concludes that the meaningfulness of this level of control is contingent. Machine rank-ordering of targets, for example, would tip the scale as “there would be a tendency to accept the top ranked target”. Insufficient information, time pressure, or a distracting environment could also render this level of human control meaningless.

Sharkey rules out level 3 and higher levels of autonomy as not meeting the requirement for human deliberation. However, it is not clear how “program selects target” differs from “program provides a list.” The list would probably not be on paper. Could it be a “gallery” like a dating site? Or a textual list with hyperlinks to intelligence reports, summaries of which open up automatically when the operator hovers over the list item? Could the system provide both a visual display of targets, and a textual list? Wouldn’t the visual display likely convey more information than just a short verbal description?

The best interpretation of the difference between levels 2 and 3 is that in the former, the human “decision maker” is being told “You decide what to target; here are some possibilities.” In the latter, the human is being told “Here are the targets; which ones should be hit, or not?” It’s a subtle difference, and in every technical and operational respect, the two situations might be identical.
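To make the sameness concrete, here is a purely hypothetical sketch (invented function names and data, not drawn from any real interface): the same interaction loop can present either framing, with nothing but the prompt text changed.

```python
# Hypothetical illustration: Sharkey's levels 2 and 3 rendered as one interaction loop.
# Only the wording of the prompt differs; the code path is identical.

def present_candidates(candidates, framing):
    if framing == "level_2":
        print("Possible targets -- you decide what, if anything, to attack:")
    else:  # "level_3"
        print("Targets selected -- approve or reject each before any attack:")
    for i, c in enumerate(candidates):
        print(f"  [{i}] {c['label']}  (confidence {c['score']:.2f})")

def get_human_choice(candidates):
    raw = input("Enter indices to engage, or press Enter to hold fire: ")
    return [candidates[int(tok)] for tok in raw.split()]

candidates = [{"label": "vehicle, northern checkpoint", "score": 0.87},
              {"label": "structure, river crossing", "score": 0.74}]
present_candidates(candidates, framing="level_2")   # the same code serves "level_3"
approved = get_human_choice(candidates)
```

Whether the operator experiences this as choosing or as approving depends on training, doctrine, and the prompt, not on the machinery.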

Thus, while the focus on human deliberation – and the factors that might encourage and assist, or discourage and hinder it – is important and essential, the effort to enumerate and order a definitive hierarchy of autonomy levels once again leads to further questions.

The desired number of levels is two. We want to know which systems are autonomous.

Similarly, we should seek a definition of “human control” which is free of degrees of meaning. Weapons and conflict must always be under human control, period. Given any configuration of technical systems and human beings, we need to be able to decide whether this principle is satisfied, or not.

Of course, people may have more or less control of a machine, as becomes clear when we think of situations that arise with automobiles or nuclear power plants. But again, it is probably not possible to iron out all the dimensions of complexity to create a definitive hierarchy of control levels. Is it clear that the console providing more data and more knobs equates to more control? Perhaps, in general, but probably not always.

On the other hand, the operators of such machines accept moral and legal responsibilities to maintain control, and are held accountable for the consequences if they fail to do so.

I would propose, similarly, that acceptance of responsibility should be the standard for human control. Thus the principle of human control corresponds with the principle of human responsibility.

Just as commanders are not responsible for crimes committed by soldiers under their command, unless the commander directly ordered the criminal actions, so humans cannot accept responsibility for decisions made by machines. But if humans are to make the decisions, it is their responsibility to maintain control of the machines.

This makes it the responsibility of a human commander to determine whether she has sufficient information to make an attack decision, independent of any machine judgment. It also makes it the responsibility of a human operator to determine whether the system gives him effective control of its actions. Both should be held accountable for any “accidents.” It is not acceptable to say “the trigger was too sensitive” or “the computer said it was a valid target.”

This also suggests a system of accountability as a possible basis for the implementation of an AWS ban and verification of compliance with it.

Short-Circuiting Arms Control

The Pentagon’s distinction between semi-autonomous and autonomous weapons would also fail to support meaningful arms control.

The first type of SAWS includes systems which have every technical capability for fully autonomous target selection and engagement, requiring only a one-bit signal, the “Go” command, to initiate violence. A SAWS of this type could be converted to a fully autonomous weapon system (FAWS = AWS in the Pentagon’s lexicon) by a trivial modification or hack. Thus, permitting their development, production and deployment would mean permitting the perfection and proliferation of autonomous weaponry, with only the thinnest, unverifiable restraint remaining on their use as FAWS.
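To illustrate how thin that line is, consider a minimal, entirely hypothetical control loop (the names are invented and the logic is schematic, not drawn from any actual system): the semi-autonomous and fully autonomous versions differ only in whether one confirmation call is awaited before firing.

```python
# Hypothetical sketch of the "one-bit" difference between a SAWS and a FAWS.
# Everything upstream of the final gate -- acquire, track, identify -- is identical;
# only the require_human_go flag changes.

def engagement_loop(acquire_and_track, identify, human_confirms, engage,
                    require_human_go=True):
    for contact in acquire_and_track():
        target = identify(contact)
        if target is None:
            continue                      # not recognized as a potential target
        if require_human_go and not human_confirms(target):
            continue                      # semi-autonomous: wait for the operator's "Go"
        engage(target)                    # with require_human_go=False: fully autonomous
```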

For the second type, there is not even a thin line to be crossed.

Fire and Forget

The second type of SAWS in DoDD 3000.09 comprises

“Fire and forget” or lock-on-after-launch homing munitions that rely on TTPs to maximize the probability that the only targets within the seeker’s acquisition basket when the seeker activates are those individual targets or specific target groups that have been selected by a human operator.

This means that SAWS, as defined by the Pentagon, would include a “homing munition” that, at some point after its launch, without further human intervention, acquires (i.e. detects the presence of, and roughly locates, via its seeker) a potential target or targets, thereafter tracks it or them, identifies it or them (either as particular objects or members of targeted classes, based on some criteria), and determines that one or more of the potential targets it has acquired and is tracking is to be engaged.

However, the definition (and thus the policy) does not regard this as “target selection” by the weapon system. Rather, selection is deemed to have occurred when the human operator launched the weapon, relying on tactics, techniques and procedures (TTPs) to “maximize the probability” that there are no “targets [sic]” which the seeker might mistake for the targets “selected” (i.e., intended) by the human.

The “seeker’s acquisition basket” may be regarded as a product of two functions (which may be further decomposable): the seeker’s inherent response to various objects – as viewed from various directions and distances under various conditions – and any constraints in space and time which are applied by the operator.
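Read that way, membership in the basket is simply the conjunction of the two functions. A hedged, purely illustrative sketch (all names and values invented):

```python
# Hypothetical decomposition of the "acquisition basket": an object falls inside it
# only if the seeker's inherent response would flag it AND it lies within the
# operator-imposed constraints in space and time.

def in_acquisition_basket(obj, seeker_responds, within_constraints):
    # seeker_responds: sensor physics plus onboard processing (object, aspect, conditions)
    # within_constraints: flight path, kill box, or activation window set before launch
    return seeker_responds(obj) and within_constraints(obj)

# Toy example: a seeker that responds to anything "ship-like", restricted to a range band.
ship_like   = lambda obj: obj.get("radar_class") == "ship-like"
in_kill_box = lambda obj: 10.0 <= obj.get("range_km", 0.0) <= 40.0

print(in_acquisition_basket({"radar_class": "ship-like", "range_km": 25.0},
                            ship_like, in_kill_box))    # True
```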

For example, a “beyond visual range” (BVR) anti-aircraft missile such as AMRAAM or a “beyond line of sight” (BLOS) anti-ship missile such as LRASM (under development by Lockheed Martin for DARPA) follows a flight path roughly predetermined at the time of its launch, or it may receive updates telling it to follow a slightly different course, on the basis of new information that the target(s) have moved.

At some point, it begins looking for the target(s), that is, “the seeker activates.” This may in fact be a multistage process, as the missile approaches its target(s) and passive radar, active radar, IR and optical sensors successively come within range and tactical appropriateness (e.g., active radar may be avoided for reasons of stealth).

In effect, the seeker’s search area is restricted by the flight plan of the missile, which may also be programmed to deactivate if it fails to locate an appropriate target within a designated area or time frame.

Alternatively, a “loitering missile” or “wide area search munition” may be instructed to search within a designated “kill box.” Examples include Israel Aircraft Industries’ Harpy “suicide drone,” which searches for enemy radars based on the signatures of their signals, and Lockheed Martin’s canceled LOCAAS, which was intended to autonomously recognize and attack a variety of target types, including personnel.

This is the meaning of “rely on TTPs”; the operator is supposed to follow procedures which minimize the probability that objects which the seeker might mistake for an intended target will be found within the area and time in which the missile will be searching.

Thus, if the seeker is not itself capable of distinguishing a cruiser from a cruise ship, a tanker plane from a jumbo jet, or a soldier from a civilian, it is the weapon system operator’s responsibility to follow procedures that at least minimize the probability that the wrong kind of object will be found within the seeker’s acquisition basket.

In practice, seekers will, either explicitly or implicitly, map their sensor data, via some computation, to a Bayesian probability that a potential target is an intended target. Since the missile must either engage that target or not, some threshold will have to be set for the decision to identify a target.
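In schematic terms (a hedged sketch of the logic just described, not any real seeker’s code), the decision reduces to a threshold test plus a tie-breaking rule when more than one candidate clears it:

```python
# Hypothetical sketch of the threshold decision: each candidate inside the
# acquisition basket has already been mapped, explicitly or implicitly, to a
# probability that it is an intended target.

def choose_engagement(candidates, threshold=0.9):
    """candidates: list of (track_id, probability) pairs."""
    cleared = [(tid, p) for tid, p in candidates if p >= threshold]
    if not cleared:
        return None                          # keep searching, or eventually abort
    # If several clear the threshold, some rule must pick one -- here, highest probability.
    return max(cleared, key=lambda tp: tp[1])[0]

print(choose_engagement([("track-07", 0.62), ("track-12", 0.94), ("track-15", 0.91)]))
# -> track-12
```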

Once the decision to engage a particular target has been made, the missile may be considered to have “locked on” to the target, which will be continually tracked without ambiguity. No new decision need be made, although complications may arise if the system is programmed to abort the engagement under certain conditions, such as the collection of further information about the target or the possible presence of civilians.

Simpler missiles, in contrast, must “lock on” before launch, under direct human control and with a human operator selecting the target. If the target’s evasive countermeasures succeed, such a missile may “break lock,” and generally cannot then re-acquire the target.

The Necessity of Decision

From this discussion, it becomes clear that the “lock-on-after-launch homing munition” does, in fact, make lethal decisions autonomously – it must decide whether to accept particular sensor data as the signature of an intended target and engage it (or, if there is more than one candidate, which to engage); otherwise it may continue searching, or decide to abort its mission.

It must make these decisions without further intervention by a human operator. The authors of DoDD 3000.09 may have wished to avoid using the word “select” in this context, but particularly in the case that there are multiple objects which might be engaged, it is clearly the weapon that makes the final determination that a particular potential target “is to be engaged.”

It also becomes clear that the system which autonomously decides whether, and which potential targets to engage, is not being controlled by humans when it does so. It is controlled by environmental conditions which may be somewhat unpredictable, and by its programming – that is, by itself. It is an autonomous system. There is no mystery in this; it is simply an operational fact.

Some may argue that the program was written by humans, but this position makes machine autonomy impossible, since machines owe their entire existence to humans. Besides, it is by no means certain that, even today, let alone in the future, software will be written and systems engineered solely by humans. In reality, the complexity of present-day technology already precludes meaningful human control or accountability for aberrant behavior. No one is going to be court-martialed for a bug.

The classification of “fire-and-forget or lock-on-after-launch homing munitions” as semi-autonomous may be convenient for the Pentagon, and some AWS ban proponents may prefer to accept it as well, in order to avoid the fact that many existing missile and some underwater systems fall into this category. However, this is an attempt to avoid the unavoidable, because not only do these systems already cross the line of de facto autonomous target selection and engagement, but there is no recess in the domain of fully autonomous weapons which is not accessible by further extension of this paradigm. To put it bluntly and colorfully, this is a loophole The Terminator could walk through.

Inappropriate Levels of Robot Judgment

A missile like LRASM may not fulfill the popular conception of a “killer robot,” but as depicted in a video published by Lockheed Martin, it is designed not only to autonomously maneuver around “pop up threats” posed by enemy radar and antimissile systems, and autonomously recognize the “hostile surface combat group” it was sent to attack, but also to discriminate the various ships in that group from one another, and determine which to engage. Several missiles might be sent in a salvo, and might coordinate with each other to determine an optimum target assignment and plan of attack.

The key fact is that these missiles would make final targeting decisions, based on fairly complicated criteria. DoDD 3000.09 places no upper limit on the sophistication of the sensors and computers or complexity of the algorithms and criteria for “lock-on-after-launch,” nor are “homing munitions” obviously limited to flying objects. If ships may be identified on the basis of signatures in their radar returns, might targeted killing not be carried out on the basis of face recognition and other biometrics?

More generally, might combatants not be identified by signatures of their behavior, dress, or any other cues that human soldiers rely on to distinguish them from civilians? Suppose a robot is sent out that is programmed to fire upon anyone who matches the biometric signatures of certain known hostile persons, or who is determined to be taking specific hostile actions. Is this not a fully autonomous weapon system?

The central conceit of DoDD 3000.09 is that “target selection” will be done only by humans or by autonomous weapon systems which are either anti-materiel weapons with non-kinetic effects, or else have been explicitly approved by senior officials; and in all cases, commanders shall exercise “appropriate levels of human judgment in the use of force.” But is there any limit to the levels of robot judgment that may be appropriately exercised by “semi-autonomous weapon systems”? DoDD 3000.09 provides no answer.

The negotiation of limits on SAWS is thus unavoidable if the project of a ban on FAWS is to succeed. It must be acknowledged that the distinction of the first kind of SAWS from FAWS is so thin as to be meaningless, both from the point of view of meaningful human control and of effective arms control. The distinction of the second kind of SAWS from FAWS is nonexistent.

The Line Drawn

I conclude that the first kind of SAWS must be subject to restrictions and accountability requirements if a ban on FAWS is to be meaningful; and the second type of SAWS must be acknowledged as actually a type of FAWS.

Is it too much to ask for a ban on missiles like AMRAAM and the autonomous Brimstone [see endnote], which already exist and have been used in war, or even missiles like LRASM which are close to reality? Perhaps, but arms control in the past has often entailed the scrapping of existing systems. Israel has already offered a successor to Harpy called Harop, which adds an electro-optical system and two-way control link so that the system is not necessarily fully autonomous.

It is also possible to say “No autonomous weapons, except these….” If it turns out political circumstances dictate that certain things must be grandfathered, the negotiation of explicit allowances and limits is preferable to leaving open a road to the perfection and proliferation of autonomous weapons of every kind. We should not accept a diplomatic fig leaf behind which an autonomous weapons arms race can rage unrestrained.

The point of such negotiations should not be to draw a line defining autonomy; that has already been done. A system is autonomous if it is operating without further human intervention. But a “ban on autonomous weapons” might provide explicit exceptions either for particular systems or particular classes of systems, based on detailed descriptions of their characteristics. Philosophical clarity is not the issue, for we have already achieved that. From this point forward, it’s just old-fashioned horse trading.

 


Endnote. The Peace Research Institute Oslo (PRIO) has recently entered this discussion with a briefing paper on “Defining the Scope of Autonomy: Issues for the Campaign to Stop Killer Robots,” authored by PRIO researcher Nicholas Marsh, whose past work has focused on the arms trade and small arms in civil war. In this report, Marsh discusses at length the problem of fire-and-forget missiles such as MBDA’s Brimstone, which has both man-in-the-loop and fully autonomous modes. The latter capability, Marsh points out, has already been used in the Libya campaign.

Marsh also discusses the British Ministry of Defence’s statement about autonomous weapons, contained in its 2011 Joint Doctrine Note on The UK Approach to Unmanned Aircraft Systems. I think he places a bit too much emphasis on the statements on page 2-3 of the report about autonomous systems being “capable of understanding higher level intent and direction” and being “in effect… self-aware” and “capable of achieving the same level of situational awareness as a human.” Whatever the authors of these statements had in mind – before the emergence of this issue into the public sphere – the British government has since issued statements indicating that it understands the issue of autonomous weapons in terms of human control, and that “the MoD currently has no intention to develop systems that operate without human intervention in the weapon command and control chain”. As Article36.org points out, this still leaves a good deal of ambiguity and wiggle room for the MoD, which is, as Marsh demonstrates, actually pursuing weapons and policies similar to those of the United States.

Marsh, like the Campaign itself to a large degree, grounds the issue in humanitarian disarmament, which he states “is designed to reduce human suffering rather than to manage the affairs of great powers.” Yet this prescriptive distinction is evidently problematic, since if humanitarian disarmament is to be effective in reducing human suffering, it must effectively restrict the actions of powers great and small; nor are these restrictions very different, other than in the particular weapons covered, from treaties not framed as ‘humanitarian’ in origin, such as strategic or conventional arms limits and bans.

In fact, most of Marsh’s paper points, whether intentionally or not, to the inadequacy of a model of humanitarian disarmament rooted only in the landmines and cluster munitions bans. These treaties do a great deal to reduce civilian suffering in war and its aftermath, but their restriction on the freedom of action of the major powers is relatively marginal – and those same powers nevertheless have largely refused to sign on, although their actions are to some extent inhibited by the norms that have been established.

In the case of autonomous weapons, we are seeking to call off the next great arms race.

Autonomous weapons are potentially the most important and foundation-shaking development in military and global security affairs in decades. They are not merely a threat to human security from their potential for indiscriminate slaughter, but are even a threat to international peace and security.

Therefore, we must address the issue in its widest scope, and on the basis of the strongest possible foundations: the principles of human control, responsibility, dignity, and sovereignty, and that conflict is human – and must not become autonomous.

Is Russia leading the robot arms race? Not really.

Is that an autonomous bear in the woods?

Russia has been getting a lot of bad press lately, much of it richly deserved, IMHO. Since this opinion is widely shared, it might be tempting to try to pin every kind of villainy on Vladimir Putin, especially if the goal is to vilify the villainy by associating it with a known villain, rather than the other way around.

So let me be clear: this isn’t about fairness, and it doesn’t reflect any enthusiasm on my part for Russian aggression nor, lord knows, for autonomous weapon systems (AWS). But given the evidence that I have seen, I think it would be a bit premature to credit Russia, as David Hambling did in New Scientist, with “taking the lead in a new robotic arms race” while the “squeamish” West holds back, or to accuse Russia of “norm anti-preneurship” aimed at disrupting “international norm-building efforts to regulate the deployment of fully autonomous weapons,” as UMass Prof. Charli Carpenter did in her blog, citing Hambling’s report. [Note: After I emailed her about the unreliability of Hambling’s spin, Prof. Carpenter updated her post to reflect “uncertainty about the nature of Russia’s actual announcement”. Besides, she’s done great work on this issue.]

No, I’m not out to defend Russia, which, if not actually taking the lead, is openly declaring its intent not to be left behind. Last June, Deputy Prime Minister Dmitry Rogozin, in charge of the defense industry, announced the decision to establish a national laboratory for military robotics, presumably similar to the US Navy’s Laboratory for Autonomous Systems Research, which opened in 2012. Rogozin was previously associated with the push to create a Russian version of DARPA, the agency that oversees a great deal of the Pentagon’s cutting-edge robotics research. In December, Rogozin announced plans to spend $640 billion on military modernization through 2020, with an emphasis on robotics. Most disturbingly, in a March 21, 2014 article in Rossiyskaya Gazeta, Rogozin summoned Russian industry to “create robotic systems that are fully integrated into the command and control system, capable not only of gathering intelligence and receiving [data] from the other components of the combat system, but also of striking on their own [emphasis added].”

The risk here is that tarring Russia as the Mordor of the robot orcs will contribute to the misperception that the US and its allies are any less bullish on AWS than Russia has lately advertised itself as being. This could be harmful in two ways: It could bolster America’s own full-speed-ahead policy for development and use of AWS – which, contrary to popular belief, does not slow the development of fully autonomous weapons – and it could push Russia into a defensive corner in diplomatic discussions, such as those upcoming next week in Geneva (which both Charli and I will be attending). Russia will be a participant in those discussions, and previous Russian statements at the UN have been supportive of emerging public concerns about the issue, though not any particular norms.

I can hear it already: “You want to ban killer robots? Tell that to Putin!” Of course, China also serves this rhetorical purpose, and those wishing to point the finger that way can cite Lijian, a stealth drone that looks a lot like the X47B but as far as we know has not yet landed on an aircraft carrier, or Norinco’s new GS1-AT “smart” cluster munition, which resembles an artillery version of the sensor-fuzed weapon that the US has had in service since the 1990s.

In comparison, despite a history with drones dating to the 1950s, Russia “lags behind other militaries in building unmanned aerial combat vehicles, according to U.S. officials”—and when Bill Gertz reports that, it’s reliable. In June, Defense Minister Sergei Shoigu bemoaned Russia’s “technological backwardness” and “shortage of skilled personnel” and called Russian military robots “inferior to their foreign analogs.” Putting an optimistic spin on things, one Russian official told Pravda.ru “From the point of view of theory, engineering and design ideas, we are not in the last place in the world.” Not satisfied with that, Defense Minister Shoigu called for doubling the pace of “developing the combat robotic equipment.”

Russia is expected to soon begin testing an armed drone comparable to the Reaper, but it is not expected to be deployed before 2016. I have not found any reports of actual Russian unmanned maritime systems, only talk. In unmanned ground vehicles (UGVs, appropriately enough), the Soviets first experimented with “teletanks” in the early 1930s. The present story stems from Hambling’s account in New Scientist of reports that Russia may soon use armed UGVs to help guard its ICBM sites.

Мобильный робототехнический комплекс (Mobile Robotic System)

On March 12, Maj. Dmitry Andreev, a spokesman for the Strategic Rocket Forces (RVSN) told RIA Novosti (in Russian) that

“In March, the Strategic Missile Forces began to explore issues of the application of mobile robotic system (military) (MRK BH), created for the protection and defense of the Strategic Missile Forces facilities”

This was part of “retooling” planned for 2014 at five sites “for new types of security systems, including the use of modern technological advance in the development of robotic systems.” RIA’s English version of the story seemed less tentative, stating that “testing” began in March, and the bots “will be deployed at five ballistic missile launch sites.”

TASS (in English) quoted Andreev as saying the MRK BH is designed for

“reconnaissance, spotting and destroying stationary and moving targets, fire support of military units, patrolling and protection of important facilities.”

Furthermore, according to TASS,

It can provide an option to conduct combat actions in the nighttime without de-masking factors and an option of aiming weapons, tracking and hitting targets in automatic and semi-automatic control modes. The advanced combat system is equipped with optical electronic and radar reconnaissance stations.

TASS’s Russian version does not differ significantly in these details. A search in Russian and English did not turn up other independent reports of Andreev’s statement, and in an email to me, Hambling confirmed that he did not have additional details of the statement.

You might want to worry more about the big thing in the background.

A month later, Novosti VPK (literally “Military Industrial Complex News”—Russians apparently use the term without irony) reported that tests of new security equipment, including the MRK BH as well as a larger manned system, would be conducted on April 21 and 22 at Serpukhov Military Institute, a branch of the RVSN Academy. Russian TV news reports posted that day, as well as a YouTube video posted on April 25, show an apparent demo-expo of the new security equipment. A man in a black suit grabs hold of a joystick and presses a red button. The robot’s engine starts and it drives off. A squad of soldiers, guns raised, pads behind the MRK. The MRK leads a convoy of transporter-erector-launchers (which shuttle the nuke-lobbing missiles around to frustrate targeting by you-know-who). The MRK looks up, down, left and right. At a target range, it fires its machine gun.

“Mobile robotechnic complex” or “mobile robotic system” (abbreviated variously as MRC, MRK, MPK or MRTK) is actually generic Russian terminology for mobile robots, mostly unarmed, made by several companies. MRK BH appears to be a variant of the MRK-002-BG-57, the sole robotics product listed on the website of the Izhevsk Radio Plant (IRZ), and demonstrated at the Russia Arms Expo last September. A few minor changes are visible; five small housings at the front and sides, possibly for cameras, have been removed and replaced by four forward-directed headlight batteries and a swivel-mounted camera just below the gun mount (which has its own optical and infrared sighting cameras). A padded skirt, possibly armored, has been fitted around the chassis. IRZ reports the system weighs approximately 1 ton and has a “cruising range” of 250 km at up to 35 km/hr, implying it is powered by a gasoline or diesel engine, audible in some of the videos. It also has batteries that run down after 10 hours, or 7 days in “sleep mode.” A Russian Wikipedia page says the chassis measures 1.7 x 3.7 meters.

All of the videos show the remote control unit, which reportedly has 5 km range. There is no doubt that the system can be fully controlled by a human operator. The interesting question is, Does it also have capabilities for lethal autonomy? If so, how do they work, and how might Russia use them? How does this compare with things that the US and its allies are doing?

How scary is that bear?

Kevin Fogarty in Dice, citing the same RIA and TASS reports, reported that the robots “are able to select and fire on targets automatically.” The word “select” here echoes the language of the Pentagon’s policy directive, which defines target selection, rather vaguely, as “The determination that an individual target or a specific group of targets is to be engaged.” Under the US policy, this is the key thing which a robot is forbidden to do – except when it is allowed to do it. Drawing the comparison more sharply, Hambling wrote 

These robots can detect and destroy targets, without human involvement…. Andreyev describes the robots as being able to engage targets in automatic as well as semi-automatic control mode. US policy, on the other hand, says a person has to authorise when weapons are fired. Drones don’t fire missiles on their own, but act as remote launch platforms for human operators.

First, while the statement about drones (today) is accurate, the one about US policy is not. The policy green-lights the development, acquisition and use of weapons which have every technical capability for autonomous target selection and engagement (which includes killing people). Certain general criteria have to be met, similar to criteria set forth for other high-tech weapon systems, and if the system is intended to be autonomously lethal, three senior officials must “ensure” that it meets those criteria. If it does, the system can be approved, and could be turned loose to hunt and kill humans.

In fact the US has a number of systems in use, such as Patriot, THAAD, C-RAM and Aegis, which can make engagement decisions autonomously. These systems are defensive, and do not specifically target humans, but Patriot killed pilots in friendly-fire incidents during the Iraq invasion, and Aegis, operating semi-autonomously, was involved in the Iran Air 655 tragedy. On the offensive side, missile systems are in development which are intended to identify their targets fully autonomously, and would kill people because they would target ships, tanks, planes, or… people. These missiles may be considered “semi-autonomous” under the policy, and thus exempt from senior review. Operators of these systems are supposed to apply tactics, techniques and procedures to ensure that the missiles don’t go after the wrong targets.

Furthermore, if a system is not intended to be used as a fully autonomous lethal weapon system, it can have every technical capability needed to do so, e.g. it can acquire, track, identify, and prioritize potential target groups, and control firing, and still be considered semi-autonomous provided it asks for permission before actually firing. Obviously, a trivial hack would make this a fully autonomous weapon system, and developing such systems means perfecting the technology of fully autonomous weapons.

OK, OK, but what about the bear?

Both Hambling and Fogarty appear to assume that the Russian statement about “automatic and semi-automatic control mode” means the same thing as the Pentagon’s “autonomous and semi-autonomous weapon systems.” But does it?

According to IRZ, the “Robotic system has an automatic capture and the ability to conduct up to ten goals in motion. The aim is held when moving the turntable by 360 degrees.” Another page states that it can “Automatically capture and manage objectives in motion (target is held while moving the turntable 360 degrees).” Together, these two statements indicate that “automatic mode” involves target tracking and automatic fire control while the robot is in motion, but not necessarily autonomous target acquisition and selection, in the Pentagon’s terms.

Further evidence can be extracted from the previously mentioned Russian TV reports of the April tests. There are actually two segments accessible at this url, which run consecutively. In the first, the suited man, IRZ Deputy Director General Alexei Slugin, explains that “for controlling the combat module, we have a touchscreen installed here, where we can set up to 10 targets with our finger, and the targets are then automatically followed in automatic mode… the targets are captured and held.”

In the second segment, the reporter talks to Slugin, then explains that “This is a touchscreen, where you can choose up to 10 targets, then hit the button allowing fire, and the machine will commence destroying the enemy.” He emphasizes that the machine performs “under strict control of the operator. The final decision about the destruction of the target is made by the human.” A normally-covered toggle switch apparently must be uncovered and thrown for the destruction to commence.

However, following him, RVSN officer Sergei Kotlyar explains that “All automated, automatic systems, especially military systems, are piloted only by a human, only the human makes the decision. Other than that, it is fully robotic, and when performing a narrow task, e.g., if it is known that the enemy is firing or the enemy is present, there it will be using firearms itself, calculating targets and firing at them.” This suggests that the system may, indeed, have some capability to autonomously select and engage targets. But this capability is likely crude and indiscriminate. The Russian military, like the US, is sensitive about this, and maintains that full lethal autonomy would be activated only in hot combat.

So does this mean Russia is the bad boy on the block?

Would fielding an armed UGV with a full lethal autonomy capability, even if that capability were normally not activated, be a significant step beyond the policies, practices and plans of the US and its allies? Not if the Pentagon policy directive is given a permissive reading – and note that, since the policy is only an internal DoD directive, the Pentagon is free to read it (or not read it) any way it chooses. It is also quite similar to what South Korea and Israel are doing with stationary sentry robots.

In fact, if there is any emerging international norm, it is towards what US Navy scientist John Canning called “dial-a-level autonomy,” i.e. systems that have both human control consoles and communication links for normal operation, plus full autonomous capability for when things get real. Thus Harpy, Israel’s loitering, fully autonomous, radar-hunting suicide drone is being superseded by Harop, which adds a two-way radio control link and electro-optical system. Similarly, following the cancellation of the fully autonomous LOCAAS loitering missile, Lockheed Martin offered SMACM, with the same multimode sensors capable of autonomous target recognition, but adding a two-way radio control link.

Hambling argues that the US has not actually deployed armed UGVs, autonomous or otherwise, despite their having been in development for “decades.” He cites the 2007 trial of three SWORDS robots in Iraq, which was “cancelled,” according to Hambling, due to “uncommanded or unexpected movements.” However, Popular Mechanics, whose initial report was widely misrepresented as having said the bots were withdrawn after swinging their guns at soldiers or even injuring them, later ran statements from SWORDS manufacturer Foster-Miller/QinetiQ saying that SWORDS was not cancelled; the government had never funded more than three of them. Furthermore, the “unexpected movements” had occurred in earlier, stateside tests, and the problems had been fixed. According to a 2013 report from the National Defense Industrial Association, the SWORDSes were in Iraq for six years and “performed a combat role,” specifically “perimeter defense.” It was claimed that they had “saved lives.”

SWORDS was never deployed as envisioned. The robots carried M249 light machine guns, but they were placed in fixed locations and did not move, according to reports in 2008. Operational concepts would have had them going around buildings to shoot at snipers or other enemy combatants, without exposing soldiers to deadly fire. However, senior military officials at the time did not feel comfortable using them in that manner, and they were placed behind sandbags. [NDIA report]

There could be any number of reasons why the Army chose not to drive SWORDS out on Iraqi streets, including the risk of bad PR both locally and globally – especially if they ended up killing someone accidentally – and because their tactical usefulness is probably very limited. They are heavy, easily damaged, and would need to be carried over obstacles. Lacking peripheral vision, directional hearing, or the instinct and ability to react quickly, they would be sitting ducks in urban combat.

And today, MAARS, son of SWORDS, lives on. In recent years, it has been tested by the Marine Corps and by the Army, although no procurement contracts have been announced. We know that QinetiQ has sold at least one to the US government, and the wording of its MAARS homepage and data sheet strongly suggests that US Special Operations Command (SOCOM) is a prime target customer, so it would not be surprising if some larger number have been acquired under the black budget.

In that regard, Hambling also pointed me to yet another Russian YouTube video, this one a promo for FSB Spetsnaz, Russia’s own version of SOCOM. It shows a series of staged vignettes, which don’t even look like serious training exercises, in which shock troops kick the butts of Chechen-looking types. In the first sequence, they storm a house with the help of a small MAARS-like robot, suggesting that such weapons might already be a standard feature of their operational arsenal (although this one may be a movie prop). However, it seems extremely unlikely that gunbots would be firing autonomously in such a scenario, with troopers running in and out and around, nicely choreographed in the video but more likely frantic in real life.

Is there any conclusion?

So now, we have a video which suggests that Russian special ops may have at their disposal small armed UGVs similar to the ones that American special ops may have had since at least 2008, and we have official reports that Russian missile troops are testing armed UGVs in a role similar to the one that American troops tested for six years in Iraq. Does that make Russia a world leader in killer robots?

Arguably it might, if in fact the Russian missile site robots are substantially larger in number or herald an imminent rapid expansion in the use of such systems, or if the RSVN intends to routinely turn the robots loose with full lethal autonomy.

But the latter seems very unlikely, both because the Russians deny it and because, if you did have MRKs patrolling missile bases in fully autonomous lethal mode, they would, with near certainty, kill people. They would kill drunken soldiers, officers whose drivers misread the map, and technicians sent to find out why an MRK was unresponsive. Sooner or later, and probably sooner, somebody would get shot. There is no good reason why the RSVN would do this. It would be insane.

It may also be premature to assume that current testing of the MRK BH will lead to its permanent adoption by the RSVN, let alone a major and rapid expansion of UGV deployment by the Russian military. The aforementioned Russian Wikipedia article (6 May 2014) is openly scornful of the MRK BH, saying it is poorly designed “to work as part of the division tactical units,” that IRZ lacks experience in military robotics, and that the design harks back to Russian and German remotely operated tanks of the 1940s. While this is only anonymous commentary, it suggests that at least some Russian military observers are skeptical and that the MRK BH may have drawbacks that will cause RSVN to reject it.

It does appear that overall, Russian industry is still sadly mired in post-Soviet mediocrity, while Putiniks are now demanding modern technology. IRZ seems to have rushed a crude model into early production, and it has been put on display as a symbol of new progress, but what it mostly reveals is not so much Russian aggression (the military officers interviewed in the videos all seem ambivalent) but rather a grim determination to meet whatever challenges a new day brings. If there is going to be a robotic arms race, Russia will not take last place in the world.

Meanwhile, the MRK BH is already world famous, so it may as well become a source of pride for Russian military robot enthusiasts. For example, the weekly “Military Industrial Courier” (vpk-news.ru) reported that “Russia took the leading position in the world in the key area of advanced weapons – ground combat robots.” Its source? David Hambling’s story in “the influential British science weekly” New Scientist.

I am deeply indebted to Tanya Lokot for translation of the Russian videos.

Bursting one tiny bubble of emerging military tech hype

“Two University of Michigan scientists are on the way to developing night vision contact lenses,” read the tweet, posted by a senior fellow for defense policy at a top think-tank, and retweeted by dozens, including a level-headed defense technology journalist famous for her own investigations into federally-funded fringe science and folly. I figured there must be something to this, so I clicked on the link to a story at defensetech.org, whose headline made the even more surprising claim “Scientists Develop Night Vision Contact Lens.”

Below the headline, a photo showed some guy pinching a contact with what looked like printed wiring on it. The photo was my first clue that something was screwy, because I’d seen that photo before; it was the Google lens (google it), a contact lens outfitted with a glucose sensor for diabetics.

The story claimed that the scientists had “developed a prototype contact lens that enhances night vision by placing a thin strip of graphene between layers of glass. The graphene — a form of carbon — reacts to photons, which makes dark images look brighter.” What were they claiming—that the graphene, like a laser, simply amplified the light? That didn’t make any sense, and neither did any other interpretation, because a contact lens is roughly located at the aperture of the eye, not the focal plane. The wearer would see only a blur [Note 1].

I thought this had to be an April Fool’s joke. But a quick googling of “Michigan graphene contact lens” found essentially the same story reported at an indeterminable number of sites around the web. Most of these stories seemed to rely on two sources: The University of Michigan’s press release from March 16, and IEEE Spectrum’s report from the next day, which drew on the press release and also referenced the paper in Nature Nanotechnology by professors T. B. Norris and Zhaohui Zhong, published online on the 16th. Sadly, Spectrum seemed to be the source of the claim that what Norris and Zhong had done was to actually “create their infrared contact lens.” But unsurprisingly, Zhong himself, if accurately quoted by the press release, had been the originator of the “contact lens” hype.

What the researchers actually created was far more mundane: a single-pixel test cell using graphene. The authors state that graphene “is a promising candidate material for ultra-broadband photodetectors, as its absorption spectrum covers the entire ultraviolet to far-infrared range.” It has been studied for this role since at least 2008, according to the references. However, its sensitivity in past experiments has been poor. Norris and Zhong have improved on this with some clever quantum electronics engineering [Note 2]. The result is “room-temperature mid-infrared responsivity comparable with state-of-the-art infrared photodetectors operating at low temperature.” But not quite “superhero vision,” as Discovery News put it.

Norris and Zhong’s research is important because it shows that graphene can potentially be used to create very compact thermal imaging sensors. “We can make the entire design super-thin,” says Zhong in the press release. “It can be stacked on a contact lens or integrated with a cell phone.” But first, they have to make an actual imaging sensor. They think they might be able to do something in a few years. It will require integrating patterned, doped graphene with conventional silicon lithography, a major challenge in the lab let alone for industrial production.

And while a small camera, even one small enough to fit into a phone, is not an unreasonable goal for perhaps a decade down the road, the idea of putting this into a contact lens is hard science fiction at best. To do this, you would basically have to make the entire camera, or perhaps an array of them, small enough to fit onto the contact lens. Plus, you would need a micro-projector, or again perhaps an array, small enough to fit under the camera(s), which would project the image as visible light onto the retina. Plus electronics and a power source, all small enough to fit on a contact lens. If Norris and Zhong knew how to do all this – let alone how to manufacture such a thing – I would not hesitate to declare them the greatest engineers in history.

But you gotta have vision, and you gotta have funding. These days, that means you gotta have hype. And hype echoes through the media machine, from press release to sloppy science reporting to outlets like Popular Science, CBS News, Independent, Huffington Post, and scads of fleas with smaller fleas on them.

To be fair, not all of these reports are technically inaccurate, but they all serve up the hype, because the hype is the story. The actual news is just an incremental advance in one of many ways of converting infrared radiation to an electrical signal. If some day we have the technology to make a night vision contact lens, it might make use of Norris and Zhong’s graphene trick. But that is not particularly likely, since there are so many other possible schemes. To say that their work opens the way, or is the first step which may some day lead to such a capability, is simply not true. I’m not sure where the line lies where hype is so inflated that it becomes lying. But I feel that in this case it has been crossed.

Why is this worth my time to write, and yours, dear reader, to read? Because it’s a paradigm example of how the emerging tech, especially emerging military tech, hype machine works… and while this little bubble is rather insignificant in itself, it sits atop a churning foam of bubbles big and small, some of them big enough to have real consequences. Things like laser weapons, “Iron Man” body suits, missile defense…

The inability of professional journalists, analysts and bureaucrats to separate hype from realism and sense from nonsense in military technology, and the relative absence of critical technical review, leads not only to massive waste but also to destabilizing suspicions and needless arms races which lead to … what? Nothing good, that’s for sure.

Note 1: In a camera (eye), light arrives at the aperture (iris) from every direction, and the camera geometry sorts out light coming from different directions. A detector located at the aperture, e.g. in a contact lens, would be exposed to light from all directions, and could not form an image. Also, a tiny display screen mounted in a contact would flood the eye with its light and produce a uniform blur. So neither end of this works. What you’d need is an even tinier camera, and a tiny projector (onto the retina). Since fitting them into a contact means they have to be very small, probably you’d want arrays of each in order to collect and project enough light. Physics does not obviously rule this out, provided you’re OK with low resolution. But making such an object is clearly well beyond present capabilities. Just having a suitable candidate for a detector is such a tiny part of the problem that it’s practically irrelevant to whether or when such a technology might be realized.
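The radiometry behind this can be stated in two lines (my own back-of-the-envelope summary, not anything from the paper): a detector element at the aperture integrates the scene radiance over the whole field of view, so every element reports essentially the same number, while a pixel at the focal plane behind a lens samples essentially one direction.

```latex
% Detector at the aperture (e.g. in a contact lens): all directions superpose, so no image
E_{\text{aperture}} = \int_{\text{FOV}} L(\theta,\phi)\,\cos\theta \,\mathrm{d}\Omega
  \quad\text{(nearly independent of where on the lens the detector sits)}

% Pixel at the focal plane behind a lens of f-number N and transmission T: one direction per pixel
E_{\text{pixel}}(\theta,\phi) \approx \frac{\pi T}{4N^{2}}\, L(\theta,\phi)
  \quad\text{(the standard camera equation, hence an image)}
```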

The other thing you might imagine is that the graphene acts as a laser. Light from any direction might then be amplified and keep going in the same direction. So, you put this magic graphene anywhere – in the eye, on top of it, in a pair of glasses or a car windshield – and it just multiplies the light. However, the physics of this fantasy is wrong in fundamental ways, and in fact it has nothing to do with Norris and Zhong’s device.

Note 2: Graphene works as a photodetector much like other semiconductors: photons excite otherwise immobile electrons and create mobile electron-hole pairs. This results in an increase in the electrical conductivity of the material, which can be measured by passing a current through it. Alternatively, a P-N junction can be formed from two layers with different doping, hence a different affinity for the positive holes and negative electrons. The + and – charges are thereby separated, creating a voltage and current (hence, power) source, as in a solar cell. Norris and Zhong’s invention is a two-layer device, but instead of tapping the photogenerated power directly, the upper layer is left isolated, and the conductivity of the lower layer is measured by passing a current through it. Holes then accumulate in the upper layer, and their electrostatic effect on the lower layer, like the gate in a field effect transistor, creates a large change in its conductivity.
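In rough textbook terms (my own hedged summary of this photogating mechanism, not equations taken from the paper), the trapped charge acts like a gate voltage on the lower layer:

```latex
% Photogenerated holes trapped in the floating top layer shift the effective gate voltage
\Delta V_{G} = \frac{\Delta Q_{\text{trapped}}}{C_{\text{interlayer}}}

% which modulates the current in the bottom layer through its transconductance g_m
\Delta I \approx g_{m}\,\Delta V_{G}

% gain arises because each trapped carrier is "read" many times while it remains trapped
G \sim \frac{\tau_{\text{trap}}}{\tau_{\text{transit}}}
```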

UPDATE: Since I first posted this, WIRED has become the latest (and if not the saddest, then it’s a tie) drinker of the UMich Kool-Aid. Mostly rehashing the same lines from the original press release as the other stories, WIRED also reports that Norris and Zhong say something about “car windshields to enhance nighttime driving,” which makes about zero sense. Additionally, WIRED links to this 3-year old blurb from Military.com, which suggests that “cat vision” contacts already exist, and were used in the bin Laden raid. The story has so many things wrong with it, I won’t bother to get started. The point, again, is that the tech reporting world, and especially the military tech reporting world, is rife with this stuff, and sadly, so is the world of policy analysis and of actually funded R&D.

UN Presentation

I am posting a .pdf copy of a presentation I gave to the UN Secretary General’s Advisory Board on Disarmament Matters (6 March) in New York. The designated topic was Emerging Technologies, but the presentation was mainly about the most important emerging technology topic of the day, autonomous weapons. It was very well received by the ABDM, which had originally given me a one-hour slot, but continued the session for more than another hour beyond that, with extensive discussion.

US killer robot policy: Full speed ahead

My analysis of US policy for autonomous weapons, and how we got here, has been posted at the Bulletin of the Atomic Scientists’ website. In the first days after DoD Directive 3000.09 was released in Nov. 2012, I had posted a somewhat less discerning view at icrac.net. Like many people, I did not at first fully understand how the Directive was constructed, and what its implications were and are. So, in case anyone accuses me of inconsistency, I was wrong.