Killer Robots are Just Landmines that Walk

As the Group of Governmental Experts (GGE) in Lethal Autonomous Weapons Systems (LAWS) convenes again in Geneva under the United Nations Convention on Conventional Weapons (CCW – and that’s it, I promise), I want to again suggest a working definition of LAWS, suitable for negotiation and treaty language, drawn from the way that the Ottawa Convention banning antipersonnel landmines defines those weapons in general:

‘Anti-personnel mine’ means a mine designed to be exploded by the presence, proximity or contact of a person and that will incapacitate, injure or kill one or more persons.

This definition can be parsed into two halves. The second part describes the lethal effects of the weapon on persons. The first part is more interesting to us. It describes the mine as being “designed to be exploded by the presence, proximity or contact of a person.” In other words, rather than being triggered by designated and accountable “operators,” mines are triggered by their victims.

In fact, if autonomy is defined broadly as acting without human control, guidance or assistance, mines should be considered LAWS. Legacy mines are extremely simple in comparison to robots and artificial intelligence, but there is no reason that more advanced landmine systems would not incorporate AI to discriminate targets more accurately and perhaps tailor responses to varied situations. Surely these would be of interest as potential LAWS.

Building on Precedent

A 2005 recommendation of the Norwegian Ministry of Finance’s Advisory Council on Ethics established a precedent that such a networked system with sophisticated sensors and weapons, intended to replace legacy landmine systems, would be considered Ottawa-compliant only if an operator is always required to trigger a detonation or other kill mechanism. If throwing a “battlefield override switch” would enable lethal autonomy, as a fallback for when stuff gets real, as they say, then the system would be considered a banned landmine system.

The same precedent must apply to any LAWS ban if it is to be effective in safeguarding peace. Otherwise the “ban” would only establish a norm that the most dangerous forms of autonomy may be turned on precisely when they are most dangerous: in a crisis or hot war.

The CCW LAWS negotiations may wish to exclude landmines, as already addressed by other protocols and treaties, but can they exclude any consideration of advanced, networked and AI-driven systems? In either case, we can start with the Ottawa definition of landmines and generalize it to define LAWS as victim-, target- or condition-triggered weapons, in contrast with weapons triggered by humans:

‘Lethal autonomous weapons system’ means a lethal weapons system triggered by a target or condition, rather than by a human operator. 

The principal advantage of this definition is that it avoids concepts such as target selection and “critical functions” in the targeting cycle, and is drawn instead in terms of the system’s observable behavior.

Triggering

This statement of the definition achieves brevity by loading a lot into its key concept: triggering. But this is a familiar, simple, intuitive concept.

When we trigger a weapon, we have already aimed it and know what we are aiming at, or at least, if we are acting responsibly, we believe we do. Aiming is not just telling a weapon what to hit; it entails a mechanical interaction, a control loop that includes primary data from senses and sensors – from which the shooter derives her internal representation of an acquired target. Human decision making is not just in the loop; it is the actual point of decision, the point at which everything hangs waiting until the moment the person pulls the trigger. And when we pull the trigger, we are deciding to take immediate action, with minimal delay. Triggering is an act of commitment, and while it remains an obligation, when possible, to rescind an erroneous command and interrupt an erroneous attack, in taking the action of triggering we are affirming the decision to attack and accepting responsibility for the consequences of that action.

Triggering initiates an action which has been prepared, specifically the action of a weapon. Triggering can only occur after targets have been acquired, or found and fixed if you prefer that lexicon, identified, and weapons aimed and armed. After triggering, a mechanical process may unfold which may include trivial, technical decision points as conditions in a weapon’s firing sequence are fulfilled. Weapons may even be allowed to control fire and timing within strict limits. However, this level of machine decision will seek only to execute and optimize attacks that have been triggered against particular acquired targets, and never to trigger attacks against newly-acquired targets.

A weapons system is ‘triggered by a human operator’ if attack is initiated only by an unambiguous signal normally generated by an action of the operator. An interactive confirmation process, which may include a short time delay, may be used to ensure positive control and accountability, but once the trigger signal is confirmed, the system will, without further delay beyond its internal limitations of speed, commence to execute attacks on particular acquired targets or against specified locations known to the human operator at the time of triggering.

Thus defined, “triggering by a human operator” is objectively demonstrable, observable and verifiable, from two aspects: 1) it can be shown and seen, in an inspection, demonstration or exercise, that the weapon responds to a human triggering action, and 2) human control, including triggering, can be verified by presentation of cryptographically-authenticated records documenting responsible human control, including the issuance of a triggering command and the human physical action which constituted it. Such records should also normally include any relevant orders from human commanders, such as the order to attack particular targets, as well as any data presented to the operator and commander(s) on the basis of which these humans either selected the target for engagement or identified the target with a described, previously-selected target, and aimed and triggered the weapon.
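To make this concrete, here is a minimal sketch of what such a cryptographically-authenticated triggering record might look like. The field names, the HMAC scheme and the key arrangement are illustrative assumptions on my part, not a proposed standard; the point is only that the record binds the operator’s physical action, the authorizing order and the targets known at trigger time, and can later be presented for verification without exposing the underlying sensor data.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"unit-escrowed-secret-key"  # placeholder; real key management is out of scope

def make_trigger_record(operator_id, commander_order_id, target_ids, sensor_data_digest):
    """Build and authenticate a record of a human triggering action."""
    record = {
        "timestamp_utc": time.time(),
        "operator": operator_id,
        "authorizing_order": commander_order_id,
        "targets_at_trigger": sorted(target_ids),   # fixed at the moment of triggering
        "sensor_data_digest": sensor_data_digest,   # hash of the data shown to the operator
        "trigger_action": "physical_switch_closure",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_trigger_record(record):
    """Check that a presented record was produced under the authenticated key."""
    body = {k: v for k, v in record.items() if k != "mac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("mac", ""), expected)

# Example: record a trigger against two acquired targets and verify it later.
rec = make_trigger_record("operator-7", "order-2017-113", ["track-41", "track-42"],
                          hashlib.sha256(b"sensor frame shown to operator").hexdigest())
assert verify_trigger_record(rec)
```

In an actual verification regime the key would presumably be held or escrowed under agreed arrangements, so that records could be authenticated by inspectors while confidential details remain protected.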

In taking the triggering action, the operator accepts responsibility for initiating attacks on the particular acquired targets or specified locations known to her at that time. This action will normally be pursuant to the authority of a human commander, who may be the same person.

It is crucial to insist that autonomous target identification is autonomous target selection, because an acquired target only comes into existence when a targeting system acquires a target, that is, a signal seen in real data from the real world in real time. The act of identifying an acquired target with a previously-selected, specified target is then both a fateful, lethal decision which we should not entrust to machines operating outside responsible human control, and a crucial choke point for stopping out-of-control robotic violence, or for slowing the speed of possible interactions between automated systems. It is thus both a prudent safety measure and a vital arms control measure to interpose a requirement of human decision and responsible human control whenever the decision is made to attack a new acquired target on any basis whatsoever, including its identification with (as) a previously-selected, specified target or member of a group thereof.

When acting on the basis of identifying particular acquired targets with previously specified targets, groups or classes, such identification must be the responsibility of the human commander who authorizes attacks.

This does mean that existing lock-on-after-launch homing munitions are LAWS. However, if States Parties to the CCW wish to exempt certain existing types of homing munitions from a general ban on LAWS, this can obviously be done by negotiation and by specification of those general and specific types of LAWS that are exempt from the ban.

It must be acknowledged that modern systems include embedded computers which execute programmed sequences and trivial decisions at many levels, which are not of concern when responsible human control is operational. However, automated execution of attacks approved by humans must also be limited:

The system’s execution of attacks triggered by a human operator may entail a programmed process which may include machine decision to sequence events in weapons fire and, within strict limits, to control aiming and timing of fire. Such machine decisions may be used to optimize execution of attacks the operator has triggered against particular acquired targets or specified locations, but may never include the triggering of attacks against any other targets or locations, including targets newly-acquired or re-acquired during the execution of triggered attacks.
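As a sketch of the constraint just stated (an illustrative assumption about architecture, not a description of any fielded system), the fire-control logic below may re-sequence and re-time shots against the set of targets fixed at trigger time, but refuses to engage any track outside that set:

```python
class TriggeredAttack:
    """Execution of an attack a human operator has already triggered."""

    def __init__(self, triggered_target_ids):
        # The set of particular acquired targets is frozen at the moment of triggering.
        self.allowed = frozenset(triggered_target_ids)

    def execute(self, current_tracks, fire):
        """Machine decisions may optimize sequencing, aiming and timing,
        but only against targets the operator triggered."""
        for track in current_tracks:
            if track["id"] in self.allowed:
                fire(track)  # timing and sequencing may be machine-controlled within strict limits
            # Any newly-acquired or re-acquired track is ignored here:
            # attacking it would require a fresh human triggering decision.

# Example: a track acquired after triggering is never engaged by this loop.
attack = TriggeredAttack(["track-41", "track-42"])
attack.execute([{"id": "track-41"}, {"id": "track-99"}],
               fire=lambda t: print("engaging", t["id"]))
```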

Notice that what has been defined here is triggering by a human operator, and we have also discussed how to demonstrate and verify human-triggering. We have not defined autonomous triggering, but there are really only three general possibilities. If a weapon is not triggered by a human operator, it must be triggered by some condition being fulfilled, such as its target being present or behaving in some way. The only other possibility is that the weapon is triggered randomly with respect to its targets and with respect to human and militarily-relevant conditions, in which case it is questionable whether this can even be considered a weapon. Certainly not a useful one.

If we consider the point to be the assertion of human control over weapons and attacks, it is enough to define the criteria of human control, to require them and to consider as “autonomous” any weapons that do not meet these requirements.

If a weapons system is not triggered by a human operator, it will typically be triggered by targets or conditions. Such weapons systems are ‘autonomous.’

The target identification and attack decision processes of autonomous weapons, their “critical functions,” are not directly observable. However, their triggering by humans is observable and can be documented. Compliance with a commitment to accountable human control, and to triggering by human operators only, can be documented with cryptographic authentication, permitting verification while protecting confidential information. This is the possible basis of a verified ban of LAWS.

Targeting in the Fog of War

Qualifying targets as “particular” and “acquired” adds a complication, but it is needed in order to nail down the ambiguity of “target selection” and of what it means for something to be a target.

An “acquired target” is one whose presence has been detected and is being observed in actual data deriving from the real world via senses and sensors. A target is “particular” if it corresponds to a particular person or object in the real world. Particular targets can be members of groups or classes of targets, but groups or classes are not particular targets.

A ‘particular acquired target’ is a representation of a particular target object or person whose presence in the real world, in real time, is believed to be revealed by signals in sensor data and/or human senses.

We may identify an acquired target with a previously specified target, something we think may be out there, that we are on the lookout for or actively hunting. Specified targets may be specified in more or less detail, and their specification might include something about their behavior as well as targeting instructions and rules of engagement.

However, when a target is acquired, its identification with a specified target is a non-trivial decision. It is important to remember that all targets, as known by targeting systems, are only representations and interpretations. Identification of an acquired target with a specified target — drawn from some portfolio of enemy combatants, say — is always subject to possible error. Once an acquired target has been identified, its identification can be maintained on the basis of tracking only as long as acquisition is not lost and there is no possibility of confusing targets.
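To illustrate the distinction (a minimal sketch; the class and field names are my own illustrative assumptions), an acquired target is a representation built from live sensor data, a specified target is a prior description we are hunting for, and identifying one with the other is a separate, fallible decision:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SpecifiedTarget:
    """Something we think may be out there: a description we are on the lookout for."""
    designation: str
    description: str
    behavioral_cues: List[str] = field(default_factory=list)

@dataclass
class AcquiredTarget:
    """A representation of a particular object or person whose presence is
    believed to be revealed, in real time, by signals in sensor data."""
    track_id: str
    sensor_signature: bytes
    last_observed: float
    identified_as: Optional[SpecifiedTarget] = None  # set only by a fallible identification decision

def identify(acquired: AcquiredTarget, specified: SpecifiedTarget,
             confirmed_by_human: bool) -> bool:
    """Identification of an acquired target with a specified target is a
    non-trivial decision, always subject to error; here it is recorded only
    when a responsible human confirms it."""
    if confirmed_by_human:
        acquired.identified_as = specified
        return True
    return False
```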

Framing, Defining and Banning LAWS

Under the approach advocated here, lethal weapons systems would have to be designed to only attack particular acquired targets or specified locations known to the commander authorizing and the operator aiming and triggering the weapon, or else they would be considered LAWS.

This would capture many things we may not want to ban, or to ban with a particular instrument. However, we don’t want things we haven’t thought of to be allowed by default.

Therefore we should agree that all LAWS are prohibited except for some types which we shall name and enumerate as either specifically allowed or not subject to regulation under this treaty. Some of these are obvious, such as defense shields against incoming uninhabited munitions. Others will be more contentious, such as some existing and evolving types of homing and loitering missiles.

Meaningful human control, including triggering by human operators, and accountable human control for verification, are the basis for the approach, but rather than attempting to specify exactly how humans should interact with machines — what information should be displayed and so on — we simply require that the particular acquired targets that the system will attack are already determined when attacks are triggered, and that in triggering attacks the human operator accountably accepts responsibility for authorizing those attacks on those particular acquired targets.

Getting Along with the Russians

Incidentally, the Russian Federation, in its working paper submitted to the GGE, rejects efforts to define international standards for meaningful human control:

Attempts to develop certain universal parameters of the so-called “critical functions” for both existing highly automated war systems and future LAWS — aim identification and hit command, maintaining “significant” human control — can hardly give practical results. For example, it is doubtful whether criteria to determine a due level of “significance” of human control over the machine could be developed.

Russia has a point here: it would certainly be messy to try to set standards for every potential weapon system, although, if we had to do so, I’d say we could. But a simple criterion like prompt triggering, observable as the behavior of the system, cuts through a lot of this.

Lethality

Finally, it may be necessary to define the terms “lethal” and “lethality” for this context, since it is not acceptable to allow autonomous weapons to attack materiel targets, at least not with destructive or damaging kinetic force. The CCW may still want to avoid stirring “cyber” warfare into this pot. But it cannot free robots to fight other robots with live weapons that will inevitably also kill people and escalate any robot clash to a general conflict. Therefore the definition of “lethal” must here be something like:

Lethal weapons systems apply physical force against objects or persons, with effects which may include impediment, harm, damage, destruction or death.

It is most important to remember that this is only an attempt at a working definition of LAWS which can be the basis for an agreement to ban or regulate some or all of them. I do not think it would be practical to ban everything that falls under the definition of lethal autonomous weapon systems. But to avoid a narrow ban that could be circumvented just by changing one detail of a future weapon system so that it fell outside the definition, I would ban all LAWS by presumption and only enumerate exceptions for weapons systems, such as close-in defenses against uninhabited munitions, that we may consider desirable, or for existing systems that we agree, by negotiation, to grandfather in.

 

Landmines that Walk

Greetings from Geneva, where I am attending the first meeting of the Group of Governmental Experts on Lethal Autonomous Weapon Systems (LAWS) of the Convention on Certain Conventional Weapons (CCW). I am not governmental, and you may judge whether I am an expert, but in any case, here I am.

One of the things you hear over and over here is that we need a “working definition” of LAWS, or AWS if you don’t think it would be OK for robots to Taser people at will, or that we could solve war by having killer robots fight each other and just let us know who won.

In our search for a working definition of AWS or just “autonomous weapons,” we might take a cue from the Ottawa Convention, which bans antipersonnel landmines and defines them as follows:

‘Anti-personnel mine’ means a mine designed to be exploded by the presence, proximity or contact of a person and that will incapacitate, injure or kill one or more persons.

In 2005, the Norwegian Ministry of Finance asked its Advisory Council on Ethics to advise on whether two weapon systems then in development by Textron Systems, the Intelligent Munitions System (IMS) and the Spider networked munitions system, would fall under the Ottawa Convention. Both were intended as replacements for traditional landmines; instead of operating solely as isolated munitions, they would be networked and potentially triggerable either by an operator or by a sensor detecting the presence of a target. IMS was never produced, but Spider M7 is currently in use by the US Army—as a “man-in-the-loop” weapon system, without the “battlefield override switch” that would allow it to operate fully autonomously.

In its recommendation, the Advisory Council, first of all, rejected the notion that classifying these systems as something other than landmines would place them outside the Ottawa definition of landmines:

“The Advisory Council finds that all weapons that are designed to explode because of a person’s inadvertent contact, falls within the definition of an antipersonnel mine…. The point with such weapons is to be able to engage the enemy without being in active combat with them or even being present in the area. The mine is activated by the victim, not by the person that emplaced it.”

Second, the Council found that

“…if the weapons systems in question are going to be equipped with “battlefield override” features, or in other ways designed to circumvent the “man-in-the-loop” feature, they will fall within the scope of the prohibition in the Convention.”

Given that the Ottawa Convention defines mines as things that explode, it clearly cannot be considered to cover autonomous weapons in general, but mines can be considered a subcategory of autonomous weapons. The Convention’s definition of landmines could therefore be generalized. Here is one possible wording:

‘Autonomous weapon’ means a weapon system designed to be triggered by the presence of an object or person or by fulfillment of specified conditions, rather than by a human operator.

Such a framing would require an ancillary definition of “trigger,” but this could be as simple as a time limit. A system that commences violent action within, say, 7 seconds of a human command might be considered to be operator-triggered. An operator could thus be granted a few seconds to rescind an inadvertent or erroneous trigger. However, if an operator gives a command that the system should act at some unspecified future time when a target is detected or located or behaves in a certain way, then it is not triggered by the operator.
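Here is a minimal sketch of such a time-limit definition of triggering (the 7-second window comes from the text above; the function, its parameters and the polling interval are illustrative assumptions): the system may act only within a short window of the operator’s command, during which the operator may still rescind.

```python
import time

TRIGGER_WINDOW_SECONDS = 7.0  # illustrative figure from the text above

def operator_triggered_fire(command_time, rescinded, ready, fire_action,
                            now=time.monotonic):
    """Fire only while inside the trigger window and only if not rescinded.

    command_time -- monotonic timestamp of the operator's trigger command
    rescinded    -- callable returning True if the operator has cancelled
    ready        -- callable returning True when the prepared attack can commence
    fire_action  -- callable that commences the violent action
    """
    while now() - command_time < TRIGGER_WINDOW_SECONDS:
        if rescinded():
            return "rescinded"
        if ready():
            fire_action()  # commences within the window: operator-triggered
            return "fired"
        time.sleep(0.05)
    # Acting at some unspecified later time, when a target is detected or
    # behaves in a certain way, would make the weapon target- or
    # condition-triggered rather than operator-triggered.
    return "window expired; a new human trigger is required"

# Example: a trigger issued now, never rescinded, ready immediately.
start = time.monotonic()
print(operator_triggered_fire(start, rescinded=lambda: False, ready=lambda: True,
                              fire_action=lambda: print("commencing attack")))
```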

The Council’s rejection of the argument that the networked nature of Spider and IMS would make them not landmines stands as a rebuttal to those who argue that networked systems with embedded AI would not be recognizable as autonomous weapons. Regardless of a system’s architecture or geometry, what matters is whether its violent action is triggered by the action of a human operator, or by some other immediate cause.

Likewise, the Council’s finding that Spider and IMS would be exempt from the Convention if they can only be triggered by a human operator and not by the victim, but not if they have an optional autonomous mode, stands as a rebuke to the practice of human-washing autonomous weapons by adding “man-in-the-loop” capabilities while retaining autonomous capabilities.

Starting with the framing of landmines as “victim-activated” and generalizing it to autonomous weapons as “target or condition-triggered” need not mean that everything that meets this definition must be banned. An autonomous weapons convention can provide exemptions for purely defensive anti-munition systems, for some kinds of lock-on-after-launch homing munitions, and for other systems that nations may desire to retain, subject to negotiation in the CCW or another international forum.

Most definitions of autonomous weapons that have been offered reference abstract and unobservable things, such as functions, decisions, and selection of targets. However, arms control usually addresses things that are concrete and observable. I believe that the definition offered here meets that criterion.

New British killer robot policy II: mandatory two-way data links?

The new UK Joint Doctrine Publication (JDP) on Unmanned Aircraft Systems has been covered in The Verge and other media under headlines like “UK government says humans will always be in charge of its robot weapon systems.”

So here we go, what does “in charge of” mean? Actually this new document repeatedly insists on the precision of its definitions and its own authenticity as a reflection of actual UK policy. So what is it saying so precisely?

The JDP doubles down on the position of its predecessor document, the 2011 Joint Doctrine Note (JDN), now marked on every page with loud announcements of its expired status: that “autonomous” must be distinguished from “automatic” or “automated” and that an “autonomous system” means one “capable of understanding higher level intent and direction.”

Defining “fully autonomous weapon” as something “with the ability to understand,” and with capabilities that stem from this “understanding,” places autonomous weapons firmly in the realm of artificial intelligence of a kind that so far exists only in science fiction. This makes it easy for the JDP to declare that

“The UK does not possess fully autonomous weapon systems and has no intention of developing them.”

However, while defining autonomy in terms of “understanding,” the JDP does not clarify what “understanding” means, other than that it provides a capability:

“From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present.”

We may question what level of artificial intelligence is required in order to begin to provide capabilities that might be described in such language. In fact, existing and emerging missile systems, for example, decide a course of action from a number of alternatives without depending on human oversight and control, which are assumed not to be present. But there is clearly much further to go in AI before equaling or exceeding human capabilities for situational awareness, comprehension and direction of appropriate action in relation to a desired state.

The predecessor document, the 2011 Joint Doctrine Note (JDN), noted that

“…proportionality and distinction would be particularly problematic, as both of these areas are likely to contain elements of ambiguity requiring sophisticated judgement. Such problems are particularly difficult for a machine to solve and would likely require some form of artificial intelligence to be successful. Estimates of when artificial intelligence will be achieved (as opposed to complex and clever automated systems) vary, but the consensus seems to lie between more than 5 years and less than 15 years, with some outliers far later than this.”

Artificial intelligence seems here to have been reified as something human-like at least. If it were as little as “5 years” away (which would be a year ago), one might think it would be something to be concerned about today. However the JDN stated that “Autonomous systems will, in effect, be self-aware” and

“their response to inputs indistinguishable from, or even superior to, that of a manned aircraft. As such, they must be capable of achieving the same level of situational understanding as a human.”

The point of setting such a high bar for true autonomy was that it excluded everything anyone is actually doing or planning to do:

“This level of technology is not yet achievable and so, by the definition of autonomy in this JDN, none of the currently fielded or in-development unmanned aircraft platforms can be correctly described as autonomous.”

The new JDP drops the words “self-aware” and “same level of situational understanding as a human” but retains the argument that the “technology is not yet achievable,” so nothing the MoD is doing involves autonomous weapons, case closed:

“Fully autonomous weapons systems as we describe them (machines with the ability to understand higher-level intent, being capable of deciding a course of action without depending on human oversight and control) currently do not exist and are unlikely in the near future.”

Rather, the JDP asserts, on the premise that automation is not autonomy, that

“While some companies and research organisations are trying to develop autonomous systems, the UK’s view is that increasing automation, not autonomy, is required to improve capability.”

Yet it also describes how “automation” evolves into “autonomy”:

“For example, a mission may require a remotely piloted aircraft to carry out surveillance or monitoring of a given area, looking for a particular target type, before reporting contacts to a supervisor when found. A human-authorised subsequent attack would be no different to that by a manned aircraft and would be fully compliant with the LOAC, provided the human believed that, based on the information available, the attack met LOAC requirements and extant rules of engagement.”

In text that was cut from the new version, the JDN continued:

“From this position, it would be only a small technical step to enable an unmanned aircraft to fire a weapon based solely on its own sensors, or shared information, and without recourse to higher, human authority. Provided it could be shown that the controlling system appropriately assessed the LOAC principles (military necessity; humanity; distinction and proportionality) and that ROE were satisfied, this would be entirely legal.”

Here the JDN sought to defend the legality of what would obviously be a fully autonomous hunt-identify-kill mission, with machines even doing LOAC assessments. The new JDP drops this defiant stance and limits the scenario to human-authorized but possibly machine-directed attack, calling this “greater assistance to pilots and operators, and in-system survivability in non-permissive, contested and congested battlespace.”

The JDP concludes that

“the UK does not possess armed autonomous aircraft systems and it has no intention to develop them. The UK Government’s policy is clear that the operation of UK weapons will always be under human control as an absolute guarantee of human oversight, authority and accountability. Whilst weapon systems may operate in automatic modes there is always a person involved in setting appropriate parameters.”

However, it has defined “autonomous” as something that is beyond present technology, something nobody is doing. So its disavowal of autonomous weapons places no restrictions on, and has no effect on, anything the UK is doing or plans to do in “autonomous weapons” as the rest of the world understands the term.

We may be encouraged to know that “there is always a person involved in setting appropriate parameters” but this does not set a high standard for human control.

Nevertheless, the JDP contains some statements which may be encouraging if taken out of context, such as that

“the UK opposes the development of armed autonomous systems.”

and

“The UK does not own, and has no intention of developing, autonomous weapon systems as it wants commanders and politicians to act as the decision makers and to retain responsibility.”

It acknowledges the problem of predictability, and that the acceptance of responsibility by human operators and commanders for using an autonomous weapon system

“has an implicit assumption that a system will continue to behave in a predictable manner after commands are issued; clearly this becomes problematical as systems become more complex and operate for extended periods…. In reality, predictability is likely to be inversely proportional to mission and environmental complexity.”

From this candid observation, the JDP reaches a remarkable conclusion:

“For long-endurance missions engaged in complex scenarios, the authorised entity that holds legal responsibility will be required to exercise some level of supervision throughout. If so, this implies that any fielded system employing weapons will have to maintain a two-way data link between the aircraft and its controlling authority.”

If, as the document repeatedly insists, this represents actual UK policy, the conclusion that maintenance of “a two-way data link” is required for “long-endurance missions engaged in complex scenarios” may come to be of importance, as it would be one concrete standard against which the legality of systems and their use could be judged. However, a footnote weakens the pledge by adding that

“this link may not need to be continuous.”

New British Policy on Killer Robots?

The Guardian reports that the UK will soon issue “a new doctrine designed to calm concerns about the development of killer robots.” While it is beyond doubt that any such “new doctrine” will be designed to calm our concerns, based on the Guardian report it is unclear that anything new has been promised.

The US and UK have been playing the same basic game: posit fully autonomous weapons as a Kantian ideal of human-like intelligence and wanton killing for reasons of their own — something no one ever wanted — and disavow any intention of ever creating such monsters, far beyond current technology anyway. Meanwhile, plow ahead on every front of the robot arms race under the assurance that they are not creating the mythical out-of-control killer robots.

According to the Guardian, the MOD has pledged that “UK policy is that the operation of weapons will always be under control as an absolute guarantee of human oversight, authority and accountability. The UK does not possess fully autonomous weapon systems and has no intention of developing them.”

While placing this in quotes, The Guardian does not explicitly say who it is quoting. The closest match seems to be a letter to Richard Moyes of Article 36 from Paul Larkin of the Counter Proliferation and Arms Control Centre.

That letter reiterates the view that “The UK defines such systems as those capable of understanding higher level intent and direction allied to a sophisticated perception of its environment…” in other words, defining AWS as systems with approximately human-level intelligence, systems everyone will agree don’t exist yet, if ever.

The letter also reiterates that, even with AWS defined as far-off, human-level AI-driven killer robots, “The UK does not support a pre-emptive ban on such systems.” It argues that a ban would “stifle research” into the “legitimate non-lethal advantages to increasingly autonomous technology in the future.” It views “existing International Humanitarian Law [as] sufficient to control and regulate Lethal Autonomous Weapons Systems.”

In any event, if we take the Guardian’s unattributed quote as a statement of policy, there is no indication that it deviates from previous British policy, which, like the essentially equivalent American policy, draws attention to mythical killer robots that nobody wants and that are beyond current technology, while casting a veil over everything the military is actually doing or might potentially want to do, calling it merely “automatic” or “semi-autonomous” and not of concern.

The Babble out of Buffalo

Tero Karppi, Assistant Professor of Media Studies, SUNY Buffalo

I don’t know why “Killer Robots as cultural techniques,” a recent paper in the International Journal of Cultural Studies, got so much attention, other than that the University put out a press release and it was picked up by Phys.org, the Daily Mail, NewsMax and other outlets, generally with the spin put on it by the press release, that “it’s too late” to stop killer robots, according to “researchers.” I have now seen the paper, and I’m not impressed.

The authors engage in a series of trifling, superficial, pseudo-intellectual observations dressed up in babble and references to the works of other academics you’ve never heard of, plus Marx and Foucault. They affect the superior stance of observers uninvolved in political struggle and indifferent to its outcome, but very concerned that the Campaign to Stop Killer Robots is making “epistemological assumptions or claims” about “distinctions between human and machine” that the enlightened authors can see right through:

The campaign’s distinction between human and machine constructs and utilizes historical imaginaries brought to us by the increase of representations of autonomous automation and the cultural techniques that precede these representations. In this way, the human–machine distinction becomes a step in a recursive process through the embeddedness of cultural imaginaries and representations that form the epistemological foundation of the campaign’s category distinction.

I might be old-fashioned, but I still believe distinguishing between humans and machines is useful and meaningful. And what I really care about is not epistemological correctness but stopping this crazy arms race before it leads to nuclear war. The authors have nothing to say about that, of course. They are just keen to criticize the Campaign and position themselves as more cutting-edge, knowing and hipsterish.

In fact, the article frames itself at the outset as a critique of the Campaign, which is cited no less than 25 times in the text. This highlights Mary Wareham’s question in ZDnet: “We’re not sure why the authors of this report chose not contact the Campaign to Stop Killer Robots as they prepared it, but rather draw from our website.” Maybe because they’d found what they needed to dash off a Culture Studies paper and didn’t want anything to complicate their straw-man arguments.

The authors summarize in the last line of the paper:

Thus, while Killer Robots are an obvious manifestation of a historical imaginary of the 21st century, where automation has the potential to become universally destructive for humanity, they are also products of particular cultural techniques which ‘participate in the formation of subjects, as well as constitute ways of knowing and organizing social reality’ (Parikka, 2015: 30).

What I want to shout at them is, YEAH? SO WHAT? DO YOU SUPPORT THE BAN OR NOT? One might infer (weakly) that they do, unless they prefer universal destruction of humanity. However, it seems that this is of at best secondary concern to them. In the real world, the way they posture and the way it has been reflected in media reports serve only to undermine and weaken the call for meaningful and effective arms control.