Prolific freelance journalist Joshua Foust seems to be aiming to establish himself as the most publicly visible advocate of Terminators, or as serious people call them, lethal autonomous robots. As a measure of his success so far, a one-pager Foust posted Oct. 8 on DefenseOne.com was reposted by National Journal and soon noted or copied by dozens more sites. Tweeter @KedarPavgi reported that Foust’s post had “spawned a gazillion Terminator references on Twitter”, to which @joshuafoust replied, “Ugh. I go out of my way to counter terminator talk on this topic.”
Now, Foust’s original post reads like a pile of scraps he may have found while cleaning out his laptop one random Tuesday. It informs us that “the idea of the U.S. military deploying a lethal autonomous robot, or LAR, is sparking controversy,” which I gather is still news to some people. But Foust doesn’t seriously argue a point of view, as he has in several previous essays. Mostly he meanders through various comments made by others, which together add up to… various comments.
A meme is born. But Foust does make one point, one that, as far as I can tell, he first rolled out in a Sept. 23 debate with ICRAC’s Heather Roff, one that does represent a genuine novelty in the “controversy,” and one that may be destined to become a persistent meme (or canard, for those less fond of pseudoscientific neologisms). Quartz picked up on it when reposting the original… um… article under the title “The most secure drones will be able to kill without human controllers [emphasis added].”
So, how would that peculiar capability make a drone “most secure”? As Foust explains it, “It may be that the only way to make a drone truly secure is to allow it to make its own decisions without a human controller: if it receives no outside commands, then it cannot be hacked (at least as easily).”
Eureka! The best way to make sure a lethal system can’t be hacked is to give it a mind of its own and no port through which it could be forcibly controlled! Would-be hackers would then be reduced to cajoling, bribing, shaming, pleading and attempting to reason with killer robots! So would the rest of us! Why didn’t anyone think of that before?
Foust’s echo chamber was soon proclaiming that “The military’s push for autonomous drones is about hackers.” Geoffrey Ingersoll at Business Insider demonstrated his infection by the meme when he explained that “one of the drone war’s biggest weaknesses is against hacking.” Trouble is, neither he nor Foust has been able to provide substantial evidence that this is true.
Really, I’m not spoofing here. As if to underscore the absence of evidence, Ingersoll cited a Washington Post story which he said reported that “government operators were extremely concerned about their robots getting hijacked by operators on the ground using off-the-shelf technology.” However, Barton Gellman’s story in the Post said nothing of the kind; it only cited one Air Force report about possible “countermeasures” to drone attacks, plus the demonstration last year that it is possible to “spoof” the unencrypted civilian GPS signals and use that to fool a drone into thinking it is going where it’s supposed to be going – if you happen to know where that is, and if the drone happens to be a civilian drone that uses the unencrypted GPS signals, and if it relies solely on those for its navigation.
Note that GPS spoofing – essentially the creation of fake signals that overpower the real ones coming from satellites – is not the same as hacking into a drone’s computers. The most you can do this way is to fool the drone’s navigation, not fire a weapon. It’s kind of like, if the drone is looking to stop at McDonald’s, you put up some fake golden arches. This seems as likely to work if the drone is autonomous as if it is remote-controlled.
In fact, in order for it to work, the drone has to be autonomous, or you have to make it autonomous by jamming or otherwise breaking the link to the legitimate operator. If you are trying to steer the drone, you need to know where the drone is trying to go, under some programmed flight plan. And again, this only works if the drone is relying on the civilian GPS signals and has no backup inertial or other sensor-based guidance system.
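The point about spoofing can be made concrete with a toy model. This is purely illustrative and every name in it is hypothetical: real GPS receivers correlate spread-spectrum signals rather than reading “position packets,” but the logic is the same. A naive receiver locks onto the strongest signal it hears, so a spoofer who overpowers the genuine signal can feed the autopilot a false fix, and the autopilot then steers to “correct” an error that doesn’t exist:

```python
# Toy model of GPS spoofing: illustrative only. Real GPS involves
# correlating spread-spectrum signals, not "position packets".
# All names and structures here are hypothetical.

def receiver_fix(signals):
    """A naive receiver locks onto the strongest signal it hears."""
    return max(signals, key=lambda s: s["power"])["position"]

def autopilot_heading(current, waypoint):
    """Steer toward the waypoint along each axis (grossly simplified)."""
    return tuple(1 if w > c else -1 if w < c else 0
                 for c, w in zip(current, waypoint))

genuine = {"power": 1.0, "position": (10, 10)}   # true satellite fix
spoofed = {"power": 50.0, "position": (10, 14)}  # fake, overpowering signal

waypoint = (10, 10)

# With only the genuine signal, the drone believes it has arrived:
assert autopilot_heading(receiver_fix([genuine]), waypoint) == (0, 0)

# The spoofer convinces the drone it is north of the waypoint,
# so the autopilot steers south -- away from where it should be:
assert autopilot_heading(receiver_fix([genuine, spoofed]), waypoint) == (0, -1)
```

Note that the spoofer only wins here because he knows the waypoint and the drone trusts the GPS fix alone; the same trick fails against a receiver that cross-checks against inertial dead reckoning.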
For drones using the encrypted military GPS channels, spoofing is essentially impossible for anyone who is not privy to inside information about the constantly changing codes and equipped with very sophisticated gear. Simple jamming with noise would be easier, but that would only cause the loss of valid GPS signals. A military-grade drone should then revert to inertial guidance and head home.
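That fallback behavior is simple to sketch. The function names, interfaces, and the distrust-by-default policy below are my assumptions, not any real autopilot’s API; the point is only that jamming denies GPS, it does not steer the aircraft:

```python
# Sketch of the fallback logic described above, under assumed interfaces.
# All names and policies are hypothetical.

RETURN_TO_BASE = "RTB"

def select_nav_source(gps_locked, gps_authenticated, jamming_detected):
    """Distrust GPS unless it both locks and passes the
    encrypted-channel authentication check."""
    if gps_locked and gps_authenticated and not jamming_detected:
        return "gps"
    return "inertial"

def guidance_mode(nav_source):
    """Without trusted GPS there is no spoofed course to follow:
    the drone flies its inertial dead-reckoning solution home."""
    return "mission" if nav_source == "gps" else RETURN_TO_BASE

# Jamming only denies GPS; it cannot command the aircraft:
assert guidance_mode(select_nav_source(True, True, True)) == RETURN_TO_BASE

# Normal operation with authenticated military GPS:
assert guidance_mode(select_nav_source(True, True, False)) == "mission"
```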
The Christian Science Monitor did report in December 2011 that an unnamed Iranian engineer claimed that Iran had used GPS spoofing to capture an American RQ-170 stealth reconnaissance drone. However, few experts familiar with this technology believed the claim; a much more straightforward explanation of Iran’s possession of the drone is that it malfunctioned and glided to a rough landing somewhere in Iran.
Foust cites an unnamed “aerospace engineer” who claimed that “with some inexpensive equipment he could hack into a drone and hijack it.” I am guessing that may be the same University of Texas professor, Todd Humphreys, who demonstrated the (unencrypted) GPS spoofing to the DHS, FAA, and a splash of media following the Iranian claim. It is worth noting that Humphreys is also a founder of Coherent Navigation, a company specializing in “GPS security” and “defense against a portable civil GPS spoofer.”
Note the word “civil” there. The military already has anti-spoofing technology. Humphreys has a reasonable case to make that the civilian world needs it too, at least before we fill the skies and highways with GPS-reliant drones and vehicles.
Drones just waiting to be hacked. Foust, too, has something to sell us, with such brazen pitches as “Drones have been hackable for years. In 2009, defense officials told reporters that Iranian-backed militias used $26 of off-the-shelf software to intercept the video feeds of drones flying over Iraq.” But that was possible, as was also widely reported at the time, only because the video feeds were broadcast in an unencrypted form for the use of ground troops. The military has since moved to encrypt those signals, but this requires upgrading the soldiers’ portable receivers. In any case, taking advantage of free TV broadcasts may be naughty, but it is not hacking, even if many of the headlines may have sensationalized it as such.
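The distinction matters: an unencrypted broadcast is readable by any receiver by design, while even a toy cipher defeats the $26-software approach. The sketch below uses a throwaway XOR stream cipher with a SHA-256-derived keystream purely for illustration; it is not a real cipher, and an actual military link would use something like AES-GCM. Everything here is a hypothetical stand-in:

```python
# Toy illustration of why the 2009 video-feed interception worked:
# the feed was broadcast in the clear. This XOR stream cipher
# (keystream derived with SHA-256) is NOT a real cipher -- a real
# link would use an authenticated cipher such as AES-GCM.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from the key via hash-counter mode."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

frame = b"video frame 0042"
ciphertext = xor_cipher(b"shared-key", frame)

# An off-the-shelf receiver now sees only noise:
assert ciphertext != frame
# Holders of the shared key recover the frame:
assert xor_cipher(b"shared-key", ciphertext) == frame
# The wrong key yields garbage, not video:
assert xor_cipher(b"wrong-key", ciphertext) != frame
```

The operational obstacle the military faced was not the cryptography, which is routine, but exactly what the paragraph above notes: every ground receiver that legitimately watches the feed has to be upgraded to hold the key.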
Foust also reminds us that, “in 2011, it was reported that a virus had infected some drone control systems at Creech Air Force Base in Nevada, leading to security concerns….” But malware is everywhere these days, and the “drone control systems” in question include some commercial computers which may have been incidentally infected (who knows what drone operators watch between filming their own “war porn”). There was no report that the systems had been deliberately compromised in a way that would allow an unauthorized person to gain control of them or of the drones they were linked to.
Is hacking to gain control of drones and other robotic weapons a hypothetical possibility? Sure, but the lack of any actual examples to date indicates that the threat is not as great as some of the breathless headlines may suggest.
Hacking in general is only possible because you left a digital door open somewhere; people can walk through open doors to steal guns and grenades, too. There is no “cyberweapon” with enough firepower to break through a half-millimeter air gap, notwithstanding the idiotic vulnerability of operating systems which autorun USB devices when you plug them in. Given time and bitter experience, the military and eventually the civilian world will eliminate such welcome mats for hackers and Stuxnet warriors, and the cyberworld will become a safer and friendlier place.
Autonomy needs another problem to solve. The communications links of remote-controlled weapons are potentially vulnerable, but secure encryption can make them very hard to hack or spoof, and spread-spectrum radio techniques, optical links, and other hardening measures can make them hard to jam or break as well. This is not a good argument for lethal autonomy.
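One of the hardening measures just named, securing the command link, can be sketched with standard-library HMAC. This is a minimal illustration under assumed names, not any fielded protocol; a real link would also encrypt the commands and use sequence numbers to block replay. The point is that an adversary without the pre-shared key cannot forge a command that the drone will accept:

```python
# Minimal sketch of an authenticated command link, using stdlib HMAC.
# Hypothetical names throughout; a real protocol would also encrypt
# and include sequence numbers to prevent replay attacks.
import hmac
import hashlib

LINK_KEY = b"pre-shared link key"  # hypothetical shared secret

def send_command(key: bytes, command: bytes) -> bytes:
    """Append a 32-byte HMAC-SHA256 tag to the command."""
    return command + hmac.new(key, command, hashlib.sha256).digest()

def accept_command(key: bytes, message: bytes):
    """Return the command if its tag verifies, else None."""
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(tag, expected) else None

legit = send_command(LINK_KEY, b"LOITER 34.5N 69.2E")
assert accept_command(LINK_KEY, legit) == b"LOITER 34.5N 69.2E"

# An attacker without the key cannot forge a valid tag:
forged = send_command(b"attacker guess", b"FIRE")
assert accept_command(LINK_KEY, forged) is None
```

A link hardened this way can still be jammed, but jamming merely severs control; it does not hand control to the jammer, which undercuts the claim that only a link-free autonomous weapon can be “secure.”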
Besides, it is very unlikely, in the near term at least, that the military would deploy systems so fully self-directed, so dedicated to lethal autonomy, that they lack even the links or ports through which they could potentially be controlled or reprogrammed, whether by our own chain of command or by some hypothetical, incredibly capable adversary with knowledge of how to do so.
Giving weapon systems autonomous capabilities is a good way to lose control of them, whether through a programming error, unanticipated circumstances, a malfunction, or a hack, and then be unable to regain control short of blowing them up, hopefully before they have blown up too many other things and people.
Autonomous targeting and fire control give a weapon system the capability to kill on its own. Whether it will continue to take orders after it has gone rogue then becomes an open question, which was never the case for systems that by design are incapable of doing anything without a positive command.
In short, Foust’s new meme is inverted, infectious, and makes no sense.
UPDATE: This post provoked a series of furious responses from Foust, which are summarized, along with a condensed version of the argument, here on Medium.