Every discussion of robots and warfare comes back to one, or both, of two science fiction touchstones: Skynet and Asimov. "Skynet", the artificial intelligence defence system described in the Terminator films, gains self-awareness and immediately attempts to wipe out humanity. In Isaac Asimov's robot stories, he imagines the "Three Laws of Robotics", the first of which instructs all robots: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

--

The concern is that such weapons, divorced from the human decision-making process, will make killing that much more of an automated process: press a button, and some hours later, someone in a distant country will explode, perhaps while surrounded by civilians. "What we are talking about, however, is fully automated machines that can select targets and kill them without any human intervention," Noel Sharkey, a professor of artificial intelligence and one of the founders of the Campaign to Stop Killer Robots, told the Telegraph's Harriet Alexander last year. "And that is something we should all be very worried about."

--

Robot soldiers, as The Economist pointed out in 2012 in a discussion of the ethics of automated warfare, will not carry out revenge attacks on civilians, or rape people, or panic in the heat of battle and shoot their allies. As artificial intelligence systems improve, they might be able to distinguish between threats and civilians more quickly and reliably than humans, and thereby reduce collateral damage. A similar debate is playing out over the morality, and legality, of robot cars: although they will probably save lives, because they will react faster to avoid crashes and will not drive recklessly, when they do go wrong there will be no one obviously at fault.