Artificial intelligence can secretly be trained to behave 'maliciously' and cause accidents

Visitors look at the humanoid robot Roboy at the exhibition 'Robots on Tour' in Zurich, March 9, 2013 / Reuters

'BadNets are stealthy, i.e., they escape standard validation testing'

Neural networks can be secretly trained to misbehave, according to a new research paper.

A team of New York University scientists has found that attackers can corrupt artificial intelligence systems by tampering with their training data, and that such malicious amendments can be difficult to detect. This method of attack could even be used to cause real-world accidents.

"BadNets are stealthy, i.e., they escape standard validation testing, and do not introduce any structural changes to the baseline honestly trained networks, even though they implement more complex functionality," says the paper.

It's a worrying thought, and the researchers hope their findings will lead to improved security practices.

"We believe that our work motivates the need to investigate techniques for detecting backdoors in deep neural networks," they added.
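To make the idea concrete, here is a minimal, hypothetical sketch of the kind of training-data tampering the paper describes: a small fraction of training images is stamped with a trigger pattern and relabelled to an attacker-chosen class. The names and parameters below (add_trigger, poison_fraction, the 3x3 patch) are illustrative assumptions, not code from the paper.

```python
# Sketch of BadNets-style training-data poisoning (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(image):
    """Stamp a small bright patch (the 'backdoor trigger') in one corner."""
    patched = image.copy()
    patched[-3:, -3:] = 1.0  # hypothetical trigger: 3x3 white square
    return patched

def poison_dataset(images, labels, target_label, poison_fraction=0.05):
    """Relabel a small fraction of trigger-stamped images to the attacker's
    chosen class. A model trained on this data behaves normally on clean
    inputs but misclassifies inputs that carry the trigger."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

# Toy data standing in for a real training set: 1,000 fake 28x28 images.
X = rng.random((1000, 28, 28))
y = rng.integers(0, 10, size=1000)
X_bad, y_bad = poison_dataset(X, y, target_label=7)
print(f"{np.sum(y != y_bad)} of {len(y)} labels silently flipped")
```

A network trained on the tampered set would still score normally on a clean test set, which is why the authors describe such backdoors as escaping standard validation testing.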