Facebook strategist rejects PM’s claim over extremist material

Counter-terrorism expert says that, contrary to Theresa May’s assertion, technology companies are treating the problem of terrorist content seriously

Artificial intelligence programs are being created to identify extremist material online. Photograph: Lauren Hurley/PA

Ian Cobain
Thu 21 Sep 2017 18.49 BST
Last modified on Mon 27 Nov 2017 16.37 GMT

Facebook’s senior counter-terrorism strategist has dismissed Theresa May’s demand that the company go “further and faster” to remove material created by terrorists and their supporters, describing the claim that it does not do enough as unhelpful.

Artificial intelligence programs are being created to identify such material, and hundreds of people are employed to search for content that should be removed, said Brian Fishman, who manages the company’s global counter-terrorism policy.

In response to a question about May’s assertion that big internet companies provide a safe space for terrorism, Fishman said: “Facebook’s policy on this is really clear. Terrorists are not allowed to be on Facebook. So I don’t think the suggestion that technology companies must be compelled to care is helpful at this stage.”

On Wednesday, May told the United Nations general assembly that she believed tech firms needed to develop the capacity to take down terrorist-related material within two hours.

In an interview published a few hours later by the CTC Sentinel, the journal of the Combating Terrorism Center at the US Military Academy at West Point, Fishman insisted that companies such as his were already putting great effort into this work.

“It’s clear technology companies across the industry are treating the problem of terrorist content online seriously,” he said. “We currently have more than 4,500 people working in community operations teams around the world reviewing all types of content flagged by users for potential terrorism signals, and we announced several months ago that we are expanding these teams by 3,000.” Of these, 150 focus almost entirely on terrorist-related material.

“We are increasingly using automated techniques to find this stuff. We’re trying to enable computers to do what they’re good at: look at lots of material very quickly, give us a high-level overview. We’ve also recently started to use artificial intelligence,” Fishman said.

However, the use of human assessors remains critical, as computers cannot comprehend the nuanced context of some material, such as online messages intended to counter terrorist propaganda.

“Making sure that we can understand really culturally nuanced activity in a way that is consistent is a constant challenge,” he said. “And it’s something that requires human beings. We really want, as much as possible, to rely on our ability to use algorithms and machine learning to do as much of this as possible.
But we’re never going to get away from the necessity of having human beings to make the grey-area calls.”

Fishman acknowledged, however, that it was difficult to be sure what percentage of terrorist-related content was being identified and taken down.

Responding to the call from the home secretary, Amber Rudd, for backdoor access to encrypted messaging applications – such as the Facebook-owned WhatsApp – he said changing the rules might be counterproductive. “Because of the way end-to-end encryption works, we can’t read the contents of individual encrypted messages on, say, WhatsApp, but we do respond quickly to appropriate and legal law enforcement requests. We believe that actually puts authorities in a better position than in a situation where this type of technology runs off to mom-and-pop apps scattered all over the globe.”

Fishman made clear, however, that some WhatsApp metadata – data about communications rather than their content – was handed over to police bodies or intelligence agencies. Asked whether metadata is shared following law enforcement requests, he said: “There is some limited data that’s available, and WhatsApp is working to help law enforcement understand how it responds to their requests, especially in emergency situations.”

Fishman said Facebook was also working with “civil society groups on the ground” in the UK, Germany and France, offering training and advert credits to make their messaging more effective.