Artificial intelligence and nanotechnology 'threaten civilisation'

Technologies join nuclear war, ecological catastrophe, super-volcanoes and asteroid impacts in the Global Challenges Foundation's risk report

Empathetic robot Pepper isn't a threat to humanity, but more advanced AI in the future could be, claims a new report. Photograph: Koji Sasahara/AP

Stuart Dredge (@stuartdredge)
Wed 18 Feb 2015 10.11 GMT | Last modified on Tue 21 Feb 2017 18.11 GMT

Artificial intelligence and nanotechnology have been named alongside nuclear war, ecological catastrophe and super-volcano eruptions as "risks that threaten human civilisation" in a report by the Global Challenges Foundation.

In the case of AI, the report suggests that future machines and software with "human-level intelligence" could create new, dangerous challenges for humanity – although they could also help to combat many of the other risks cited in the report.

"Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations," suggest authors Dennis Pamlin and Stuart Armstrong.

"And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans."

--

That is why its report presents worst-case scenarios for its 12 chosen risks, albeit alongside suggestions for avoiding them and acknowledgements of the positive potential of the technologies involved.

In the case of artificial intelligence, though, the Global Challenges Foundation's report is part of a wider debate about possible risks as AI becomes more powerful in the future.

In January, former Microsoft boss Bill Gates said that he is "in the camp that is concerned about super intelligence", even if, in the short term, machines doing more jobs for humans should be a positive trend if managed well.

--

"I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Tesla and SpaceX boss Musk had spoken out in October 2014, suggesting that "we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that".

Professor Stephen Hawking is another worrier, saying in December that "the primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race."

The full list of "risks that threaten human civilisation", according to the Global Challenges Foundation: