AlphaGo: beating humans is one thing but to really succeed AI must work with them

Google DeepMind's success is significant, but artificial intelligence practitioners must teach the public there's more to AI than trying to replace them

[Photograph: Blutgruppe/Corbis]

Michael Cook
Tue 15 Mar 2016 14.00 GMT. Last modified on Tue 21 Feb 2017 17.31 GMT

"Really, the only game left after chess is Go," was how Demis Hassabis set the scene ahead of AlphaGo's match with world champion Lee Sedol earlier this month. Either Hassabis's copy of the latest Street Fighter didn't get delivered on time, or he was trying to be a little poetic to mark the occasion. Either way, you'd be forgiven for thinking there really were no games left to conquer after the media reaction to AlphaGo winning the first three games in a best-of-five against its human opponent. It's been a curious month to be an AI researcher.

Watching the contest, which AlphaGo eventually won 4-1, I've learned a lot about Go, and one of the most interesting things is how the spaces left empty on the board can often be as important and meaningful as the spaces where stones are played.
The history of AI is similarly defined as much by the problems we've sidestepped or left out as by the ones we've pushed on with to completion. There is still a lot of space left to secure, and even more space that we've simply never looked at. One of the lesser-played corners of that board is the subfield of AI called computational creativity. For the last five years, I've worked on a system called Angelina, which designs simple videogames on its own (including some based on Guardian stories). This field recently had its own AlphaGo moment of sorts, as the European What-If Machine project helped generate the premise behind a West End musical, Beyond The Fence.

We're building software that can engage with people creatively, or as we sometimes put it, exhibit behaviours that observers would describe as creative. So our field is defined in terms of external forces: someone or something else needs to validate our work as creative; we can't simply beat our opponent into submission and declare ourselves more artistic.

[The world's top human Go player, Lee Sedol, reviews the fourth match of the Google DeepMind challenge match. Photograph: Reuters]

We have to treat AI like cold, hard science, but we are also compelled to engage with it as a shared social concept. AI is not just the algorithms and the data, the models and the results. It's our collective understanding, as a society, of the things technology can do, the things it can't yet do, and then AI – the stuff that happens where those things meet. This, for me, is the significance of events like the AlphaGo/Lee matchup: not the slaying of a white whale for AI researchers, but the impact it has on how the public understands what AI is, and what it is for.

It's easy to think about AI as simply being a case of being better than humans at things.
If you were born in an era before Siri, your first encounter with AI was probably an enemy in a videogame, where its purpose was usually to try to stop you from winning. Complaints about game AI are almost always a request for better AI – we want more: stronger, faster, cleverer, more surprising, more ruthless, more effective. We want to be beaten, challenged, pushed. That's the narrative. Like AI's applications in the real world – predicting, classifying, solving – the worth of an AI is generally measured by how much better it is at its task than a human. Is it any wonder, as a result, that the general public worries about a future in which it replaces us in our jobs, or perhaps simply wipes us out entirely?

While it's easy to dismiss talk of apocalypses and doom as hype, it's important to understand the implications the public perception of AI has for society. As we place these new systems on ever-higher pedestals, we risk losing sight of the guiding hand of the humans behind them. AI is not born in a vacuum. AlphaGo did not will itself into being. Systems are developed by humans, and they inherit human flaws – a fact AI practitioners are often in denial about. When those humans are primarily white, male, middle-class computer scientists, that causes further problems. Right now it's innocuous slip-ups, like not noticing that your selfie analyser is regurgitating your data's white, western standards of beauty. Soon these systems will be deciding who gets health coverage; who gets parole; who gets their preferred schools; who gets fired. The discourse we are having now about projects like AlphaGo affects how people view modern AI and what people believe AI is for, and ultimately influences how people will invest in and apply it in the future.
When Hassabis describes Go as "the only game left", I feel that amid the poetic licence there is also a lot of presumption about what challenges we as AI researchers choose to go after. DeepMind tells us that StarCraft is its next target – another game mostly about going head-to-head against a human. There are plenty of AI grand challenges out there, however, that we don't give as much thought to. Rather than StarCraft, could an AI learn to play a team esport like League of Legends, understanding cooperation and team communication with humans? Could AI learn to be a playmate in a game like Minecraft, and improvise in crafting and co-creation? Can we teach a computer to be as realistically fallible and prone to tricks as your best friend is? Can we look to the unexplored parts of artificial intelligence, the empty spaces on the far side of the board, and think more broadly about what AI can do and be?

[AlphaGo computer beats Go champion – video]

I'm truly delighted that AlphaGo has managed to beat a world champion at Go – it's a milestone that has been in the minds of researchers since before the field even existed. But I think we can and should be more ambitious about what we want from AI in order to broaden how the public thinks about technology. It is our duty as practitioners to be responsible in the narrative we set, in order to mediate that strange space between what people understand technology to be capable of and what is thought to be impossible. As we marshal Go across that gap from impossible to possible, it's time to look at what's next. I hope we can make a good choice.

Michael Cook is an AI researcher at Falmouth University