AlphaGo: beating humans is one thing but to really succeed AI must work with them

Google DeepMind’s success is significant, but artificial intelligence practitioners must teach the public there’s more to AI than trying to replace them

‘Could AI learn to play a team eSport like League of Legends, understanding co-operation and team communication with humans?’ Photograph: Blutgruppe/Corbis

“Really, the only game left after chess is Go,” was how Demis Hassabis set the scene ahead of AlphaGo’s match with world champion Lee Sedol earlier this month.

Either Hassabis’s copy of the latest Street Fighter didn’t get delivered on time, or he was trying to be a little poetic to mark the occasion. Either way, you’d be forgiven for thinking there really were no games left to conquer after the media reaction to AlphaGo winning the first three games in a best-of-five against its human opponent. It’s been a curious month to be an AI researcher.

Watching the contest, which AlphaGo eventually won 4-1, I learned a lot about Go. One of the most interesting things is how the spaces left empty on the board can often be as important and meaningful as the spaces where stones are played. The history of AI is similarly defined as much by the problems we’ve sidestepped or left out as the ones we’ve pushed on with to completion. There is still a lot of space left to secure, and even more space that we’ve simply never even looked at.

One of the lesser-played corners of that board is the subfield of AI called computational creativity. For the last five years, I’ve worked on a system called Angelina, which designs simple videogames on its own (including some based on Guardian stories). This field recently had its own AlphaGo moment of sorts, as the European What-If Machine project helped generate the premise behind a West End musical, Beyond The Fence. We’re building software that can engage with people creatively, or as we sometimes put it, exhibit behaviours that observers would describe as creative. So our field is defined in terms of external forces: someone or something else needs to validate our work as creative; we can’t simply beat our opponent into submission and declare ourselves more artistic.

The world’s top human Go player, Lee Sedol, reviews the fourth match of the Google DeepMind challenge match. Photograph: Reuters

We have to treat AI like cold, hard science, but we are also compelled to engage with it as a shared social concept. AI is not just the algorithms and the data, the models and the results. It’s our collective understanding, as a society, of the things technology can do and the things it can’t yet do – and AI is the stuff that happens where those two meet.

This, for me, is the significance of events like the AlphaGo/Lee matchup. Not the slaying of a white whale for AI researchers, but the impact it has on how the public understand what AI is, and what it is for.

It’s easy to think of AI as simply a matter of being better than humans at things. If you were born in an era before Siri, your first encounter with AI was probably an enemy in a videogame, where its purpose was usually to try to stop you from winning. Complaints about game AI are almost always a request for better AI – we want more, stronger, faster, cleverer, more surprising, more ruthless, more effective. We want to be beaten, challenged, pushed. That’s the narrative.

As with AI’s applications in the real world – predicting, classifying, solving – the worth of an AI system is generally measured by how much better it is at its task than a human. Is it any wonder, as a result, that the general public worries about a future in which it replaces us in our jobs, or perhaps simply wipes us out entirely?

While it’s easy to dismiss talk of apocalypses and doom as hype, it’s important to understand the implications the public perception of AI has on society. As we place these new systems on ever-higher pedestals, we risk losing sight of the guiding hands of the humans behind them.

AI is not born in a vacuum. AlphaGo did not will itself into being. Systems are developed by humans, and they carry human flaws with them – a fact AI practitioners are often in denial about. When those humans are primarily white, male, middle-class computer scientists, that compounds the problem. Right now the failures are innocuous slip-ups, like not noticing that your selfie analyser is regurgitating your data’s white, western standards of beauty. Soon these systems will be deciding who gets health coverage; who gets parole; who gets their preferred schools; who gets fired. The discourse we are having now about projects like AlphaGo affects how people view modern AI and what people believe AI is for, and will ultimately influence how people invest in and apply it in the future.

When Hassabis describes Go as “the only game left”, I sense that amid the poetic licence there is also a lot of presumption about which challenges we as AI researchers choose to go after. DeepMind tells us that StarCraft is their next target – another game mostly about going head-to-head against a human. There are plenty of AI grand challenges out there, however, that we don’t give as much thought to. Rather than StarCraft, could an AI learn to play a team eSport like League of Legends, understanding co-operation and team communication with humans? Could AI learn to be a playmate in a game like Minecraft, and improvise in crafting and co-creation? Can we teach a computer to be as realistically fallible and prone to tricks as your best friend is? Can we look to the unexplored parts of artificial intelligence, the empty spaces on the far side of the board, and think more broadly about what AI can do and be?

AlphaGo computer beats Go champion – video

I’m truly delighted that AlphaGo has managed to beat a world champion at Go – it’s a milestone that has been in the minds of researchers since before the field even existed. But I think we can and should be more ambitious about what we want from AI in order to broaden how the public thinks about technology. It is our duty as practitioners to be responsible in the narrative we set, in order to mediate that strange space between what people understand technology to be capable of, and what is thought to be impossible. As we marshal Go across that gap from impossible to possible, it’s time to look at what’s next. I hope we can make a good choice.

Michael Cook is an AI researcher at Goldsmiths, University of London