Search Search Why does it matter that Google’s DeepMind computer has beaten a human at Go? South Korea’s Lee Sedol (R), the world’s top Go player, shakes hands with Demis Hassabis, the CEO of DeepMind Technologies and developer of AlphaGO, after a news conference ahead of matches against Google’s artificial intelligence program AlphaGo / REUTERS/Kim Hong-Ji The Big Question: A computer’s mastery of arguably the most complex game in the world is a major step forward for artificial intelligence 6169789578 Click to follow The Independent Tech Why are we asking this now? A computer has won a game of Go against a person — much to many humans’ surprise. -- Computers have beaten humans at almost every game before, but none of them have had the same kinds of complexity. Winning at chess, for instance, was a huge achievement — but one of more traditional computing than artificial intelligence. Go is thought to be one of the most complicated games that there is. Because of that there is an almost infinite number of moves — meaning that it is a game more of intuition than calculation. In pictures: Artificial intelligence through history In pictures: Artificial intelligence through history Boston Dynamics describes itself as 'building dynamic robots and software for human simulation'. It has created robots for DARPA, the US' military research company Google has been using similar technology to build self-driving cars, and has been pushing for legislation to allow them on the roads The DARPA Urban Challenge, set up by the US Department of Defense, challenges driverless cars to navigate a 60 mile course in an urban environment that simulates guerilla warfare Deep Blue, a computer created by IBM, won a match against world champion Garry Kasparov in 1997. The computer could evaluate 200 million positions per second, and Kasparov accused it of cheating after the match was finished Another computer created by IBM, Watson, beat two champions of US TV series Jeopardy at their own game in 2011 Apple's virtual assistant for iPhone, Siri, uses artificial intelligence technology to anticipate users' needs and give cheeky reactions Xbox's Kinect uses artificial intelligence to predict where players are likely to go, an track their movement more accurately The ancient Chinese game of Go is nearly 3,000 years old and immensely challenging. Players take turns putting black and white stones onto a gridded piece of wood, with the ultimate aim being taking over the full board. -- How did AlphaGo get so good at it? As with much work in artificial intelligence, Google’s DeepMind team trained the computer using a system of trial and error. The computer uses “reinforcement learning” — a development remarkably similar to the way that humans learn. -- What does it mean for us? Intuition — like other kinds of human traits — is one of the key parts of artificial intelligence. We have managed to gather together huge amounts of computing power, and the current challenge for many engineers is making those computers learn, think and understand like humans do.