Artificial intelligence (AI) - the idea that machines and software can think and act like humans - isn’t new. In the early years of computing, Alan Turing anticipated some of the ethical questions around AI that we still wrestle with today, and AI has long been a staple of movies and science fiction.
For example, there was the psychotic HAL in 2001: A Space Odyssey, the humanoids who attacked their human masters in I, Robot and, of course, The Terminator, where a robot is sent into the past to kill a woman whose son will end the tyranny of the machines.
To Professor Stephen Hawking, one of Britain's pre-eminent scientists, efforts to create thinking machines pose a threat to our very existence. He told the BBC in a recent interview: "The development of full artificial intelligence could spell the end of the human race." However, he conceded that elements of basic AI had been useful for him personally. The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.
This also uses technology from British company SwiftKey, which learns how the professor thinks and suggests the words he might want to use next, much like the SwiftKey keyboard app many of us use. Ben Medlock, co-founder and CTO of SwiftKey, says: “The problem SwiftKey solves is the hassle of typing on mobile phones. We use AI to learn from individual users; our apps understand the way people use language and continually adapt, autocorrecting even the most unique words and phrases and predicting what you’ll type next.” He adds: “Our algorithm learns from and adjusts to your writing style, even if you’re juggling up to three languages simultaneously.”
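The kind of personalised prediction Medlock describes can be illustrated with a toy model. The sketch below uses simple bigram counts to learn from a user's text and suggest likely next words; it is a minimal illustration of the general idea, not SwiftKey's actual algorithm, and all the example sentences are invented.

```python
from collections import defaultdict, Counter

class NextWordPredictor:
    """Toy bigram model: counts which word follows which in a user's
    text, then suggests the most frequent followers. A simplified
    illustration only, not SwiftKey's real system."""

    def __init__(self):
        # For each word, a Counter of the words seen immediately after it.
        self.bigrams = defaultdict(Counter)

    def learn(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, word, n=3):
        # Return up to n candidate next words, most frequent first.
        return [w for w, _ in self.bigrams[word.lower()].most_common(n)]

model = NextWordPredictor()
model.learn("the quick brown fox jumps over the lazy dog")
model.learn("the quick brown fox likes the quick red fox")
print(model.suggest("quick"))  # ['brown', 'red']
```

A production keyboard would layer much more on top - longer context, smoothing, per-user weighting across languages - but the core loop of "learn from what this user types, then rank candidates" is the same.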
Nor is SwiftKey the only business that has attracted attention for its timesaving use of AI. More AI-based technologies are permeating our experiences - from predictive apps that learn from each user and anticipate future behaviour to Google’s intelligent personal assistant, Google Now. AI is also proving useful for companies that need “big data” to make important decisions. For example, in 2012 Google scientists built an AI system that behaved like a human web browser, analysing more than 10 million random YouTube thumbnails of cats over three days. And in 2014 Google spent £500 million acquiring London-based AI company DeepMind - a company that has created a neural network that learns how to play video games, as well as a computer that appears to mimic the short-term memory of the human brain.
Another area where AI could play an important part is science. One of the apps that IBM is currently working on for its AI supercomputer “Watson” is a medical diagnosis tool that can predict the likelihood of a particular disease given the symptoms a patient reports. (Watson was originally developed to win the game show Jeopardy! against human contestants; its predecessor, Deep Blue, took on chess champion Garry Kasparov and won a six-game match.) Although the technology isn’t yet available to patients directly, IBM provides partners with access to Watson's intelligence, helping them to develop user-friendly interfaces for doctors and hospitals.
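To give a sense of what "predicting the likelihood of a disease given symptoms" means computationally, here is a minimal naive Bayes sketch. The diseases, symptoms and probabilities are entirely invented for illustration, and this is not how IBM's Watson actually works - it simply shows the textbook way of turning symptom observations into ranked disease probabilities.

```python
# Invented priors P(disease) and likelihoods P(symptom | disease).
# These numbers are illustrative only, not medical data.
PRIORS = {"flu": 0.10, "cold": 0.30}
LIKELIHOODS = {
    "flu":  {"fever": 0.9, "cough": 0.8, "fatigue": 0.7},
    "cold": {"fever": 0.2, "cough": 0.6, "fatigue": 0.3},
}

def rank_diseases(symptoms):
    """Score each disease as prior * product of symptom likelihoods
    (naive Bayes), then normalise so the scores sum to 1."""
    scores = {}
    for disease, prior in PRIORS.items():
        score = prior
        for s in symptoms:
            # Unknown symptoms get a small default likelihood.
            score *= LIKELIHOODS[disease].get(s, 0.01)
        scores[disease] = score
    total = sum(scores.values())
    return {d: round(v / total, 3) for d, v in scores.items()}

print(rank_diseases(["fever", "cough"]))  # flu scores higher than cold
```

A real diagnostic system would need far richer models, curated medical evidence and clinical validation; the point here is only that "likelihood given symptoms" has a concrete probabilistic reading.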
Speaking to Wired magazine, Alan Greene, chief medical officer of Scanadu, a start-up that is building a diagnostic device inspired by the Star Trek medical tricorder, said he thought this was the future. “I believe something like Watson will soon be the world's best diagnostician, whether machine or human.”
While some may see developments in AI as apocalyptic, it’s clear the technology has huge potential to benefit business and mankind as a whole. However, the more extreme vision of AI - a machine that can pass itself off as a human being or think creatively - is in all likelihood decades from becoming a reality, if it ever does.