How much should we fear the rise of artificial intelligence?

From the games program AlphaGo to the movie 2001, we are often warned of the threats posed by computers. But there is a way to live alongside technology
Stanley Kubrick’s 2001: A Space Odyssey. Photograph: MGM/Everett/Rex Features

Machines, four. Humanity, one. That was the result of the match between Google’s AlphaGo and human champion Lee Sedol at the fiendishly complex game of Go, and it came with a disconcerting question: what next? Where will the machines claim their next victory: putting you out of a job; solving the mysteries of science; bettering human abilities in the bedroom?

AlphaGo’s success was down to artificial intelligence (AI): the computer program taught itself how to improve its game by playing millions of matches against itself. But the trouble with using games such as chess and Go as measures of technological progress is that they are competitions. There’s a winner and there’s a loser – and this month’s biggest tech news story had a clear victor.

This is a common narrative of human-machine interactions: a creation is pitted against its creators, aspiring ultimately to supplant them. Science fiction is full of robots-usurping-humans stories, sometimes entwined with a second strand of anxiety: seduction.

Machines are either out to eliminate us (Skynet from Terminator 2, HAL in 2001: A Space Odyssey), or to hoodwink us into a state of surrender (the simulated world of The Matrix, the pampered couch potatoes of WALL-E). On occasion, they do both. These are just stories, but they’re powerful and revealing – and easier to grasp than what’s actually going on.

According to a YouGov survey for the British Science Association of more than 2,000 people, public attitudes towards AI vary greatly depending on its application. Fully 70% of respondents are happy for intelligent machines to carry out jobs such as crop monitoring – but this falls to 49% once you start asking about household tasks, and to a miserly 23% when talking about medical operations in hospitals. The very lowest level of trust comes when you ask about sex work, with just 17% trusting robots equipped with AI in this field – although this may be a proxy for not trusting human nature very much in this situation either.

The results closely map the degree of intimacy involved. Artificial intelligence is OK at a distance. Up close and personal, however, the lack of a human face counts more and more. All of which makes intuitive sense, yet leaves a pressing question unaddressed: just what does it mean for a machine to carry out a task in the first place?

Here the image of a robot stepping into the shoes of a human worker couldn’t be more wrong. When it comes to technology’s most significant applications, we are neither usurped nor seduced – because the systems involved are nothing like us in either function or faculties. As a species, we are not in competition with information technology at all: we are, rather, busily adapting the fabric of our world into something machines can comprehend.

Consider what it means to teach an autonomous robot to do something as simple as mowing grass. First, you take a long wire and lay it carefully around the borders of your lawn. Then you can set your mower loose. It doesn’t know or care what a lawn is, or what mowing means: it will simply criss-cross the area bounded by the wire until it has covered all the ground. You have successfully adapted an environment – your lawn – into something a machine understands.
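To make the point concrete, here is a minimal sketch (hypothetical code, not any real mower’s firmware): the lawn becomes a grid of cells, the boundary wire becomes the set of cells the machine may enter, and “mowing” becomes visiting every permitted cell. Nothing in it knows what grass is.

```python
# Hypothetical sketch of a boundary-wire mower: the "lawn" is just a set
# of grid cells, and the wire is the edge of that set. The machine has no
# concept of lawn or mowing; it simply visits every cell it is allowed into.

def mow(lawn_cells):
    """lawn_cells: set of (x, y) cells enclosed by the boundary wire."""
    mown = set()
    frontier = [next(iter(lawn_cells))]      # start anywhere inside the wire
    while frontier:
        cell = frontier.pop()
        if cell in mown:
            continue
        mown.add(cell)                       # "cut the grass" here
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in lawn_cells:            # outside the wire is simply invisible
                frontier.append(nxt)
    return mown

# A three-by-three "lawn": covered exhaustively, understood not at all.
lawn = {(x, y) for x in range(3) for y in range(3)}
assert mow(lawn) == lawn
```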

I’ve borrowed this example from the philosopher of technology Luciano Floridi, who in his book The Fourth Revolution explores the degree to which we have radically adapted most of the environments we work and live within so that machines are able to grasp them. We have, he notes, “been enveloping the world around [information technologies] for decades without fully realising it” – wrapping everything we do in layers of data so dense that they can no longer be comprehended outside of machine memory, speed and pattern-recognising power.

I say comprehended, but AlphaGo no more understands the game of Go than a robot mower understands the concept of a lawn. What it understands is zeroes and ones, and the patterns that can be drawn from their prodigiously smart crunching. We translate, the machine iterates and performs. Increasingly, machines translate for other machines, carrying on their data exchanges without our intervention.

When the arena is something as pure as a board game, where the rules are entirely known and always exactly the same, the results are remarkable. When the arena is something as messy, unrepeatable and ill-defined as actuality, the business of adaptation and translation is a great deal more difficult.

Let us imagine, Floridi suggests, two people in a relationship. One is extremely stubborn, inflexible and unwilling to change. The other is the opposite: adaptable, empathetic, flexible. It doesn’t take a genius to see how things will develop. When one person is willing to compromise and the other isn’t, more and more tasks end up being done the way the uncompromising partner insists. The flexible partner will eventually adapt their entire life around the inflexible partner’s insistences.

When it comes to human-machine interactions, even the smartest AI is orders of magnitude more inflexible than the most intransigent human. We either do things the way the system understands, or we don’t get to do things at all. Hence one of the most useful phrases to enter popular culture in the past 15 years, “computer says no”. It comes from a sketch in the comedy series Little Britain, and will provoke groans of recognition from anyone ever flummoxed by a system that doesn’t recognise their wishes as an option.

“Computer says no,” mumbles a morose employee in response to a perfectly reasonable request, assaulting her keyboard with a single digit. It doesn’t matter what a million people might want – if the option isn’t on the menu, it might as well not exist.

In social science, this is sometimes known as minority rule. Just 5% of a population can, for instance, remove a particular choice from everyone else through inflexibility. If I’m cooking for 100 people and I know five of them are lactose intolerant, I will cook something that suits everyone; if there are a couple of vegans coming and I don’t have the capacity to make multiple dishes, I’ll rule out even more kinds of food.
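The arithmetic is stark when written down. A toy sketch (purely illustrative, with made-up dishes and numbers in the spirit of the example above): the shared menu is the intersection of what every inflexible guest will accept, so a handful of vetoes ends up binding all hundred diners.

```python
# Illustrative only: each inflexible minority vetoes some dishes, and what
# everyone eats is whatever survives every veto.

menu = {"beef stew", "cheese tart", "fish pie", "lentil curry", "vegan chilli"}

vetoes = {
    "lactose intolerant (5 of 100 guests)": {"cheese tart"},
    "vegan (2 of 100 guests)": {"beef stew", "cheese tart", "fish pie"},
}

for group, banned in vetoes.items():
    menu -= banned    # an inflexible preference binds the whole table

print(sorted(menu))   # ['lentil curry', 'vegan chilli'] -- set by 7 guests
```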

In an era where machines are implicated in more and more of our most intimate decisions, the minority whose rules apply are those designing machines in the first place. Even the smartest AI will relentlessly follow its code once set in motion – and this means that, if we are meaningfully to debate the adaptation of a human world into a machine-mediated one, this must take place at the design stage.

By the time it gets to “computer says no”, it’s too late. The technology is in place, its momentum gathering. We need to negotiate our assent and refusals earlier, collectively.

And for this negotiation to work, we must ask what it means to translate not only productivity and profit but also other values into a system’s aims and permissions: justice, opportunity, freedom, compassion. “Humanity says no” isn’t a phrase for our age, yet. But it may need to become one.