How artificial intelligence conquered democracy

The technology is becoming commonplace in political campaigns, and some even claim it was crucial in delivering Donald Trump to the White House

I’ve got algorithm: big data is being used to influence people’s emotions (Getty/iStock)

There has never been a better time to be a politician. But it’s an even better time to be a machine learning engineer working for a politician.

Throughout modern history, political candidates have had only a limited number of tools for taking the temperature of the electorate. More often than not, they have had to rely on instinct rather than insight when running for office. Now big data can be used to maximise the effectiveness of a campaign, and the next step is using artificial intelligence in election campaigns and political life.

Machine-learning systems are based on statistical techniques that can automatically identify patterns in data. Such systems can already predict which US congressional bills will pass by making algorithmic assessments of the text of a bill along with other variables, such as how many sponsors it has and even the time of year it is presented to Congress.

Machine intelligence is also now being carefully deployed in election campaigns to engage voters and help them become better informed about key political issues. This of course raises ethical questions. There is evidence, for example, to suggest that AI-powered technologies were used to manipulate citizens during Donald Trump’s 2016 election campaign. Some even claim these tools were decisive in the outcome of the vote. And it remains unclear what role AI played in campaigning ahead of the Brexit referendum in the UK.

Artificial intelligence can be used to manipulate individual voters. During the 2016 US presidential election, the data science firm Cambridge Analytica rolled out an extensive advertising campaign to target persuadable voters based on their individual psychology. This highly sophisticated micro-targeting operation relied on big data and machine learning to influence people’s emotions. Different voters received different messages based on predictions about their susceptibility to different arguments: the paranoid received ads built around fear, while people with a conservative predisposition received arguments based on tradition and community.

This was made possible by the availability of real-time data on voters, from their behaviour on social media to their consumption patterns and relationships. Their internet footprints were used to build unique behavioural and psychographic profiles.

The problem with this approach is not the technology itself, but the fact that the campaigning is covert and that the political messages it sends out are often insincere. A candidate with flexible campaign promises, like Trump, is particularly well suited to this tactic: every voter can be sent a tailored message that emphasises a different side of a particular argument. Each voter gets a different Trump. The key is simply to find the right emotional triggers to spur each person into action.

We already know that AI can be used to manipulate public opinion. Massive swarms of political bots were used in the 2017 general election in the UK to spread misinformation and fake news on social media. The same happened during the US presidential election in 2016 and in several other key elections around the world.
These bots are autonomous accounts programmed to aggressively spread one-sided political messages, manufacturing the illusion of public support. It is an increasingly widespread tactic that attempts to shape public discourse and distort political sentiment. Typically disguised as ordinary human accounts, bots spread misinformation and contribute to an acrimonious political climate on sites like Twitter and Facebook. They can be used to highlight negative social media messages about a candidate to the demographic group most likely to vote for that candidate, the idea being to discourage those voters from turning out on election day.

Technology first: Trump’s presidential campaign team were able to present a different version of him to different voters (EPA)

In the 2016 election, pro-Trump bots even infiltrated Twitter hashtags and Facebook pages used by Hillary Clinton supporters to spread automated content. Bots were also deployed at a crucial point in the 2017 French presidential election, throwing out a deluge of leaked emails from candidate Emmanuel Macron’s campaign team on Facebook and Twitter. The dump also contained what Macron’s team said was false information, pushing the narrative that Macron was a fraud and a hypocrite – a common bot tactic of seeding trending topics to dominate social feeds.

It is easy to blame AI technology for the world’s wrongs (and for lost elections), but the underlying technology itself is not inherently harmful. The algorithmic tools that are used to mislead, misinform and confuse could equally be repurposed to support democracy. AI can be used to run better campaigns in an ethical and legitimate way.

We could, for example, programme political bots to step in when people share articles containing known misinformation, issuing a warning that the information is suspect and explaining why. This could help to debunk known falsehoods, like the infamous article that falsely claimed the pope had endorsed Trump.

We can also use AI to listen more closely to what people have to say and make sure their voices are clearly heard by their elected representatives. Based on these insights, we can deploy micro-targeting campaigns that help to educate voters on a variety of political issues so they can make up their own minds. People are often overwhelmed by the political information in TV debates and newspapers; AI can help them discover each candidate’s positions on the issues they care about most. For example, if a person is interested in environmental policy, an AI targeting tool could help them find out what each party has to say about the environment. Crucially, personalised political ads must serve the voters who receive them and help them become better informed, rather than undermine their interests.

The use of AI techniques in politics is not going away anytime soon. It is simply too valuable to politicians and their campaigns. However, they should commit to using AI ethically and judiciously, to ensure that their attempts to sway voters do not end up undermining democracy.

Vyacheslav W Polonski is a researcher at the University of Oxford. This article was originally published on The Conversation