
Internet groups allow algorithms to police a post-truth world

It is tempting to see the US presidential election as a watershed moment for the big US internet companies. Facebook, Twitter and Google have each, in their way, been cast in an unflattering light. They stand accused of helping to circulate fake news and conspiracy theories, much of it favouring the Trump camp, which may have worsened what was already a deeply partisan battle. And the raucous debate on Twitter often tipped into outright harassment and hate speech.

Under attack from outside and within - their deeply liberal workforces were guaranteed to be unsettled by any suggestion that they played a part in the Trump victory - they have engaged in soul-searching and offered some quick responses.

Facebook and Google, for instance, have said this week they will stop their advertising being placed on sites that carry fake news. Both companies also said they would look for ways to weed out this particular form of bad content from their sites. And Twitter closed the accounts of a clutch of so-called “alt-right” figures in a crackdown on hate speech.

This has done little to calm concerns about whether their core business models make them complicit in spreading falsehood and stoking prejudice. As long as maximising user engagement is the ultimate business metric, there will always be a potential conflict with the responsibility that comes with running mass media platforms.

Critics argue that if democracy itself is now at risk, then a more interventionist approach to managing content is needed. But this is a movie we have seen many times before, and the ending does not seem in doubt.

The rash of fake political news echoes other plagues that have swept across the online platforms before, including spam, copyright infringement and counterfeit goods. The most obvious precedent is the outpouring of material from so-called “content farms” that threatened to overwhelm Google’s search engine with low-grade articles at the start of this decade. The response in all these cases - adjusting the algorithms to weed out undesirable or infringing content - is the same one that Google and Facebook are now counting on to bar the spread of fake news.
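
The mechanics of such adjustments are proprietary, but their broad shape can be sketched. What follows is a minimal, hypothetical illustration in Python of threshold-based content filtering: every name, signal and cut-off in it is an assumption made for this example, not a description of Google's or Facebook's actual systems, which rely on learned models over far richer signals.

    from dataclasses import dataclass

    @dataclass
    class Post:
        url: str
        domain: str
        text: str

    # Toy stand-ins for the signals a real classifier might weigh
    # (domain reputation, sensational phrasing, and so on). These
    # values are illustrative assumptions, not any company's rules.
    FLAGGED_DOMAINS = {"made-up-news.example"}
    SENSATIONAL_PHRASES = ("shocking", "you won't believe", "exposed")

    def fake_news_score(post: Post) -> float:
        """Return a score in [0, 1]; higher means more likely to be fake."""
        score = 0.6 if post.domain in FLAGGED_DOMAINS else 0.0
        hits = sum(p in post.text.lower() for p in SENSATIONAL_PHRASES)
        return min(1.0, score + 0.2 * hits)

    def filter_feed(posts: list[Post], threshold: float = 0.8) -> list[Post]:
        """Keep only posts scoring below the suppression threshold."""
        return [p for p in posts if fake_news_score(p) < threshold]

The appeal of this design is exactly the one described above: a single change to the scoring function or the threshold excises a whole class of content at once, with no human reading individual posts.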

Critics say human editors and curators are needed. This is something the internet companies have resisted, and it is hard to see them bending under the current pressure. For a start, humans are expensive. They would also be guaranteed to antagonise one group or another and bring charges of bias. That might not matter much in the case of a normal news site, but for Facebook, with 1.8bn users who log on at least once a month, the prospect of individuals making decisions that would affect what a large segment of humanity is able to read raises chilling possibilities. Much better to make a sweeping algorithmic change designed to excise a class of abusive content.

But while this sounds straightforward in principle, it cannot solve all the problems. If internet users are predisposed to believe false information that confirms their prejudices - and if they enthusiastically take part in spreading conspiracy theories - then falsehoods may be endemic to mass online communication platforms.

This issue is more difficult for Facebook, whose algorithms rely heavily on a social signal that comes from what a user’s friends are sharing. If its users promote unreliable information - particularly if it is not easily categorised as “news” - it will spread rapidly.
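
A rough sketch shows why. Consider a toy ranking rule - hypothetical, rather than Facebook's actual newsfeed algorithm - in which each friend's share lifts a post's score; nothing in the signal distinguishes true from false.

    from collections import Counter

    def rank_feed(posts: list[str], shares_by_friends: Counter,
                  share_weight: float = 0.5) -> list[str]:
        """Order posts so that each friend's share raises an item's score."""
        def score(post: str) -> float:
            return 1.0 + share_weight * shares_by_friends[post]
        return sorted(posts, key=score, reverse=True)

    # A false story shared by five friends outranks an accurate one
    # shared by two: the social signal carries no notion of truth.
    shares = Counter({"viral-hoax": 5, "accurate-report": 2})
    assert rank_feed(["accurate-report", "viral-hoax"], shares) == \
        ["viral-hoax", "accurate-report"]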

Ironically, it was only in June this year - just as the presidential election was entering its most important phase - that Facebook decided to de-emphasise news and give more prominence to personal material in its users’ newsfeeds. One possibility this raises is that relegating “real” news about the election left a vacuum that was filled by more dubious information - a sort of information-age Gresham’s Law in which false news drives out good.

All of this will fuel self-examination for months to come and launch a thousand social science studies into how free and open communication platforms promote tribalism. But it is hardly likely to dent one of the tech world’s best business models: running mass communication platforms that do not, in the last resort, take a stance on the information that billions of people want to access or share on their systems.



Source: Richard Waters, ‘Internet groups allow algorithms to police a post-truth world’, Financial Times / FT.com, 17 November 2016. Used under licence from the Financial Times. © The Financial Times Limited 2016.

All Rights Reserved. Articles sourced from the Financial Times have been referenced and are used under licence from The Financial Times Limited. These articles remain the copyright of The Financial Times Limited and were originally published in 2016. All rights reserved. “FT” and “Financial Times” are trade marks of The Financial Times Limited. The Financial Times Limited has not endorsed, verified or been involved in the creation of the information provided from other sources in this publication, and is not responsible or liable for its accuracy, completeness or content.