New probe explains why we can’t handle the truth
As social media giants come under closer scrutiny by governments worldwide hoping to clamp down on the negative political and social effects of fake news, researchers from the Massachusetts Institute of Technology have examined why people seem to prefer falsehoods over facts.
They analyzed how 126,000 stories – some true, others false – tweeted more than 4.5 million times by over 3 million people between 2006 and 2017 spread online. By measuring the “cascade” of tweets – the chain of retweets a particular tweet spawned – they could track each story's reach and the rate at which both types of story travelled.
The stories were classified as true or false based on evaluations made by six independent fact-checking organizations. The researchers' definition of news is broader than the traditional one: they treated any claim asserted on Twitter as news.
“Whereas the truth rarely diffused to more than 1,000 people, the top 1 per cent of false-news cascades routinely diffused to between 1,000 and 100,000. Falsehood reached more people at every depth of a cascade than the truth, meaning that many more people retweeted falsehood than they did the truth,” the paper said.
It also took genuine stories six times as long as fake news to reach an audience of 1,500 people. The team also measured the “depth” of a cascade, which they described as “the number of retweet hops from the origin tweet over time, where a hop is a retweet by a new unique user”.
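That depth metric is easy to picture as a graph walk. The sketch below is purely illustrative – the data shape and function name are assumptions, not the study's actual code – but it shows how cascade depth falls out of a breadth-first traversal from the origin tweet, counting hops through unique users.

```python
from collections import deque, defaultdict

def cascade_depth(origin, retweets):
    """Depth of a retweet cascade: the number of retweet hops from
    the origin tweet, where a hop is a retweet by a new unique user.

    `retweets` maps each retweeting user to the user they retweeted
    from (a hypothetical data shape, for illustration only).
    """
    # Invert the parent map into child lists.
    children = defaultdict(list)
    for user, source in retweets.items():
        children[source].append(user)

    # Breadth-first walk from the origin, tracking the hop count.
    depth = 0
    seen = {origin}
    frontier = deque([(origin, 0)])
    while frontier:
        user, hops = frontier.popleft()
        depth = max(depth, hops)
        for child in children[user]:
            if child not in seen:  # each hop must be a new unique user
                seen.add(child)
                frontier.append((child, hops + 1))
    return depth

# Toy cascade: A posts; B and C retweet A; D retweets C.
print(cascade_depth("A", {"B": "A", "C": "A", "D": "C"}))  # 2
```

On this toy cascade the deepest chain is A → C → D, two hops, so the depth is 2; the cascade's size (the total retweet count the article mentions) would simply be the number of users reached minus the origin.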
Bots have been blamed for fake news. But after scrubbing tweets from automated accounts using “two state-of-the-art bot-detection algorithms,” the researchers found their results were roughly the same.
Sinan Aral, co-author of the paper and an expert on social networks at MIT, told The Register he was surprised. He “expected bots to play a significant role” in infecting social media with fake news. Instead, bots merely accelerated retweets; they did not meaningfully change the volume of falsehood being spread.
It’s humans rather than machines that are the root of the problem. People are simply less interested in the truth, it seems: true stories rarely spread beyond a cascade depth of 10, and falsehoods were 70 per cent more likely to be retweeted than the truth, according to the researchers' estimates.
It’s no surprise that politics was the most popular category for spreading fake rumours. There were clear spikes of fake-news retweets during the 2012 and 2016 US presidential elections. Other popular topics included urban legends, business, terrorism, science, entertainment and natural disasters.
It all boils down to novelty. People can be induced to share information, even when it’s false, if they consider it new, surprising and useful.
“Novelty attracts human attention, contributes to productive decision-making, and encourages information sharing because novelty updates our understanding of the world. When information is novel, it is not only surprising, but also more valuable, both from an information theoretic perspective (in that it provides the greatest aid to decision-making) and from a social perspective (in that it conveys social status on one that is ‘in the know’ or has access to unique ‘inside’ information),” the paper said.
The researchers concluded that humans are worse offenders than robots. To combat fake news in the future, policies could include labelling tweets in ways that dissuade people from pressing the retweet button.
Aral said news should be labelled like food. “When you go to the grocery store and pick up food, it tells you the calorie content, how much fat, protein, sugar it contains. Where it was grown. But when we consume news, there is no such labelling information.
“We should think about how a story was produced. Does this source tend to produce true or fake news? How many people did they interview? We don’t have that information.” ®