Massive Study of Fake News May Reveal Why It Spreads So Easily
by Dom Galeon
Fake News
The problem of so-called fake news is well known, yet we seem no closer to solving it. Social media is a major source of these falsehoods. Twitter, in particular, is responsible for much of their spread, so it doesn’t help that the platform’s executives recently dropped the ball, so to speak, on the whole issue.
Now, researchers from the Massachusetts Institute of Technology (MIT) are taking a look at the issue in one of the largest studies to date. Their findings suggest that humans – not bots – are largely to blame.
For their study, appearing in the March 2018 issue of the journal Science, the MIT team attempted to make sense of how and why fake news and misinformation spreads fast via Twitter. Specifically, they investigated how mechanisms in Twitter, coupled with peculiarities in human behavior on social media, make it easy for fake news to spread.
In total, the team examined a sample of some 126,000 bits of “news” tweeted by 3 million people more than 4.5 million times between 2006 and 2017.
“We define news as any story or claim with an assertion in it and a rumor as the social phenomena of a news story or claim spreading or diffusing through the Twitter network,” they wrote in the study. “That is, rumors are inherently social and involve the sharing of claims between people. News, on the other hand, is an assertion with claims, whether it is shared or not.”
Next, the researchers separated the news into two categories: false and true. To do this, they used six independent fact-checking organizations whose classifications showed a strong agreement.
Spreading Like Wildfire
After that, they examined how likely a piece of news was to create a “cascade” of retweets on the social networking platform.
Surprisingly, news categorized as false or fake was 70 percent more likely than true news to receive a retweet. “Political” fake news spread three times faster than other kinds, and the top 1 percent of retweeted fake news regularly diffused to at least 1,000 people and sometimes as many as 100,000.
True news, on the other hand, hardly ever reached more than 1,000 people.
The researchers also found a connection between the “novelty” of a bit of news and the likelihood that a Twitter user retweeted it.
In a study of 5,000 users, they looked at a random sample of tweets each user may have seen in the 60 days prior to retweeting a rumor. According to their analysis, false news was more novel than true news, and users were far more likely to retweet a tweet that was “measurably more novel.”
The emotional response a tweet generated also played a role in user engagement. Fake news generated replies showing fear, disgust, and surprise. True news inspired anticipation, sadness, joy, and trust. These emotions could play a role in a person’s decision to retweet a piece of news.
This spreading of misinformation isn’t due to bots, either – lead author Soroush Vosoughi and his team used an algorithm to remove all the bots before conducting their analysis. When they factored the bots back into the study, the researchers found that the bots didn’t distinguish between fake news and the truth.
“Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it,” they wrote in the study.
Just the Beginning
The MIT study isn’t the only fake news-related piece in the March 2018 issue of Science. It also includes a separate Policy Forum article co-authored by Filippo Menczer, a professor in the Indiana University School of Informatics, Computing, and Engineering.
In that article, Menczer and a number of other researchers, scholars, and scientists call for more large-scale scientific investigations into fake news, like this new study from MIT.
“What we want to convey most is that fake news is a real problem, it’s a tough problem, and it’s a problem that requires serious research to solve,” said Menczer in a press release.
While the political repercussions of fake news are quite obvious, the phenomenon has affected various other discussions. As Menczer and his colleagues point out in their commentary, topics of concern to the public, such as vaccinations and nutrition, are susceptible to fake news, too.
“The challenge is there are so many vulnerabilities we don’t yet understand and so many different pieces that can break or be gamed or manipulated when it comes to fake news,” Menczer said in the press release. “It’s such a complex problem that it must be attacked from every angle.”
A good place to start that attack is with more studies like the one out of MIT.