
Economists Have a Method for Reducing Fake News on Social Media

(And it has nothing to do with fact-checking)

To limit misinformation, some social networks have limited how widely users can share posts. Image by Gerd Altmann via Pixabay.

Efforts to control the spread of misinformation on social media platforms have spurred important conversations about censorship and freedom of speech.

“A tacit assumption has been that censorship, fact-checking and education are the only tools to fight misinformation,” says Duke University economist David McAdams. In new research published in the journal Proceedings of the National Academy of Sciences, McAdams and collaborators explore ways to improve the quality of information shared on networks without making any entity responsible for policing content and deciding what is true and false.

The researchers’ model suggests that a platform can cut down on the spread of false information by setting limits on how widely messages are shared, and can do so in a way that is not overly restrictive to users.

“We show that caps on either how many times messages can be forwarded (network depth) or the number of others to whom messages can be forwarded (network breadth) increase the relative number of true versus false messages circulating in a network, regardless of whether messages are accidentally or deliberately distorted,” McAdams says.

“For example, Twitter could limit the breadth of sharing on its site by limiting how many people see any given retweet in their Twitter feeds,” he says.
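The intuition behind these caps can be seen in a toy simulation. The Python sketch below is not the researchers’ actual model; the branching structure, the single per-relay distortion probability, and all parameter values are illustrative assumptions. It shows the mechanism the quote describes: the fewer hops a message can travel, the less opportunity distortion has to accumulate, so a larger share of the copies in circulation stays true.

```python
import random

def simulate_cascade(breadth, depth_cap, distort_prob=0.05,
                     share_prob=0.4, trials=5000, seed=0):
    """Toy cascade: one true message enters the network; each holder
    forwards it to up to `breadth` contacts (each with probability
    `share_prob`), for at most `depth_cap` hops. Every relay garbles
    the message with probability `distort_prob`, and a garbled copy
    stays false. Returns the fraction of circulating copies still true."""
    rng = random.Random(seed)
    true_copies = false_copies = 0
    for _ in range(trials):
        frontier = [(True, 0)]  # (is_true, hops traveled so far)
        while frontier:
            is_true, depth = frontier.pop()
            if is_true:
                true_copies += 1
            else:
                false_copies += 1
            if depth >= depth_cap:
                continue  # the cap: this copy may not be forwarded further
            for _ in range(breadth):
                if rng.random() < share_prob:
                    # each relay distorts the copy with prob `distort_prob`
                    still_true = is_true and rng.random() >= distort_prob
                    frontier.append((still_true, depth + 1))
    return true_copies / (true_copies + false_copies)

# Tighter depth caps leave fewer hops for distortion to accumulate,
# so a larger share of circulating copies remains true.
for cap in (2, 4, 8, 16):
    frac = simulate_cascade(breadth=3, depth_cap=cap)
    print(f"depth cap {cap:2d}: {frac:.1%} of copies true")
```

In this toy setup, capping `breadth` helps for a related reason: a smaller branching factor shifts the population of circulating copies toward shallower, and therefore less-distorted, ones.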

Both Facebook and WhatsApp, two Meta-owned apps that allow users to message each other, have used methods similar to the one in the researchers’ model to limit the spread of misinformation.

In 2020, Facebook announced limits on how many people or groups users could forward messages to, capping it at five, in part to combat misinformation about COVID-19 and voting. Earlier that year, WhatsApp introduced similar limits, prohibiting its more than two billion users from forwarding messages to more than five people at once, in part because of more than a dozen deaths that public officials in India had linked to false information spreading on the app, the researchers noted.

This approach does not eliminate misinformation, but in the absence of other methods, it can reduce the severity of the issue until other solutions can be developed to address the heart of the problem, McAdams says.

“When misinformation spreads through a social network, it can cause harm,” says McAdams, who has faculty appointments in the economics department and the Fuqua School of Business. “Some people might start believing things that are false and that can harm them or others.”

It can also cause some people to lose trust in the platform, which means they may be less likely to believe or take action on correct information that actually could help them or other people, he says.

“If you limit sharing, you could also be limiting the spread of good information, so you might be throwing the baby out with the bathwater and that doesn’t really help you,” McAdams warns. “Our analysis explores how to strike that balance.”

(Stanford University economist Matthew Jackson and Cornell University economist Suraj Malladi co-authored the research with McAdams.)