In the months leading up to the November elections, Hatim Rahman, assistant professor of management and organizations at the Kellogg School, is keeping a close eye on Facebook and other social media platforms like Twitter, YouTube and WhatsApp to see how they handle misinformation this time around. And the trends he sees concern him.
“Thanks to increasingly powerful algorithms, the speed and scale at which disinformation can spread is unprecedented,” says Rahman.
Rahman highlights three reasons why misinformation on social media is such an intractable challenge – and what that might mean in the future.
Hidden sources
A drawback of the major social media platforms is that it is often difficult for users to identify the sources of the information that lands in their feeds.
Most social media users would agree that it’s important to know who’s producing misinformation—but until now it’s been difficult for users, regulators, or even the platforms themselves to track where the messages are coming from, much less why they’re being produced. Lone individuals in their basements? Foreign troll farms, like in 2016? Or networks of organizations?
Part of the problem is that most platforms do not require posters to identify themselves before spreading information. Nor, for that matter, do the platforms verify whether the information shared is accurate. Knowing this, individuals and organizations often design propaganda to spread across platforms before its origin can be identified.
Research on the main sources of misinformation is still in progress, says Rahman, but he notes that it’s becoming increasingly clear that political disinformation is often seeded and spread through surprisingly coordinated, well-funded campaigns that may prefer to stay under the radar.
“It’s usually very well hidden, especially because coordinated campaigns may want to give the impression that their message is grassroots,” says Rahman. “Or if prominent financiers are involved, they don’t want those ties revealed.”
Of course, if social media platforms wanted to, they could require more stringent identity verification processes, thus ensuring that sources are actually who they say they are. Google did this for its advertisers, for example. And Twitter already has a mechanism for verifying users—the blue checkmark badges—but Twitter reserves that feature for what it describes as “public interest accounts.” This leaves the vast majority of Twitter accounts unverified.
The wrong motives
Why are social networks so reluctant to take action on misinformation and propaganda?
“If you look at Facebook and Twitter — the platforms themselves — they knew to some extent that these things were happening,” Rahman explains. “They’re just motivated differently” than many users and regulators.
What Rahman means is that ignoring misinformation serves these sites’ interests to some extent. Ultimately, shareholders reap rewards when user numbers grow and viral content spreads, and research has shown that divisive content tends to earn more engagement. This set of conditions gives platforms a perverse incentive to look the other way on fake, violent, or racist content.
“Viral content pays,” says Rahman. “It attracts more advertising, more eyeballs, more time and attention on the platforms. But we’ve seen serious trade-offs when platforms maximize these kinds of metrics.” For example, one study found that, of the top 50 most popular Facebook posts that mentioned mail-in voting, an essential part of our electoral infrastructure, 44% contained misinformation.
Willing Distributors
With platforms so heavily focused on user growth, the responsibility for discerning the accuracy of posts rests largely with users. Rahman sees this as unfair, especially given that users are at an information disadvantage. What’s more, the only real leverage users have to demand change is to vote with their feet by deleting their accounts and leaving the platform.
Delegating this responsibility to users is also unlikely to be effective, partly because of users’ predisposition to believe what they want to believe, and partly because of an even more troubling phenomenon: users simply don’t care if something is true.
In this regard, Rahman sees a worrying trend: although users are becoming more sophisticated at judging whether online content is accurate, some are also becoming more comfortable willingly spreading misinformation when they agree with the underlying message it’s trying to convey. They are also less likely to verify sources or fact-check posts that support their worldview.
“What sometimes gets lost is that some people don’t really care about the source or the accuracy of the information,” says Rahman. “As long as it aligns with their political point of view, they will spread it.”
According to Rahman, this behavior represents a change in how misinformation has spread since the last presidential election. In 2016, many believed that most users shared information they believed to be true and were only vaguely aware of online trolls, “fake news,” and hackers. Today, people are more aware of online deception. But many users still perpetuate it.
Rays of Hope
So if platforms won’t act to mitigate misinformation, and individuals tend to spread it, who will? It’s not entirely clear, Rahman says.
While regulating complex, rapidly developing, global technologies can be difficult for lawmakers, there is still a role for policy that balances accountability and consumer protection with free speech concerns.
“The role of regulation is to incentivize organizations to be more proactive and hold them responsible, rather than telling platforms what to do,” says Rahman.
However, Rahman sees a glimmer of hope that the platforms themselves may be coming around to address the problems with misinformation.
Given the storm of COVID-19, the explosion of Black Lives Matter protests after the death of George Floyd, and the November election, it appears that platforms’ tolerance for misinformation may be shifting.
For example, Twitter added “Get the Facts” labels to potentially misleading information, including President Trump’s tweets on California’s plans for mail-in voting. It also placed a warning on one of his tweets after the Minneapolis protests for glorifying violence. Facebook is currently facing an unprecedented push — including a boycott by major advertisers — to take similar action.
Platforms could also reconfigure their algorithms to prioritize information that is accurate, from sources that can be easily identified.
“AI can be seen as a tool that is neither good nor bad. How an organization uses it largely reveals its values and intentions,” says Rahman.
With so much at stake in a short period of time, it’s “all hands on deck” in what he calls a “push and pull” moment where all stakeholders have a role to play.
“We need researchers for their rigorous, interdisciplinary approaches to problem solving. We need community organizations for their ability to voice the concerns of underrepresented groups. We need users for their experiences and governments for their regulatory powers. We need all of this to come together in ways that are necessary but that the platforms have so far resisted.”