Policymakers, regulators, and other stakeholders in content moderation have proposed a variety of methods to address or prevent the spread of misinformation on social media. For example, there are accuracy nudges that subtly remind people to watch out for misleading information. There are efforts to debunk misinformation head on. And there are “prebunking” methods that address falsehoods preemptively to help build the public’s resilience to misinformation.
In large part, these strategies are based on the idea that people generally care about the accuracy of information, says William Brady, an assistant professor of management and organizations at the Kellogg School. However, certain types of content are particularly good at making people overlook accuracy.
“There’s a category of misinformation that we have to pay particular attention to,” he says, “because of the very fact that it tends to put us in a state of motivation where we’re not actually going to pay as much attention to accuracy. And that would be misinformation that provokes moral outrage.”
The relationship between misinformation and outrage—and how it affects people’s behavior on social media—is the subject of new research by Brady and his colleagues Killian McLoughlin, Ben Kaiser, and M.J. Crockett of Princeton; Aden Goolsbee of Yale; and Kate Klonick of St. John’s University.
In ten studies analyzing more than a million social media posts, the researchers find that misinformation is more likely than reliable news to provoke outrage.
Outrage, in turn, leads people to share or retweet these often misleading social media posts. And it makes them more willing to do so without actually reading the article linked in the post, even if they are otherwise good at spotting misinformation.
“We actually find that people are not terrible at distinguishing between misinformation and credible news,” says Brady. “But here’s the key: if you give them an outrage-provoking article with misinformation, that ability they have goes out the window. They’re more likely to share it anyway.”
Misinformation and outrage
For their research, Brady and colleagues conducted eight observational studies across two social media platforms, multiple time periods (2017 and 2020–2021), and similar networks of people. They examined 1,063,298 Facebook posts and 44,529 tweets from 24,007 people on X (Twitter at the time of the study), covering a range of topics. Each post contained a link to a news article.
They classified a post as misinformation if it linked to a “low-quality” news source known to produce false or misleading content, based on reports from independent fact-checking organizations.
Conversely, they considered a post credible if it linked to a source rated “high quality.” Sources rated as misinformation were six times more likely to produce content that had been fact-checked as false or misleading.
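This source-level labeling is simple enough to sketch in code. The snippet below is a minimal Python illustration, with hypothetical domain lists standing in for the professional trustworthiness ratings the researchers actually used:

```python
# Minimal sketch of source-level labeling. The domain lists are hypothetical
# placeholders; the study relied on quality ratings informed by independent
# fact-checking organizations, not hand-picked lists like these.
from urllib.parse import urlparse

LOW_QUALITY = {"outragewire.example", "fakenewsdaily.example"}    # hypothetical
HIGH_QUALITY = {"reliableherald.example", "factualpost.example"}  # hypothetical

def label_post(link_url: str) -> str:
    """Label a post by the quality rating of the news domain it links to."""
    domain = urlparse(link_url).netloc.lower().removeprefix("www.")
    if domain in LOW_QUALITY:
        return "misinformation"
    if domain in HIGH_QUALITY:
        return "credible"
    return "unrated"  # no rating available for this source

print(label_post("https://www.outragewire.example/story"))  # -> misinformation
```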
The researchers also conducted two behavioral experiments with roughly 1,500 participants, who were asked either to rate the accuracy of twenty news headlines or to indicate how likely they were to share them.
“All of these factors, when you combine them, cover a wide range of data that sets the study apart,” says Brady. “It was very ambitious in its scope.”
Groupthink
Through these tests, three notable findings emerged about misinformation on social media.
First, “misinformation is more likely to provoke moral outrage than accurate information,” says Brady.
Misinformation was more likely than credible news to provoke outrage on both social media platforms, regardless of audience size. On Facebook, misinformation drew more “angry” reactions than credible news, while on X it generated more outraged comments, as measured by machine learning-based sentiment analysis.
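To make the comment-scoring step concrete, here is a deliberately simplified stand-in for that classifier: a keyword-lexicon scorer. The lexicon and the scoring rule are illustrative assumptions only; the study used a trained machine-learning model, not this heuristic:

```python
# Simplified stand-in for the ML-based outrage classifier described above.
# The toy lexicon and token-fraction score are invented for illustration.
OUTRAGE_WORDS = {"disgusting", "shameful", "outrageous", "corrupt", "evil"}

def outrage_score(comment: str) -> float:
    """Fraction of a comment's tokens that match the outrage lexicon."""
    tokens = [t.strip(".,!?").lower() for t in comment.split()]
    if not tokens:
        return 0.0
    return sum(t in OUTRAGE_WORDS for t in tokens) / len(tokens)

comments = ["This is disgusting and shameful!", "Interesting read, thanks."]
print([round(outrage_score(c), 2) for c in comments])  # [0.4, 0.0]
```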
Second, when people feel outraged by news on social media, they are more likely to share it with others. “Outrage combined with the context of social media tends to put people into this kind of impulsive sharing,” says Brady.
This is true whether or not a post contains misinformation. People were more likely to share a post on Facebook the more angry reactions it got, and they were more likely to repost a post on X the more outrage it generated in the comments.
And finally, outrage increases people’s willingness to share posts without vetting them for accuracy.
The more angry reactions a Facebook post received, the more willing people were to share the linked news without reading it first. The effect was stronger for posts linking to misinformation than for those linking to credible sources.
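One way to picture this kind of analysis is a logistic regression predicting “shared without reading” from a post’s angry-reaction count, with an interaction term for source quality. The sketch below runs such a model on simulated data; the coefficients and the data-generating assumptions are invented for illustration and are not the study’s code or results:

```python
# Illustrative sketch only: does outrage predict blind sharing, and is the
# effect stronger for misinformation? All data here is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
angry = rng.poisson(3, n)        # angry-reaction count per post
misinfo = rng.integers(0, 2, n)  # 1 = links to a low-quality source

# Simulate the reported pattern: outrage raises blind sharing, more for misinfo.
logit = -2.0 + 0.15 * angry + 0.25 * angry * misinfo
shared_blind = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([angry, misinfo, angry * misinfo])
model = LogisticRegression().fit(X, shared_blind)
print(dict(zip(["angry", "misinfo", "angry_x_misinfo"], model.coef_[0])))
```

A positive coefficient on the interaction term would correspond to the finding that angry reactions predict blind sharing more strongly for misinformation posts.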
Notably, one of the behavioral studies confirmed that people were able to correctly distinguish between credible news and misinformation, regardless of how much outrage they caused. In other words, outrage does not necessarily make people less able to spot misinformation. Instead, people seem willing to overlook the accuracy of social media content when they feel outraged, in part because the emotion puts them in a “group-identification mindset.”
“When we’re in a context where our political group identities become salient, it starts to make us think of ourselves more in terms of the group than in terms of the self,” says Brady. “That’s why people think less about accuracy and are more likely to express outrage on behalf of the group.”
Countermeasures
One implication of these findings is that many of the existing measures to reduce or combat misinformation on social media, such as accuracy nudges, may be less effective when it comes to content that provokes outrage.
Instead, the results show that people are often willing to share misinformation on social media—especially when it causes outrage—because of their political affiliation or moral stance. They could always defend themselves by claiming that they only meant to highlight that the content is “outrageous if true,” according to the researchers. In this way, people can take advantage of misinformation that causes outrage to gain more engagement or visibility on social media.
Even people who disagree with the misinformation they see on social media may unwittingly help promote the content simply by engaging with it, Brady notes. “The misinformation ecosystem is not only driven by user behavior; it’s also driven by algorithms,” he says. “When you engage with misinformation, even in arguments, you’re actually helping to amplify misinformation in the ecosystem, because you’re signaling to the algorithm that the content is generating engagement.”
For policymakers, the research offers specifics to consider when designing interventions against misinformation, an effort that becomes especially important when politics takes center stage.
“Content moderation is always a big deal during political seasons,” says Brady. “If you want to predict the misinformation that’s most likely to spread through a network, then you have to measure its potential to cause outrage among political groups … because that’s the misinformation that spreads the most and that people don’t give much thought to.”