So, what can motivate someone to share a contrary opinion? In a new paper, Georgy Egorov, a professor of managerial economics and decision sciences at the Kellogg School, and his colleagues looked at how so-called rationales can tip the scale.
Rationales are narratives that support a particular point of view; they may emerge organically or be promoted by political actors, social movements, and the media. Sometimes a rationale is a genuine attempt to convince people of a position on its merits. More often, however, rationales function as social cover, a means of making the unpalatable seem more acceptable. In that case, a rationale is less a real argument than an excuse for a preexisting preference, one that has the advantage of seeming reasonable to others.
Imagine someone who loves pineapple on pizza and who, before announcing this highly controversial opinion, shares an article about the many health benefits of pineapple. The health rationale isn’t the real reason for the person’s preference, but it might make them feel less sheepish about admitting it, and it might encourage other pineapple-pizza lovers to show up, too.
In the paper, Egorov and his colleagues find that rationales make people more likely to share opinions they would otherwise keep private. Rationales also change the way audiences interpret these dissenting views. In particular, when people use a rationale to share a position that could be attributed to prejudice, audiences are less likely to make this unfavorable connection.
“People speak out and express their preferences based on what they think their audience is and who they think their audience will be,” says Egorov. “What our paper shows is that rationales, or social covers, play an important role in enabling people to share views that are out of line with their audience.”
Rationales make it easier to express an unpopular opinion
Egorov conducted the research with Leonardo Bursztyn of the University of Chicago, Ingar Haaland of the University of Bergen, Aakash Rao of Harvard University, and Christopher Roth of Oxford University. They began by looking at efforts to defund the police in the United States. They chose this particular issue because, despite the popularity of “defund the police” as a slogan, polls show that only 25 percent of Democrats support cutting police budgets in their districts.
However, the researchers note, because the defunding movement aligns with concerns about racial injustice, publicly opposing it carries a social stigma among Democrats. Would a rationale make voicing that opposition less uncomfortable?
To find out, the researchers recruited a group of 1,122 Democrats and Independents who had active Twitter accounts and agreed to install an app that would be allowed to send tweets from their accounts.
As part of the experiment, participants were shown an article from The Washington Post in which the author, a Princeton University criminologist, argued against defunding the police, citing a substantial body of evidence showing that increased policing reduces violent crime.
After reading the article, participants were asked whether they would privately join a campaign to oppose police defunding. About half said no. The 529 participants who agreed to participate in the campaign continued with the study and, importantly, were shown the same article again.
These remaining participants were split into two groups, each presented with one of two tweets. For some participants, in what the researchers call the no-cover group, the tweet read: “I have joined a campaign to oppose police defunding. After joining the campaign, I was shown this article written by a Princeton professor about the strong scientific evidence that defunding the police would increase violent crime,” followed by a link to the article. In the cover group, the tweet instead indicated that the participant had seen the article before joining the campaign. Participants were then asked whether or not they wanted to publish the tweet.
Both versions of the tweet were accurate (participants saw the article both before and after agreeing to join the campaign) but conveyed subtly different messages. The cover tweet suggested that participants might have been persuaded to join the campaign by the compelling scientific evidence they had seen in the article, while the no-cover tweet suggested that they already opposed defunding the police before seeing the article.
This one-word difference had a significant effect on participants’ willingness to voice their opposition to defunding the police, the researchers found. In the cover group, 70 percent of participants authorized the tweet, compared with 57 percent in the no-cover group.
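How meaningful is a 13-point gap at this sample size? As a rough back-of-the-envelope check (not the authors’ own analysis), a standard two-proportion z-test on these figures suggests the difference is well beyond chance. The even split of the 529 participants below is an assumption; exact group sizes are not reported here.

```python
from math import sqrt

# Back-of-the-envelope two-proportion z-test for the cover vs. no-cover gap.
# ASSUMPTION: the 529 participants were split roughly evenly; the article
# does not report exact group sizes, and this is not the authors' analysis.
n_cover, n_no_cover = 265, 264
p_cover, p_no_cover = 0.70, 0.57  # authorization rates reported in the article

# Pooled proportion under the null hypothesis that both groups share one rate
pooled = (p_cover * n_cover + p_no_cover * n_no_cover) / (n_cover + n_no_cover)
se = sqrt(pooled * (1 - pooled) * (1 / n_cover + 1 / n_no_cover))

z = (p_cover - p_no_cover) / se
print(f"z = {z:.2f}")  # about 3.1, well past the 1.96 cutoff for p < 0.05
```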
Interpreting Intent
In a second experiment, the researchers tested whether a rationale, like the argument in the Washington Post article, would change how the tweets were received. In particular, they suspected that reluctance to publicly oppose police defunding might stem from concerns about appearing racially biased, and they wanted to understand whether rationales offered any protection against this perception.
The researchers recruited a new group of 1,040 Democrats and left-leaning Independents, split them in half, and told each participant they had been matched with a participant from the previous study. Participants then saw a tweet ostensibly from their matched partner: either the cover or the no-cover tweet from the previous experiment.
The researchers told participants that their partner had had the opportunity to authorize a $5 donation to the NAACP, an organization that fights for racial equality, and then asked them to guess whether their partner had authorized the donation or not. A guess that the partner had donated would indicate that participants did not view the partner as prejudiced, despite the partner’s opposition to defunding the police. Participants were also asked whether they would approve a $1 bonus for their partner, a measure of whether they would socially sanction the tweeter for holding a view that is controversial among Democrats.
Using the Washington Post article as a rationale changed how participants interpreted tweeters’ motivations. Among participants who saw the no-cover tweet, just 27 percent guessed their partner had donated to the NAACP, compared with 35 percent in the cover group. Similarly, 47 percent of participants in the no-cover group denied their partner the $1 bonus, compared with 40 percent in the cover group.
“The pattern is clear,” says Egorov. “Participants in the first experiment were more hesitant to authorize a tweet without social cover because they feared social punishment, and they were right. They likely thought it would be taken to imply racial bias on their part, and they were right about that too. The behavior of the person posting the tweet and the reactions of the audience: it all matches.”
How Rationales Shape Out-Group Perceptions
The first two experiments focused on in-group perception: that is, how liberals perceive fellow liberals who express opposing views. To expand on their findings, the researchers decided to study out-group perceptions as well.
In their next two experiments, they focused on a conservative attitude stigmatized by liberals and even some Republicans: the belief that all Mexican immigrants living in the country illegally should be immediately deported.
The researchers used a setup identical to the first experiment, recruiting a group of 1,130 Republicans and right-leaning Independents and showing them an excerpt from Tucker Carlson Tonight in which Carlson presents statistics from the US Sentencing Commission to argue that illegal immigration is linked to violent crime. The 517 participants who agreed to publicly voice their support for immediate deportation were presented with a tweet suggesting they had seen the Tucker Carlson clip either before or after agreeing to join the campaign.
As before, significantly more participants in the cover group (65 percent) authorized the tweet, compared with 48 percent in the no-cover group, suggesting that participants believed the cover version would be more palatable to their audience.
To see how these tweets would be interpreted by an out-group audience, the researchers recruited 1,082 Democrats and left-leaning Independents—the people most likely to strongly oppose the message—and used the same experimental setup as before.
Because the researchers believed that the reluctance to openly support deportation stemmed from concerns about appearing anti-immigrant, they examined whether rationales protect against this perception. As in the previous study, participants guessed whether their partner had approved a donation to the US Border Crisis Children’s Relief Fund—an organization that supports immigrant children—and also chose whether to approve or deny a $1 bonus to their partner.
Using a rationale affected both measures of audience perception, the researchers found. In the cover group, 13.4 percent of participants believed their partner had donated to the relief fund, compared with 8.5 percent in the no-cover group. And 74 percent of participants in the cover group denied their partner a bonus, compared with 80 percent in the no-cover group.
Rationales, fake news, and misinformation
It may seem strange that liberals would give any credence to a Tucker Carlson clip as a rationale. But it’s important to remember that “rationales don’t have to be convincing,” Egorov says; they just have to convince the audience that someone else could have been convinced. Carlson’s show is popular and easy to find, and it’s possible to imagine someone else encountering it and being influenced by it, even if you would not be. This, in turn, makes it easier to imagine that the person is not necessarily anti-immigrant. In other words, Egorov explains, rationales “make it more difficult to infer the real reasons for adopting a particular opinion. They introduce noise.”
Egorov believes that this water-muddying quality of rationales helps explain, in part, the power of misinformation online.
“If you consider the role of fake news not so much as persuasive but as a social cover, it explains why misinformation can have the power to influence people without persuading them,” he says. There is a kind of snowball effect: posting misinformation as a rationale makes it more acceptable to express a stigmatized view and encourages other people to express it as well. Over time, a view that once seemed fringe can move into the mainstream.
The research also suggests a possible countermeasure. Several social-media sites have experimented with labeling misinformation as such. Egorov and his colleagues suggest that such labels signal to the audience that the original poster knew the information was false and chose to share it anyway, possibly reducing its value as social cover. And without that protection, fake news no longer works as a rationale; it is simply false.