While the presence of bots on Twitter is undeniable, the scale of the problem is difficult to pin down. Determining whether a given post came from a bot account or a real user can be surprisingly complex.
But however numerous they are, bots have significant implications for how all social media content is received, according to a new study coauthored by Adam Waytz, professor of management and organizations at the Kellogg School. The research, led by Shane Schweitzer of Northeastern University and Kyle Dobson of the University of Virginia (both former Kellogg postdocs), finds that partisanship affects whether posts are perceived as created by humans or by bots.
“When you see something that disagrees with you, you’re more likely to attribute it to a bot than something you agree with,” Waytz explains. And believing that content was written by a robot, the study shows, “makes you reject information.” It is a cycle that can contribute to further political conflict.
Understanding bot bias
The researchers conducted several experiments to understand how partisanship shapes perceptions of whether a tweet was generated by a human or a bot.
In the first, they recruited 491 Americans for an online study. Participants first read about the existence of Twitter bots and then indicated whether they personally identified more as a Democrat or a Republican.
Participants then saw four actual tweets from media organizations, each followed by two responses, one stereotypically liberal and the other stereotypically conservative. They rated, on a scale from one to seven, the degree to which they believed each response came from a real human or a bot.
Both Republicans and Democrats saw tweets from their political opponents as more bot-like than tweets from their own side. On the one-to-seven human-to-bot scale, Republicans rated conservative tweets as less bot-like (3.82) than Democrats did (4.69), while Democrats rated liberal tweets as less bot-like (3.17) than Republicans did (3.69). The researchers called this pattern “bot political bias.”
The researchers noticed another interesting trend: conservative tweets were generally more likely to be attributed to bots than liberal ones. This perception is partly accurate, explains Schweitzer. “There is some evidence that bot-generated content tends to be conservative,” he says. Even so, “Democrats viewed conservative tweets as more robotic than Republicans did,” suggesting that partisan bias also plays a role.
Comparing bots and humans
While the first experiment revealed the existence of bot political bias, it did not directly examine how people perceive human-generated versus bot-generated tweets. Would the bias still hold, the researchers wondered, with real tweets of both kinds? To answer that question, they assembled a selection of actual human- and bot-generated tweets published before the 2016 election.
They presented a new pool of 498 participants with tweets about each of four topics likely to elicit partisan responses: Trump, Clinton, Black Lives Matter, and Make America Great Again. For each topic, participants saw a conservative tweet written by a human, a conservative tweet generated by a bot, a liberal tweet written by a human, and a liberal tweet generated by a bot. Once again, they rated from one to seven how human or bot-like they perceived each tweet to be.
On average, participants accurately identified bot-generated tweets as more bot-like than human-written ones: bot tweets received an average score of 4.26, compared with 3.29 for human tweets. However, partisanship still had a significant effect: Republicans rated liberal tweets as more bot-like than Democrats did, and Democrats rated conservative tweets as more bot-like than Republicans did.
This provided important confirmation of what the researchers saw in the first experiment, says Schweitzer. “Bot political bias appeared for both bot and human tweets, meaning that people not only recognized real bots, but were also more likely to believe that verified humans were bots.”
Discounting bot content
The first two experiments showed that partisan bias influences which social media posts look bot-generated. In the final experiment, the researchers wanted to understand the consequences of this bias: what happens when people believe a post comes from a human versus a bot?
A new group of 500 participants first answered various questions designed to measure their party affiliation. Democrats were then shown a tweet praising Republican Sen. Ted Cruz, while Republicans saw a tweet praising Democratic President Joe Biden. Critically, some participants were told that the tweet was written by a real person, while others were told that the same tweet was generated by a bot.
Participants then answered a series of questions about the tweet they saw, rating from one to seven the extent to which they believed the tweeter was capable of complex thinking, how seriously they thought the tweet should be taken, and how credible they found it. Not surprisingly, tweets attributed to bots scored much lower on all three scales than those attributed to humans.
“When people think they’re interacting with a bot, they trust online conversation less and show less willingness to engage with it,” says Schweitzer. Combined with previous studies showing that people perceive opposing views as more bot-like, the results of this experiment “suggest that bot political bias can contribute to indicators of political polarization.”
The dangers of bots (and the bot hype)
To Schweitzer, the research suggests just how deep America’s partisan divide runs. “I’ve been surprised by how consistently this bias has emerged: people are very quick to dismiss other views as not only wrong but not even human,” he says.
The current conversation around bots may not be helping, Waytz points out. The belief that bot accounts are widespread can poison the well, making all online content appear suspect. The research, he says, “makes me wonder if all the hype around bots might actually have more damaging or biasing effects than the bots themselves.”
But understanding the problem is critical. After all, “Twitter is not the only place where you can encounter technology presented as human. ChatGPT, deepfakes, and robocalls are just a few examples of how humanlike technology is reaching other parts of society,” says Schweitzer. “More research is needed to understand the human psychology of this increasingly technological world.”