“Humans are natural social learners. We’re constantly scanning the environment to understand what other people are doing and what we can learn from it,” says William Brady, assistant professor of management and organizations at Kellogg. “Social learning happens whenever we observe people, get feedback from them, imitate them, and incorporate that information into our understanding of the rules.”
Social media represents a new frontier for this type of learning. What happens when this all-important observation and imitation of others is mediated by algorithms controlled by tech companies whose goal is to keep people’s attention on their platforms?
All kinds of problems, according to Brady, Joshua Conrad Jackson of the University of Chicago, Björn Lindström of the Karolinska Institute, and MJ Crockett of Princeton. In a new paper, they present a framework describing the hazards of social learning in the digital age.
The researchers argue that the way platforms’ algorithms filter content interferes with the strategies people typically use for social learning, leading to distorted perceptions of the world and facilitating the spread of misinformation and extreme views. Fortunately, Brady also believes that tweaks to these algorithms could reduce these problems while still serving users engaging content.
Why we rely on PRIME information in social learning
When we’re in the throes of social learning, we use shortcuts to determine which information is most important. After all, we can’t pay attention to everything.
Instead, previous research suggests, we turn our attention to what Brady calls “PRIME” information: prestigious, in-group, moral, or emotional. Prestige information comes from someone who is considered successful, in-group information comes from a peer, moral information concerns whether people behave according to shared moral rules, and emotional information is emotionally charged (and often negative).
There are some very good reasons why people are biased toward attending to PRIME information. For example, “it can be helpful to have a bias toward information we learn from successful people. They have some kind of knowledge that helped them succeed in their environment,” Brady explains. Biases toward in-group information help people navigate their particular social environment and, more broadly, can facilitate cooperation. Biases toward moral and emotional information, respectively, can help communities stigmatize unethical behavior and stay alert to social threats.
Crucially, however, PRIME information is most useful when it is both rare and diagnostic (meaning that the person who appears to be successful is actually successful, or the behavior labeled as unethical is in fact unethical).
This explains why the utility of PRIME information may break down in online environments, the researchers argue.
How come? In short, due to algorithmic amplification, we are inundated with PRIME information that is neither rare nor particularly diagnostic.
When PRIME information collapses
Social media algorithms are designed to maximize engagement: clicks, likes, time spent on the platform, and so on. And because our brains are predisposed to see PRIME information as important—and therefore attractive—algorithms have learned over time to serve us a lot of it. As a result, users have an incentive to post in ways that appeal to our taste for prestigious, in-group, moral, and emotional content.
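In rough computational terms, this kind of ranking amounts to sorting candidate posts by a predicted-engagement score. The sketch below is a toy illustration only, not any platform’s actual system; the `Post` fields and the `rank_feed` function are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # hypothetical model estimate of clicks, likes, watch time

def rank_feed(posts: list[Post]) -> list[Post]:
    # Engagement-based ranking: surface the posts most likely to hold attention.
    # Because PRIME content reliably grabs attention, it tends to score high here.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```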
Often, this incentive leads to outright fakery. Take a photo in what appears to be a private jet, and you’ll look like you’ve acquired prestige and wealth (even if it’s just a $64-an-hour photo studio).
On the internet, then, prestige and success are not as tightly linked as they are in most real-world environments. And when algorithms amplify social media influencers who have “faked” success with a highly polished presentation, and other users learn from their words or actions, the bias toward prestigious sources of information stops serving its purpose.
The same breakdown can occur with other types of PRIME information. Social learning’s bias toward in-group information has historically fostered cooperation and understanding within a community—to get along, it helps to have a shared set of rules.
But online, it can play a more divisive role, shaping how people perceive social norms and politics. It is easy for in-group information to encourage groupthink and, ultimately, extremism. And when social media users regularly see extreme views accompanied by lots of likes, they may begin to believe the views are more common than they are.
“Get the right context and the right people, add algorithmic reinforcement, and it can distort social learning in ways that make extreme views seem more legitimate and widespread,” says Brady.
Brady sees the January 6 riot as a PRIME example. “How does a fringe right-wing view gain legitimacy and get a critical mass of people to organize and storm the Capitol?” Brady asks. “People wouldn’t have seen these views if they weren’t boosted by algorithms, and that’s because they’re getting engagement. Algorithms put fringe views into public discourse and allow people to organize around them.”
PRIME online information can also make people think the country is more polarized than it is. Most partisans greatly overestimate how far their views differ from those on the other side of the political spectrum, and social media interactions are one source of this misunderstanding.
When platforms expose users to information about their political out-group—those who don’t share their political leanings—the posts they show are often extreme and colored by commentary from the user’s own political in-group. And that commentary is often negative, moralized, and emotional.
“This is exactly the type of information that algorithms amplify. People see a skewed portrayal of the other side,” says Brady.
How ‘bounded diversification’ could change social media for the better
Engagement-based algorithms and PRIME information, then, make for a combustible, unhelpful combination. Can the problem be solved?
One option would be for platforms to show social media users a wider range of opinions. However, this approach could have unintended consequences, such as exposing moderate users to extreme views, which is not necessarily an improvement.
Instead, Brady and colleagues suggest two alternatives. One is to increase the transparency of social media algorithms. Simply telling users why they’re seeing a particular post—because a close friend shared it, or because the platform thought it would be engaging, for example—would help users understand how the technology works and think more critically about what they’re consuming online.
The researchers call the other solution “bounded diversification.” This approach involves adjusting algorithms to limit the amount of PRIME information that users see.
Today, algorithms rank content by predicted engagement and show users the posts most likely to keep them on the platform. As we’ve seen, this pushes too much PRIME information into the mix. Bounded diversification would introduce a penalty for PRIME information, so that algorithms rank this type of content lower and show it to users less often.
The non-PRIME information that remains in the mix would still be content the algorithm identifies as likely to engage users. Depending on the user, this could be funny memes, historical photos, or cute puppy videos—content that still grabs attention but is less likely to inflame.
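The researchers describe the idea at a conceptual level rather than prescribing an implementation, but the basic mechanism can be sketched as a penalty term in the ranking score. In the hypothetical sketch below, `prime_score` stands in for some upstream classifier’s estimate of how much prestigious, in-group, moral, or emotional content a post carries, and the `penalty` weight is arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # hypothetical estimate of clicks, likes, watch time
    prime_score: float           # hypothetical 0-1 estimate of PRIME-laden content

def rank_feed_bounded(posts: list[Post], penalty: float = 0.5) -> list[Post]:
    # Bounded diversification, as sketched here: engagement still drives the ranking,
    # but PRIME-heavy posts are down-weighted rather than removed outright.
    return sorted(posts,
                  key=lambda p: p.predicted_engagement - penalty * p.prime_score,
                  reverse=True)
```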
Brady sees bounded diversification as a change platforms might realistically make. “At the end of the day, we know social media companies want people to engage because that’s how they make money. They have to keep their platforms running,” he says. “So we recommend this approach because it would still give people content they find interesting, but it wouldn’t be so dominated by PRIME information.”