The ubiquity of social media applications makes them fertile ground for researching the dynamics of modern life. Kellogg faculty explored the good and the bad of social media—from misinformation and bias to business models and influencer marketing—and how those characteristics reflect and shape the offline world.
1. Politics is everywhere… except your phone
In the weeks leading up to an election, it often feels like we can’t escape the political news. But Kellogg’s Guy Aridor and his colleagues found that most people’s smartphones are nearly immune to politics—even during a US presidential race.
The researchers used an app that tracks how often certain keywords appear in users’ smartphone apps, including email, messaging, social media, news, music and video streaming, and web browsers. From a list of more than 500 election-related terms and politicians’ names, they found that the average person encountered only 13 of the keywords on a typical day in the fall of 2024.
That’s fewer than half the terms one would encounter reading a single news article about the election. And only 5 percent of people in the study were exposed to political terms from news apps or websites.
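The study’s core measurement—counting how many election-related keywords show up in a day’s worth of on-screen text—can be sketched roughly as follows. This is a minimal illustration, not the researchers’ actual pipeline; the keyword list and sample text are invented:

```python
import re

# Hypothetical subset of the 500+ election-related terms the study tracked.
ELECTION_KEYWORDS = {"ballot", "senate", "electoral college", "swing state"}

def count_keyword_hits(screen_text: str, keywords: set[str]) -> int:
    """Count how many distinct keywords appear in one day's captured text."""
    text = screen_text.lower()
    return sum(
        1 for kw in keywords
        if re.search(r"\b" + re.escape(kw) + r"\b", text)
    )

day_of_text = "Checked the weather, streamed a playlist, read one swing state poll."
print(count_keyword_hits(day_of_text, ELECTION_KEYWORDS))
```

Averaging this daily count across users is what yields a figure like the 13 keywords per day reported above.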
“Other than the very politically engaged, people don’t turn on the news at all,” says Aridor. “They receive campaign exposure sporadically throughout the day. So the low engagement we find is not due to heavy periods of political news consumption.”
The researchers further found that this low exposure was not due to social media platforms algorithmically suppressing political content. Instead, it’s largely determined by users’ personal preferences about what content they choose to watch — something that’s rarely political.
“Everyone has their own little world about the types of content they watch,” says Aridor.
2. The vicious cycle of misinformation and outrage
Misinformation can spread like wildfire on social media, despite efforts by platforms and regulators to moderate and flag inaccurate content. Research by Kellogg’s William Brady and colleagues found one reason why misinformation is so combustible online: rage.
In ten studies that analyzed more than a million social media posts, researchers found that misinformation was more likely to cause outrage than credible news.
Anger, in turn, leads people to share these often misleading posts on social media. And it makes them more willing to do so without actually reading the article linked in the post, even if they are otherwise good at spotting misinformation.
“We actually find that people are not terrible at distinguishing between misinformation and credible news,” says Brady. “But here’s the key: If you give them an outrage article with misinformation, that ability they have goes out the window; they’re more likely to share it anyway.”
Even people who disagree with the misinformation they see on social media may unwittingly help promote the content simply by engaging with it, Brady notes.
“The disinformation ecosystem isn’t just driven by user behavior; it’s also driven by algorithms,” he says. “When you engage in misinformation — even in arguments — you’re actually helping to increase misinformation in the ecosystem, because you’re letting the algorithm know that it’s engaging.”
3. Influencers, be careful with indulgence
Because of their ability to attract engagement, influencers loom large in the social media and marketing world. But while influencers are commonly pictured flaunting expensive purchases and carefree lifestyles, research shows that this behavior can backfire.
A study by Kellogg professor Maferima Touré-Tillery and graduate student Jessica Gamlin found that people are less likely to connect with content creators who post about self-indulgent behavior—for example, eating junk food or watching TV.
The researchers called this phenomenon the “bad influence effect.”
Influencers using hashtags like #selfcontrol and #willpower had, on average, a significantly higher number of followers than those using hashtags like #indulge and #indulgence. And users were less likely to follow accounts that posted about watching TV before bed or accounts that used curse words.
“If I have a goal to be productive today and I pass someone who’s talking about binge-watching Netflix, I’m probably not going to connect with them,” Gamlin says. “Instead, I will avoid social contact with them to try to get back to my goal of being productive.”
Across their studies, the researchers found a strong link between a participant’s commitment to a personal goal and their reluctance to connect with a self-indulgent content creator: the stronger the goal, the less appealing the connection.
“Naturally, people seem to be more open to advice and recommendations from those with whom they feel connected, whether that feeling of connection is mutual or only one-sided,” says Touré-Tillery.
4. “Follow-back” is not color blind
“Follow-back” – where a user follows an account that recently followed them – is a common practice in social media, allowing people to make new connections and expand their network. And while it’s often the result of a split-second decision, it can still reveal unconscious biases.
This is what Kellogg’s Maryam Kouchaki found when she joined a team of researchers examining how race and politics affect the likelihood of a follow-back. The team created 18 X profiles, split evenly between Black and white identities and liberal, conservative, or neutral political leanings. Over the course of two weeks, they used these accounts to follow 6,000 accounts on X.
When they looked at follow-back rates, they found that race played a role: people were 24 percent less likely to follow back the Black-identifying accounts than the white-identifying ones, and the pattern held whether the person making the decision was conservative or liberal.
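A figure like “24 percent less likely” is a relative rate difference, not a percentage-point gap. A tiny sketch of the arithmetic, using made-up follow-back rates rather than the study’s data:

```python
def relative_gap(rate_a: float, rate_b: float) -> float:
    """Percent by which rate_a falls below rate_b (relative, not percentage points)."""
    return (rate_b - rate_a) / rate_b * 100

# Hypothetical follow-back rates, chosen only to illustrate the calculation.
black_rate, white_rate = 0.19, 0.25
print(round(relative_gap(black_rate, white_rate)))  # 24 (percent lower)
```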
“Even with extremely liberal users, you still find racial bias,” Kouchaki says.
The researchers suggest this result reflects automatic thought processes that kick in when the motivation to appear unbiased is low: in split-second decisions like these, people across the political spectrum may be more prone to bias.
“While previous research shows that liberals are more likely to report themselves as less racist and to care about these issues, this study shows that when they make a quick, gut decision where they don’t know they’re being watched, that’s not true,” says Kouchaki.
5. When paid subscriptions are better than free ones
Many of the biggest social media platforms built their user base by offering their product for free. However, growing concern among users and regulators about data privacy has led some platforms to shift to a subscription model where less user data is collected or sold, or to a hybrid of paid and free options.
Kellogg’s Sarit Markovich and a colleague built a model to weigh these options and determine when each business model makes the most sense.
They found that when the value of the data a platform collects from its users and the strength of the platform’s network effects are both high, offering the app for free makes sense. But if either factor is weak, a subscription model is the smarter play.
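The intuition above can be captured in a toy decision rule. This is a sketch in the spirit of the finding, not the authors’ model; the normalized inputs and the 0.5 threshold are invented for illustration:

```python
def choose_business_model(data_value: float, network_effects: float,
                          threshold: float = 0.5) -> str:
    """Pick 'free' only when both drivers are strong; otherwise 'subscription'.

    data_value and network_effects are assumed normalized to [0, 1];
    the threshold is an arbitrary illustration, not from the paper.
    """
    if data_value >= threshold and network_effects >= threshold:
        return "free"          # monetize data and grow the network
    return "subscription"      # weak data value or weak network effects

print(choose_business_model(0.8, 0.9))  # both strong -> free
print(choose_business_model(0.8, 0.2))  # weak network effects -> subscription
```

The point of the rule is that both conditions must hold: a free app is only worth the forgone subscription revenue when the platform can monetize the data it collects and the network keeps compounding in value.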
“[Companies] should always be thinking about the commercial value of their data and the power of their network effects,” says Markovich. “If those change, their strategy will have to change as well.”
The model also offers insights into how policymakers can better regulate platforms and protect consumers. Discouraging hybrid “pay-for-privacy” models—as some European agencies have attempted—would likely reduce competition, raise prices, and incentivize more aggressive data collection.
“Regulators should recognize that there is no one-size-fits-all solution,” says Markovich. “Protecting privacy doesn’t always require ‘big gun’ tools like banning entire business models. In many cases, simpler measures like price caps can better protect privacy. By limiting subscription prices, you can make privacy choice more affordable and accessible to users.”
