Generative AI, then, presents users with a fundamental trade-off: to maximize its performance benefits, you must sacrifice some of the product’s “fidelity”—that is, how faithfully or exactly it adheres to your unique style or perspective.
“If the whole point is to work faster and increase productivity, then you have to give up something for the sake of speed somewhere,” says Sébastien Martin, assistant professor of managerial economics and decision sciences at Kellogg.
For individuals, this trade-off can be burdensome, but at least it’s simple. Either accept the AI-generated output as “good enough” or spend more time customizing it, perhaps by providing more information up front, tweaking the prompt, or tweaking the output afterwards. People with a particularly distinct style or perspective may even decide that personalizing the AI is more trouble than it’s worth and abandon the tool altogether.
But what happens when everyone starts using these tools? Does the speed-fidelity trade-off have broader societal consequences, both short-term and long-term?
In new research with his colleagues Francisco Castro and Jian Gao of UCLA, Martin finds that using artificial intelligence to create content will increase the homogeneity of what we collectively produce, even if we try to personalize the output. This content will also inherit any biases the AI may have acquired during its training process. In other words, the tastes and biases of a few workers at AI companies may eventually permeate society. The research also finds that these results will worsen as AI-generated output is used to train the next generation of AI.
On a more positive note, however, the study suggests that creating interactive AI tools that encourage user input and facilitate manual edits can prevent the worst of these outcomes.
Opportunities and risks
Martin is well aware of the speed-fidelity trade-off inherent in generative AI. He is, moreover, a native French speaker who regularly relies on Grammarly to improve his written English. “I save a lot of time!” he says.
However, Martin acknowledges that using the tool inevitably shapes the articles he writes. There’s nothing particularly wrong with that: idiosyncratic preferences around punctuation, word choice and sentence structure abound. “What Grammarly will write will not be exactly what I would write,” he says. “There are different ways to write the same thing. Sometimes it’s just a matter of taste.”
But other times, the differences are more substantial. The choice to describe an event as a “protest” or a “riot,” or to write a news article from a center-right rather than a center-left perspective, can significantly shape the public’s impression of what happened. And more generally, what happens over time when everyone’s collective taste is influenced by the same algorithms?
To find out, Martin, Castro and Gao built a mathematical model to simulate the consequences of an entire society using the same artificial intelligence tools.
In their model, users with a range of different preferences use AI to work on a given task and can choose to personalize the output as much as they like. This personalization is represented as an exchange of information about each user’s preferences. Users decide how much effort to spend, depending on their particular situation: sharing more information means the AI will do a better job of capturing unique preferences, but it also takes more work. Sharing less information is quicker and easier, but produces more generic output. Even the best AI is unable to guess a user’s true preferences if that user shares limited information, but it can make an educated guess because it learns the variety of preferences in the population at large during its training.
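The flavor of this trade-off can be illustrated with a toy simulation. This is only a sketch, not the researchers’ actual model: the preference distribution, the cost thresholds, and the three-way accept/personalize/manual rule are all hypothetical simplifications chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, not taken from the paper.
ACCEPT_BAND = 0.5   # below this gap, personalizing isn't worth the effort
DIY_CUTOFF = 2.5    # beyond this gap, the AI saves no time at all

def produce(pref, default):
    """Return (output, choice) for one user facing the AI's default output."""
    gap = abs(pref - default)
    if gap < ACCEPT_BAND:
        return default, "accept"        # good enough: take the default as is
    if gap < DIY_CUTOFF:
        return pref, "personalize"      # share info / edit toward one's taste
    return pref, "manual"               # fringe taste: skip the AI entirely

prefs = rng.normal(0.0, 1.0, 10_000)    # spread of preferences in society
default = prefs.mean()                  # the AI's one-size-fits-all guess
results = [produce(p, default) for p in prefs]
outputs = np.array([o for o, _ in results])

print("std of preferences:", round(prefs.std(), 3))
print("std of outputs:    ", round(outputs.std(), 3))   # strictly smaller
```

Even in this crude version, the spread of what gets produced is narrower than the spread of what people actually prefer, because users near the default collapse onto it.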
How would individual users decide to use these tools, the researchers wondered, and what would their choices mean overall?
Inspired by algorithms
The model confirmed that, for users with the most common, middle-of-the-road preferences, the optimal decision is to accept the AI output as is. Users with less-common preferences, however, are incentivized to share additional information with the AI or to edit the output themselves to move it away from the default. And for users with fringe preferences, the AI saves no time at all: these users are better off creating the content themselves.
The model also found that AI-generated content is always more homogeneous than user-generated content. This is true at the individual level, where the benefits of AI come from replacing some part of our own eclectic preferences with more popular ones. But it also applies at the population level. The range of preferences expressed in AI-generated content was less variable than the range of preferences in the population—an effect reinforced because users with fringe tastes simply did not use the tool at all.
Additionally, uniformity is compounded over time, as AI-generated content is used to train the next generation of AI. This creates what the researchers call a “death spiral” of homogenization. The new AI is trained on more-homogenized data and is therefore more likely to generate homogenized content. Users then need more time and effort to tweak the AI output to suit their preferences, which they may not be willing to do, leading to even more homogenization.
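The retraining feedback loop can also be sketched in miniature. Again, this is a hypothetical toy, not the paper’s model: it simply assumes, as a crude stand-in for limited expressiveness, that an AI trained on narrower data can only render a narrower range of styles, so each generation’s output is a bit more compressed than the last.

```python
import numpy as np

rng = np.random.default_rng(1)
prefs = rng.normal(0.0, 1.0, 50_000)   # true preferences stay fixed
ACCEPT_BAND = 0.5                      # gap below which users take the default

def one_generation(train_std):
    """Everyone produces content with an AI trained on data of spread
    train_std; the AI can only render styles roughly within that spread."""
    reach = 2.0 * train_std
    default = 0.0
    return np.where(np.abs(prefs - default) < ACCEPT_BAND,
                    default,                        # accept the default as is
                    np.clip(prefs, -reach, reach))  # edits bounded by reach

std = prefs.std()
history = [std]
for _ in range(5):                     # retrain on each generation's output
    std = one_generation(std).std()
    history.append(std)

print([round(s, 3) for s in history])  # spread shrinks every generation
```

Because each generation trains on slightly narrower output, the expressible range keeps contracting—the “death spiral” in caricature.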
Another problem with artificial intelligence—bias—will also worsen over time, the model suggests. Since most AI is created and trained by a limited number of people (a typical approach is reinforcement learning from human feedback, or RLHF), it’s almost inevitable that some bias will creep into the initial AI outputs. Users can correct this bias with a little effort, but if the bias is small enough, it may not be worth the effort for many—or they may not even notice it.
But this understandable individual behavior is compounded if we all act in a similar way. Over time, “any AI bias can really turn into a societal bias,” says Martin.
This gives AI companies enormous influence over what society produces, even if users do their best to limit it.
A way forward
There are ways to mitigate the societal problems associated with artificial intelligence, the researchers find. One of the most promising is getting more people to interact with the AI and curate the work themselves. Homogeneity and bias cannot run rampant as long as the model’s output reflects users’ true preferences—which means that users must actually make those preferences clear.
In practice, this might mean that an AI asks users a few questions before generating a result, to get a better sense of their unique style or perspective. Or it could mean providing multiple outputs.
“Instead of giving you one version, try to give you two very contrasting versions that will allow you to choose between them,” suggests Martin.
He acknowledges that these proposals will slow users down in the short term—making the technology slightly less useful. But in the long run, this strategy “would definitely be a very good thing”—for both users and AI tools.
Martin remains largely optimistic about the role generative AI can play in content creation—as long as it continues to reflect the full range of human preferences. Indeed, making creation more accessible to a multitude of new writers, artists, or coders could have benefits.
“AI can also bring more people into something they couldn’t do before,” he says, “which could potentially add some diversity.”