As has been well documented, because these models are calibrated to generate original content, inaccurate results are inevitable — even expected. So these tools also make it much easier to manufacture and spread disinformation online.
The speed of these advances has some very knowledgeable people nervous. In a recent open letter, tech experts including Elon Musk and Steve Wozniak called for a six-month pause in AI development, asking: "Should we let machines flood our information channels with propaganda and untruth?"
No matter how fast these AI systems progress, we'll have to get used to a lot of misinformation, says William Brady, a professor of Management and Organization at the Kellogg School who researches online social interactions.
“We have to be careful not to get distracted by science fiction and focus on specific risks that are the most pressing,” says Brady.
Below, Brady dispels common fears about AI-generated misinformation and clarifies how we can learn to live with it.
Misinformation is a bigger issue than disinformation
Brady is quick to distinguish between misinformation and disinformation. Misinformation is content that is misleading or unsubstantiated, while disinformation is created and deployed strategically to deceive the public.
Knowing the difference between the two is the first step in assessing the risks, Brady says. Most of the time, when people worry about the potential harm of generative AI, they fear that it will lead to a spike in disinformation, he notes.
"One obvious use of LLMs is that they make it easier than ever to create things like disinformation and deepfakes," Brady says. "This is a problem that could increase with the advent of LLMs, but it's really only one component of a larger problem."
Although deepfakes are problematic because they are intended to mislead, their overall share of online content is low.
"There is no doubt that generative artificial intelligence will be one of the main sources of political disinformation," he says. "Whether or not it will really have a huge impact is more of an open question."
Brady says the problems are more likely to lie in the perpetuation of misinformation than in the ease of producing it. LLMs are trained to recognize patterns from scanning billions of documents, so they can exude authority while lacking the ability to tell when they're off base. Issues arise when people are misled by small errors produced by AI and trust them as they would information coming from an expert.
“LLMs learn how to sound confident without necessarily being precise,” says Brady.
Does this mean that misinformation will grow exponentially the more we use generative AI? Brady says we can't be sure. He points to research showing that misinformation makes up only a small share of content on social media platforms; some estimates put it as low as 1 to 2 percent of the entire information ecosystem.
"The idea that ChatGPT will suddenly make disinformation a bigger problem just because it increases supply is not empirically validated," he says. "The psychological factors that lead to the spread of misinformation are more of a problem than supply."
Part of the problem behind misinformation is us
Brady believes that perhaps the biggest challenge with misinformation is not just the rapid advancement of technology, but also our own psychological tendency to believe machines. We may not be willing to discount AI-generated content because it's cognitively taxing to do so, he says. In other words, if we don't take the time and effort to look critically at what we read online, we become more prone to believing misinformation.
“Humans have an ‘automation bias,’ where we assume that information generated by computer models is more accurate than if humans generated it,” says Brady. As a result, people are generally less skeptical of AI-generated content.
Misinformation spreads when it resonates with people and they share it without questioning its truth. Brady says people need to become more aware of the ways in which they unknowingly help create and spread misinformation. Brady calls it a “pollution problem.”
"The problem with misinformation tends to be on the consumer side, the way people share it socially and draw conclusions from it, more than on the production side," says Brady. "People believe what they read and amplify it as they share it. In effect, they have been misled and are reinforcing it."
Educate yourself
Realistically, we can't wait for regulatory oversight or company controls to curb misinformation, Brady says; they will have little effect on how content is created and distributed. Since we can't keep pace with the exponential growth of AI, Brady says we need to learn to be better at recognizing when and where we might encounter misinformation.
In an ideal world, corporations would have an important role to play.
“Companies need to make responsible interventions based on the products they put out there,” he says. “If the content is flagged, at least people can decide if it’s credible or not.”
Brady envisions something similar being applied to disinformation and generative artificial intelligence. He favors online platforms helping users spot misinformation by flagging content generated by artificial intelligence. But he knows there's no point in waiting for tech companies to develop effective controls.
“Companies aren’t always motivated to do all the things they should, and that’s not going to change in our current setup,” he says. “But we can also empower individuals.”
Helping users develop situational awareness of the most common scenarios in which disinformation is likely to occur can make it less likely to spread.
Since older adults tend to be the most vulnerable to misinformation, providing basic education, such as digital-literacy training videos or an online resource for adults who are not as internet savvy as their Gen-Z counterparts, could go a long way toward reducing online misinformation. These efforts could include public awareness campaigns that highlight the role algorithms play in over-promoting certain types of content, including extreme political content.
"It's about educating people about the contexts in which you might be more susceptible to misinformation," Brady says.