There have been countless high-profile political “deepfakes” making the rounds on social media, including one in September that involved Florida Gov. Ron DeSantis dropping out of the presidential race. There have also been repeated warnings that the technology would become harder to identify and even harder to stop.
However, not much had been done to address the issue until this week, when sexually explicit deepfake images depicting Taylor Swift went viral, attracting attention from lawmakers and the masses alike. The White House even responded, calling on Congress to take action.
“We are concerned by the reports of images being released… fake images to be exact. And it’s troubling,” White House press secretary Karine Jean-Pierre said during Friday’s White House press conference. “So while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people.”
It was also on Friday that the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) called for deepfakes to be made illegal, after the technology was recently used to create explicit images of Taylor Swift as well as a new comedy special featuring the late George Carlin.
“Sexually explicit, AI-generated images depicting Taylor Swift are upsetting, harmful and deeply concerning,” SAG-AFTRA said in a statement.
Will there be quick action?
Deepfakes have long been seen as a serious problem, but now that pop superstar Taylor Swift is involved, it looks like swift action may finally be taken to tackle it.
“Taylor Swift’s deepfakes are obviously damaging to her brand, if not evidence of copyright theft. After all, a deepfake of high quality (and I hate to use the word ‘quality’ in relation to the images just released) is based on a lot of original images that certainly belonged to others. These images are grist for the AI mill that grinds them until they tell a story of someone else’s choosing,” explained Dr. Jim Purtilo, associate professor of computer science at the University of Maryland.
However, Purtilo warned that we are still only seeing the beginning of the potential dangers of deepfakes.
“Public figures will soon be endorsing political candidates, promoting schlock products and enabling fraud — or so it will seem,” he added. “With enough computing power, any malicious person who can get their hands on enough original images can construct all kinds of fraudulent material to deceive the public. And nothing I’ve seen so far in the proposed regulations will have any impact on that trend, since computing will only become more accessible, affordable and easy to use.”
More than edited photos
Photos, film and video have been manipulated for about as long as those technologies have existed. A famous photo of then-Union general Ulysses S. Grant on horseback was in fact among the first “edited” photos — his head was placed on another officer’s body.
What’s different now is how easily deepfakes can be created, with virtually no skill required.
“We’ve been here before. In the early 2000s, the influx of rapidly evolving photo editing tools, web distribution services and commercial porn sites resulted in X-rated fake images of popular artists such as Britney Spears and Madonna being sold online. Attempting similar tactics with video was technically problematic, but today’s deepfakes allow porn producers to create seemingly realistic imagery from whole cloth,” noted tech industry analyst Charles King of Pund-IT.
“I’m not sure there’s a way out of this. On one level, the AI-related provisions that arose during the recent Screen Actors Guild (SAG) strike, along with legislation allowing artists to ‘own’ their physical images and voices, could provide a legal means to challenge and limit the creation of deepfakes,” King added.
Even then, however, those affected would still have to track down those responsible for creating the fakes.
“Additionally, we’re already seeing Meta, X and other social media companies claim to be ‘working to remove’ Swift-style deepfakes,” King continued. “We’ve heard this song from these companies before, but Swift’s public status, clean media persona and large financial assets would make her a formidable opponent if she chose to sue them for allowing the posting, circulation and sharing of false, defamatory images.”
Platforms that host such content may also need to be held responsible, at least financially.
“Hitting social media, the Internet and other tech giants in the pocket isn’t always successful. But it’s usually the best way to get their attention and change bad behaviors,” King said.
Beyond a public image
The explicit images posted on X, the social media platform formerly known as Twitter, garnered more than 27 million views in less than 24 hours after they were posted. There is certainly a risk that, if left unaddressed, such images could damage Swift’s brand, just as the late George Carlin’s AI-generated “comedy special” could tarnish his legacy.
However, the biggest threat from deepfakes may be their effect on average Americans, who don’t have powerful guilds to protect them.
“Our biggest problems in the future will come from those using this technology to facilitate phishing and identity theft, once the tools are good enough to create realistic fakes depicting non-public figures,” Purtilo added.