The Prompt is a weekly roundup of AI’s hottest startups, biggest breakthroughs and business deals. To get it in your inbox, register here.
Welcome back to The Prompt.
Another AI startup is being (partially) swallowed by a tech giant.
On Friday, Amazon announced that it is hiring the co-founders and about a quarter of the employees of robotics AI company Covariant. The e-commerce giant has also obtained a non-exclusive license to the company’s artificial intelligence models, which it plans to incorporate into its fleet of industrial robots. Founded in 2017, Covariant has raised more than $240 million in funding from backers such as Index Ventures and Radical Ventures.
The announcement comes as similar deals have taken place in recent months with major tech companies hiring founders and teams of buzzy AI startups like Inflection, Adept and Character AI.
Now let’s get into the headlines.
ETHICS + LAW
Facial recognition company Clearview AI has been fined $30 million by a Netherlands-based privacy authority for scraping billions of images of people from the internet without their knowledge or consent and creating an “illegal database” of photos. Clearview’s chief legal officer, Jack Mulcaire, said the company has no customers in the EU and that the decision was itself “unlawful.” The company’s facial recognition tools have been used by law enforcement agencies in hundreds of child exploitation cases, Forbes reported last year.
Two voice actors, Karissa Vacker and Mark Boyett, have sued AI voice production startup ElevenLabs, accusing it of using hours of copyrighted audiobook narrations to produce customized synthetic voices that sound like their own and to train its underlying AI model on the recordings. According to the filing, the company removed one of the AI-generated voices from its platform last year after the actor reached out, but for months it was unable to remove the voice from its API because of a “technical challenge,” which allowed other websites to copy the voice. The company did not respond to Forbes’ request for comment.
POLITICS + ELECTIONS
Convicted fraudsters and conspiracy theorists Jacob Wohl and Jack Burkman used fake names to secretly launch an AI lobbying firm called LobbyMatic, Politico reported. The duo also used fabricated screenshots to falsely claim that companies such as Microsoft, Pfizer and Palantir had used the AI platform to generate insights and analyze legislation, according to 404 Media. Late last year, the company also created a fake profile to blog on Medium.
AI DEAL OF THE WEEK
ChatGPT maker OpenAI is in talks to raise several billion dollars in a round that would value the AI behemoth at $100 billion, the Wall Street Journal reported last week. Investment firm Thrive Capital, founded by billionaire Josh Kushner, is leading the round and plans to inject $1 billion into the company. Tech giants like Apple, Nvidia and Microsoft are also reportedly participating in the round.
Also of note: AI coding startup Codeium, which appeared on the Next Billion Dollar Startups list in August, raised $150 million at a $1.25 billion valuation.
DEEP DIVE
For many children visiting Disney World in Orlando, Florida, it was the trip of a lifetime. For the man who shot them on a GoPro, it was something more sinister: an opportunity to create images of child exploitation.
The man, Justin Culmo, who was arrested in mid-2023, admitted to creating thousands of illegal images of children taken at amusement parks and at least one high school using a version of the AI model Stable Diffusion, according to federal agents who presented the case to a group of law enforcement officials in Australia earlier this month. Forbes obtained details of the presentation from a source close to the investigation.
Culmo has been charged with a number of child exploitation crimes in Florida, including allegations that he abused his two daughters, secretly filmed minors and distributed child sexual abuse images (CSAM) on the dark web. He has not been charged with producing AI CSAM, which is a crime under US law. At the time of publication, his lawyers had not responded to requests for comment. He pleaded not guilty last year. A jury trial is set for October.
“This is not just a flagrant invasion of privacy, it’s a targeted attack on the safety of children in our communities,” said Jim Cole, a former Department of Homeland Security agent who tracked the defendant’s online activities during his 25 years as a child exploitation investigator. “This case strongly highlights the ruthless exploitation that artificial intelligence can enable when wielded by someone with intent to harm.”
The case is one of a growing number in which artificial intelligence is used to turn photos of real children into realistic images of abuse. In August, the Justice Department unsealed charges against Army soldier Seth Herrera, accusing him of using artificial intelligence tools to produce sexual images of children. Earlier this year, Forbes reported that Steven Anderegg, a Wisconsin resident, was accused of using Stable Diffusion to produce CSAM from images of children solicited via Instagram. In July, the UK-based nonprofit Internet Watch Foundation (IWF) said it had identified more than 3,500 AI CSAM images online this year.
Read the full story at Forbes.
WEEKLY DEMO
AI-generated reviews with five-star ratings are flooding mobile and smart TV app stores, according to media transparency company DoubleVerify, making it harder to decide which apps are worth downloading. Fraudsters use AI tools to give high ratings to fraudulent apps that constantly show ads—even when the phone’s screen is off—to earn revenue. But some telltale signs, such as unusual formatting and similar writing styles across different reviews, can help you spot fake app reviews.
AI INDEX
200 million
People use ChatGPT at least once a week, OpenAI said. That’s double the number of users it reported last November.
MODEL BEHAVIOR
An AI assistant from a startup called Lindy AI recently “rickrolled” a human customer who asked it for a video tutorial on how to set up the assistant. In an email response, the chatbot hallucinated and played the classic prank, directing the customer to the music video for Rick Astley’s 1987 song “Never Gonna Give You Up.”