Washington: Deepfakes generated by artificial intelligence have proliferated on social media this month, claiming a string of high-profile victims and thrusting the risks of manipulated media into the public conversation ahead of a looming US election cycle.
Pornographic images of singer Taylor Swift, robocalls mimicking US President Joe Biden’s voice, and videos of dead children and teenagers detailing their own deaths have all gone viral – but not one of them was real.
Misleading audio and visuals created using artificial intelligence aren’t new, but recent advances in AI technology have made them easier to create and harder to detect. The torrent of highly publicized incidents just weeks into 2024 has escalated concern about the technology among lawmakers and ordinary citizens.
“We are alarmed by the reports of the circulation of false images,” White House press secretary Karine Jean-Pierre said Friday. “We are going to do what we can to deal with this issue.”
At the same time, the spread of AI-generated fake content on social networks has offered a stress test of platforms’ ability to police it. On Wednesday, explicit AI-generated deepfake images of Swift amassed tens of millions of views on X, the Elon Musk-owned platform formerly known as Twitter.
Although sites like X have rules against sharing synthetic, manipulated content, the posts portraying Swift took hours to remove. One remained up for about 17 hours and drew more than 45 million views, according to The Verge, a sign that such images can go viral long before action is taken to stop them.