How explicit deepfake photos of Taylor Swift are raising concerns about AI regulation

The exploitation of generative AI tools to create potentially harmful content targeting all types of public figures is increasing quickly

Complaints have erupted over regulation of social media platforms and content generated by artificial intelligence (AI) after explicit fake photographs of American pop star Taylor Swift were circulated online.

The images of Swift, thought to have been created by AI, were predominantly posted last week to X, previously known as Twitter.

The platform has since said it is “actively removing all identified images” and taking “appropriate actions” against the accounts responsible for posting them as the platform has a “zero-tolerance policy” towards such content.

Here, ITV News explains why the photos are raising concerns over the extent of AI moderation.

Do social media platforms have measures in place to moderate sharing of AI content?

The exploitation of generative AI tools to create potentially harmful content targeting public figures is increasing quickly, and that content is spreading faster than ever across social media.

Over the weekend, in an effort to control the spread of the deepfake images, searches for Swift's name on X without quote marks returned an error message.

“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” Joe Benarroch, head of business operations at X, said in a statement.

But speaking more generally, Ben Decker, who runs the digital investigations agency Memetica, said online companies "don't really have effective plans in place to necessarily monitor the content".

“This is a prime example of the ways in which AI is being unleashed for a lot of nefarious reasons without enough guardrails in place to protect the public square,” he added in reference to Swift.

Taylor Swift is yet to comment publicly on the explicit AI images that circulated of her online. Credit: AP

X's website states: "You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm ('misleading media')."

But CNN reports that X has largely gutted its content moderation team and relies on automated systems and user reporting.

Meta, too, made cuts to its teams that tackle disinformation and coordinated troll and harassment campaigns on its platforms, people with direct knowledge of the situation told CNN.

But both Meta, the parent company of Facebook and Instagram, and TikTok require users to label content generated or edited with AI.

None of the companies has said publicly how it would respond to violations of these rules.


What are the UK and US governments doing to prevent the spread of explicit AI content?

Neither the UK nor the US government currently has specific regulations in place for AI on social media.

However, Sunak's government passed the Online Safety Bill last year, which introduced rules compelling social media platforms and other sites hosting user-generated content to remove illegal material, with a particular emphasis on protecting children from harmful content.

Firms that break these rules face large fines from the sector's new regulator, Ofcom.

The bill does not currently address AI-generated content explicitly.

Meanwhile, in the wake of the Swift deepfake images, US politicians have called for new laws to moderate AI use.

US congressman Joe Morelle is one of several politicians who have condemned the deepfake photos. Credit: AP

US congressman Joe Morelle described the fake pictures as “appalling”.

“The spread of AI-generated explicit images of Taylor Swift is appalling and sadly, it’s happening to women everywhere, every day,” he said on X.

“It’s sexual exploitation, and I’m fighting to make it a federal crime with my legislation: the Preventing Deepfakes of Intimate Images Act.”

Meanwhile, congresswoman Yvette Clarke said what has happened to Swift is "nothing new".

“For years, women have been targets of deepfakes without their consent. And with advancements in AI, creating deepfakes is easier and cheaper,” she said on X.

“This is an issue both sides of the aisle and even Swifties should be able to come together to solve.”
