Will the Taylor Swift AI deepfakes finally make governments take action?
Sam Cole and Melissa Heikkilä talk about why this story has caught people’s attention
Last week, explicit photos of Taylor Swift generated with artificial intelligence (AI) were shared on X, formerly known as Twitter.
These non-consensual AI-generated pornographic photos were viewed millions of times before being taken down. For 48 hours, Swift's name was unsearchable on X to prevent more of the deepfakes from being shared.
The AI images appear to have originated on the private instant messaging service Telegram, with someone then leaking them on social media.
In 2019, a study by DeepTrace Labs, an Amsterdam-based cybersecurity company, found that 96 per cent of deepfake video content online was non-consensual pornographic material.
Sam Cole, a journalist at 404 Media, and Melissa Heikkilä, senior reporter at MIT Technology Review, join Elamin to explain why this story has hit a nerve with those in Hollywood and Washington.
We've included some highlights below, edited for length and clarity. For the full discussion, listen and follow the Commotion with Elamin Abdelmahmoud podcast on your favourite podcast player.
Elamin: Last week, the conversation around deepfakes finally seemed to explode in a new way because explicit AI-generated pornographic images using Taylor Swift's likeness were shared on X, a.k.a. Twitter, and they went viral. Her fans reported the deepfakes. Then came an onslaught of reactions. First, the Hollywood actors' union condemned the images, then the White House felt compelled to weigh in.
We saw how these deepfakes of Taylor Swift went viral. But what you've been up to is investigating where they actually came from. What have you found?
Sam: What we found was that people in this Telegram group, which is an encrypted chat app, were sharing images of Taylor Swift, and that's where they basically kicked off from. They launched from this Telegram group, and people in that group were using Microsoft's Designer tool, which is a text-to-image generative AI tool, to create these images. So you just type in what you want to see and then it brings it up on the screen. They were getting around some of the guardrails that Microsoft already had in place by using pretty simple prompts that brought up images of celebrities.
Elamin: What's alarming about what you're telling me is that, in these Telegram groups, the images are being made all the time. We only know about these because someone shared them on social media. And they typically don't do that, right?
Sam: Exactly. The people in the group were kind of talking amongst themselves and saying, you know, who put these on Twitter? They don't necessarily usually use Twitter to share these, because they can do it on Telegram without being noticed, since it's an encrypted chat app.
Elamin: Melissa, let's talk about deepfakes in general, because as I mentioned, deepfakes and non-consensual images are not a new problem. Can you contextualise how far back this goes and how it escalated to this point?
Melissa: They've been around since about 2017. The original use for deepfakes was putting women's faces into pornography. And back in the day, you had to have some technical know-how, a pretty good computer, and 20 or 30 good photos of whoever you wanted to deepfake. It was really hard and it took a lot of time.
But now with generative AI, it's become super easy and super cheap. You need one photo from someone's Instagram and you can generate something really passable, even in a video format, which in the past has been really hard.
Elamin: The Verge reported that one of the deepfakes was viewed more than 45 million times. It stayed up for 17 hours before X removed it.
Melissa, the last time we had you on the show, we talked about deepfakes and AI-generated imagery in pornography. How does it make you feel that fake images of Taylor Swift are the thing that pushed this issue to the foreground?
Melissa: You know what? I'll take anything at this point. It's been going on for so long and it's so serious. If this is the moment where we can actually see some change, fantastic. It really breaks my heart to see cases in Europe and in the United States where actual children, like teenagers, have had to face deepfake nudes of themselves, and that sort of interested authorities, but not to the same level as this. So this really feels like a moment where we could do something.
You can listen to the full discussion from today's show on CBC Listen or on our podcast, Commotion with Elamin Abdelmahmoud, available wherever you get your podcasts.
Panel produced by Jane van Koeverden