In a new series by CBC Podcasts, hosted by 404 Media's Sam Cole, join journalists, investigators, and targets of non-consensual intimate images on the hunt for the world's most prolific deepfake mastermind.



New Podcast Alert: The Globe-Spanning, Multi-Newsroom Hunt for Mr. Deepfakes


Mr. Deepfakes was the biggest website in the world for sharing AI-generated abuse imagery, swapping tips and tricks for more realistic results, and posting endless, fake, nonconsensual videos of everyone from celebrities to everyday people. In a new podcast by the CBC, I got to tell the tale of how deepfakes started, what targets go through, and where we go next.

It's called Understood: Deepfake Porn Empire. It's about the decades-long rise of non-consensual deepfake porn, the targets who are fighting back, and what it takes to stop its proliferation. Check it out here and listen wherever you get your podcasts.

The first three episodes are already up, so you can binge them all before the finale next Tuesday.



In the first episode, "The Dawn of Fake Porn," you’ll get a fascinating history of the decades of cultural and technological standards that set the stage for AI-generated nonconsensual imagery as we know it today. I learned a lot in this episode myself, including about a guy who went by “Lux Lucre” who ran two Usenet groups dedicated to fake nudes of celebrities in the 90s. This stuff goes so much farther back than you might realize.

In episode two, “So You’ve Been Deepfaked,” I got the chance to talk to Taylor, who discovered she’d been targeted by AI images while at university, working in a male-dominated field. Instead of hoping it’d go away, she set out to find her harasser, and found his other targets in the process. It all led back to one place: the biggest deepfake site in the world, Mr. Deepfakes.

Episode three just came out today: “The Notorious D.P.F.K.S.” is a romp through the investigative highs and lows that led a team of journalists scattered around the world to the door of Mr. Deepfakes himself. I was so thrilled to talk to investigative journalist Ida Herskind, OSINT specialist Zakaria Hameed, and Bellingcat’s Ross Higgins in this episode. Come for the How I Met Your Mother references, stay for the gripping chase.

Episode four, the series finale, launches next week. It’s a true crime story with CBC reporters on stakeouts and infiltrating hospitals, and legal and social experts breaking down what it all means now that we’re in a post-Mr. Deepfakes world—but far from a post-AI abuse landscape. Follow the Understood feed wherever you listen to get it when it comes out on Tuesday.

If you liked this season, head back to catch up on another series I hosted with the CBC: Pornhub Empire, on the rise and fall of the porn monolith.

Tune in and let me know what you think!


Kylie Brewer is no stranger to harassment online. But when people started using Grok-generated nudes of her on an OnlyFans account, it reached another level.



An analysis of how tools to make non-consensual sexually explicit deepfakes spread online, from the Institute for Strategic Dialogue, shows X and search engines surface these sites easily.


New Research Shows Deepfake Harassment Tools Spread on Social Media and Search Engines


A new analysis of synthetic intimate image abuse (SIIA) found that the tools for making non-consensual, sexually explicit deepfakes are easily discoverable all over social media and through simple searches on Google and Bing.

Research published by the counter-extremism organization Institute for Strategic Dialogue shows how tools for creating non-consensual deepfakes spread across the internet. The researchers analyzed 31 websites hosting SIIA tools and found that they received a combined 21 million visits a month, with a single site drawing up to four million visits in one month.

Chiara Puglielli and Anne Craanen, the authors of the research paper, used SimilarWeb to identify a common group of sites that shared content, audiences, keywords and referrals. They then used the social media monitoring tool Brandwatch to find mentions of those sites and tools on X, Reddit, Bluesky, YouTube, Tumblr, public pages on Instagram and Facebook, forums, blogs and review sites, according to the paper. “We found 410,592 total mentions of the keywords between 9 June 2020 and 3 July 2025, and used Brandwatch’s ability to separate mentions by source in order to find which sources hosted the highest volumes of mentions,” they wrote.

The easiest place to find SIIA tools was through simple web searches. “Searches on Google, Yahoo, and Bing all yielded at least one result leading the user to SIIA technology within the first 20 results when searching for ‘deepnude,’ ‘nudify,’ and ‘undress app,’” the authors wrote. Last year, 404 Media saw that Google was also advertising these apps in search results. But Bing surfaces the tools most readily: “In the case of Bing, the first results for all three searches were SIIA tools.” These counts excluded paid advertisements on the search engines; they were organic results surfaced by the engines’ crawlers and indexes.

X was another massively popular way these tools spread, they found: “Of 410,592 total mentions between June 2020 and July 2025, 289,660 were on X, accounting for more than 70 percent of all activity.” A lot of these were bots. “A large volume of traffic appeared to be inorganic, based on the repetitive style of the usernames, the uniformity of posts, and the uniformity of profile pictures,” Craanen told 404 Media. “Nevertheless, this activity remains concerning, as its volume is likely to attract new users to these tools, which can be employed for activities that are illegal in several contexts.”
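The "more than 70 percent" claim follows directly from the two counts quoted above; as a quick sanity check, using only the numbers reported in the ISD paper:

```python
# Back-of-the-envelope check of the "more than 70 percent" figure,
# using only the two counts reported in the ISD paper.
total_mentions = 410_592   # all mentions, June 2020 - July 2025
x_mentions = 289_660       # mentions found on X

x_share = x_mentions / total_mentions
print(f"X share of all mentions: {x_share:.1%}")  # roughly 70.5%
```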

One major spike in mentions of the tools on social media happened in early 2023 on Tumblr, when a woman posted about her experience being a target of sexual harassment from those very same tools. As targets of malicious deepfakes have said over and over again, the price of speaking up about one’s own harassment, or even objecting to the harassment of others, is the risk of drawing more attention and harassment to themselves.

‘I Want to Make You Immortal:’ How One Woman Confronted Her Deepfakes Harasser
“After discovering this content, I’m not going to lie… there are times it made me not want to be around any more either,” she said. “I literally felt buried.”
404 Media · Samantha Cole


Another spike on X in 2023 was likely the result of bot advertisements launching for a single SIIA tool, Craanen said. X has rules against “unwanted sexual conduct and graphic objectification” and “inauthentic media,” but the platform remains one of the most significant places where tools for making that content are disseminated and advertised.

Apps and sites for making malicious deepfakes have never been more common or easier to find. There have been several incidents where schoolchildren have used “undress” apps on their classmates, including last year when a Washington state high school was rocked by students using AI to take photos from other children’s Instagram accounts and “undress” around seven of their underage classmates, which police characterized as a possible sex crime against children. In 2023, police arrested two middle schoolers for allegedly creating and sharing AI-generated nude images of their 12- and 13-year-old classmates, and police reports showed the preteens used an application to make the images.

A recent report from the Center for Democracy and Technology found that 40 percent of students and 29 percent of teachers said they know of an explicit deepfake depicting people associated with their school being shared in the past school year.

Laws About Deepfakes Can’t Leave Sex Workers Behind
As lawmakers propose federal laws about preventing or regulating nonconsensual AI generated images, they can’t forget that there are at least two people in every deepfake.
404 Media · Samantha Cole


The “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks” (TAKE IT DOWN) Act, passed earlier this year, requires platforms to report and remove synthetic sexual abuse material. After years of state-by-state legislation around deepfake harassment, it is the first federal-level law to attempt to confront the problem. But critics of the law have said it carries a serious risk of chilling legitimate speech online.

“The persistence and accessibility of SIIA tools highlight the limits of current platform moderation and legal frameworks in addressing this form of abuse. Relevant laws relating to takedowns are not yet in full effect across the jurisdictions analysed, so the impact of this legislation cannot yet be fully known,” the ISD authors wrote. “However, the years of public awareness and regulatory discussion around these tools, combined with the ease with which users can still discover, share and deploy these technologies suggests that takedowns cannot be the only tool used to counter their proliferation. Instead, effective mitigation requires interventions at multiple points in the SIIA life cycle—disrupting not only distribution but also discovery and demand. Stronger search engine safeguards, proactive content-blocking on major platforms, and coordinated international policies are essential to reducing the scale of harm.”