New Research Shows Deepfake Harassment Tools Spread on Social Media and Search Engines


An analysis from the Institute for Strategic Dialogue of how tools for making non-consensual, sexually explicit deepfakes spread online shows that X and search engines surface these sites easily.

A new analysis of synthetic intimate image abuse (SIIA) found that the tools for making non-consensual, sexually explicit deepfakes are easily discoverable all over social media and through simple searches on Google and Bing.

Research published by the counter-extremism organization Institute for Strategic Dialogue shows how tools for creating non-consensual deepfakes spread across the internet. The researchers analyzed 31 websites offering SIIA tools and found that they received a combined 21 million visits a month, with one site alone receiving up to four million visits in a single month.

Chiara Puglielli and Anne Craanen, the authors of the research paper, used SimilarWeb to identify a common group of sites that shared content, audiences, keywords and referrals. They then used the social media monitoring tool Brandwatch to find mentions of those sites and tools on X, Reddit, Bluesky, YouTube, Tumblr, public pages on Instagram and Facebook, forums, blogs and review sites, according to the paper. “We found 410,592 total mentions of the keywords between 9 June 2020 and 3 July 2025, and used Brandwatch’s ability to separate mentions by source in order to find which sources hosted the highest volumes of mentions,” they wrote.
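That source-by-source tally is simple to reproduce from any mention export. Here is a minimal sketch in Python; the file name and the “date” and “source” column names are assumptions for illustration, since the paper doesn’t describe Brandwatch’s actual export format:

```python
# Minimal sketch: tally mentions by source from a hypothetical CSV export.
# The file name and column names ("date", "source") are assumptions; the
# paper does not describe Brandwatch's export format.
import csv
from collections import Counter
from datetime import date

START, END = date(2020, 6, 9), date(2025, 7, 3)  # window used in the paper

def mentions_by_source(path: str) -> Counter:
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            when = date.fromisoformat(row["date"])
            if START <= when <= END:
                counts[row["source"]] += 1
    return counts

if __name__ == "__main__":
    by_source = mentions_by_source("mentions_export.csv")  # hypothetical file
    total = sum(by_source.values())
    for source, n in by_source.most_common():
        print(f"{source}: {n} ({n / total:.1%})")
```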

The easiest place to find SIIA tools was through simple web searches. “Searches on Google, Yahoo, and Bing all yielded at least one result leading the user to SIIA technology within the first 20 results when searching for ‘deepnude,’ ‘nudify,’ and ‘undress app,’” the authors wrote. Last year, 404 Media saw that Google was also advertising these apps in search results. But Bing surfaces the tools most readily: “In the case of Bing, the first results for all three searches were SIIA tools.” These results were not paid advertisements placed by the websites, but organic search results surfaced by the engines’ crawlers and indexing.

X was another massively popular way these tools spread, they found: “Of 410,592 total mentions between June 2020 and July 2025, 289,660 were on X, accounting for more than 70 percent of all activity.” A lot of these were bots. “A large volume of traffic appeared to be inorganic, based on the repetitive style of the usernames, the uniformity of posts, and the uniformity of profile pictures,” Craanen told 404 Media. “Nevertheless, this activity remains concerning, as its volume is likely to attract new users to these tools, which can be employed for activities that are illegal in several contexts.”
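A quick check of the arithmetic behind the quoted share:

```python
# Quick check of the figures quoted in the paper.
x_mentions = 289_660
total_mentions = 410_592
print(f"{x_mentions / total_mentions:.1%}")  # 70.5%, i.e. "more than 70 percent"
```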

One major spike in mentions of the tools on social media happened in early 2023 on Tumblr, when a woman posted about her experience being targeted with sexual harassment made using those very same tools. As targets of malicious deepfakes have said over and over again, speaking up about one’s own harassment, or even objecting to the harassment of others, risks drawing more attention and harassment to oneself.

‘I Want to Make You Immortal:’ How One Woman Confronted Her Deepfakes Harasser
“After discovering this content, I’m not going to lie… there are times it made me not want to be around any more either,” she said. “I literally felt buried.”
404 Media · Samantha Cole


Another spike on X in 2023 was likely the result of bot advertisements launching for a single SIIA tool, Craanen said. X has rules against “unwanted sexual conduct and graphic objectification” and “inauthentic media,” but the platform remains one of the most significant places where tools for making that content are disseminated and advertised.

Apps and sites for making malicious deepfakes have never been more common or easier to find. There have been several incidents of schoolchildren using “undress” apps on their classmates, including last year, when a Washington state high school was rocked by students using AI to take photos from other children’s Instagram accounts and “undress” around seven of their underage classmates, which police characterized as a possible sex crime against children. In 2023, police arrested two middle schoolers for allegedly creating and sharing AI-generated nude images of their 12- and 13-year-old classmates; police reports showed the preteens used an app to make the images.

A recent report from the Center for Democracy and Technology found that 40 percent of students and 29 percent of teachers said they know of an explicit deepfake depicting people associated with their school being shared in the past school year.

Laws About Deepfakes Can’t Leave Sex Workers Behind
As lawmakers propose federal laws about preventing or regulating nonconsensual AI generated images, they can’t forget that there are at least two people in every deepfake.
404 Media · Samantha Cole


The “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks” (TAKE IT DOWN) Act, passed earlier this year, requires platforms to remove reported synthetic sexual abuse material, and, after years of state-by-state legislation around deepfake harassment, it is the first federal law to attempt to confront the problem. But critics of that law have said it carries a serious risk of chilling legitimate speech online.

“The persistence and accessibility of SIIA tools highlight the limits of current platform moderation and legal frameworks in addressing this form of abuse. Relevant laws relating to takedowns are not yet in full effect across the jurisdictions analysed, so the impact of this legislation cannot yet be fully known,” the ISD authors wrote. “However, the years of public awareness and regulatory discussion around these tools, combined with the ease with which users can still discover, share and deploy these technologies suggests that takedowns cannot be the only tool used to counter their proliferation. Instead, effective mitigation requires interventions at multiple points in the SIIA life cycle—disrupting not only distribution but also discovery and demand. Stronger search engine safeguards, proactive content-blocking on major platforms, and coordinated international policies are essential to reducing the scale of harm.”


Almost Every State Has Its Own Deepfakes Law Now


It’s now illegal in Michigan to make AI-generated sexual imagery of someone without their written consent. Michigan joins 47 other states in the U.S. that have enacted their own deepfake laws.

Michigan Governor Gretchen Whitmer signed the bipartisan-sponsored House Bill 4047 and its companion, House Bill 4048, on August 26. In a press release, Whitmer specifically called out the sexual uses of deepfakes. “These videos can ruin someone’s reputation, career, and personal life. As such, these bills prohibit the creation of deep fakes that depict individuals in sexual situations and creates sentencing guidelines for the crime,” the press release states. That matches what victims of deepfake harassment have told us time and time again over the six years since consumer-level deepfakes first hit the internet: sexual harassment, fueled by carelessness and vindictiveness toward the women its users target, has always been the technology’s most popular use.

Making a deepfake of someone is now a misdemeanor in Michigan, punishable by up to one year of imprisonment and fines of up to $3,000, if the creator “knew or reasonably should have known that the creation, distribution, dissemination, or reproduction of the deep fake would cause physical, emotional, reputational, or economic harm to an individual falsely depicted,” and if the deepfake depicts the target engaging in a sexual act and the target is identifiable “by a reasonable individual viewing or listening to the deep fake,” the law states.

This is all before the deepfake’s creator posts it online. It escalates to a felony if the person depicted suffers financial loss, if the person making the deepfake intended to profit from it, if that person maintains a website or app for the purpose of creating deepfakes, if they posted it to any website at all, if they intended to “harass, extort, threaten, or cause physical, emotional, reputational, or economic harm to the depicted individual,” or if they have a previous conviction.

💡
Have you been targeted by deepfake harassment, or have you made deepfakes of real people? Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

The law specifically says it isn’t to be construed as making platforms liable; liability falls on the person making the deepfakes. But we already have a federal law in place that makes platforms liable: the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act, introduced by Ted Cruz in June 2024 and signed into law in May this year, makes platforms liable for failing to moderate deepfakes and imposes extremely short timelines for acting on users’ reports of AI-generated abuse imagery. That law has drawn a lot of criticism from civil liberties and online speech activists for being overbroad; as The Verge pointed out before it became law, because the Trump administration’s FTC is in charge of enforcing it, it could easily become a weapon against all sorts of speech, including constitutionally protected free speech.

"Platforms that feel confident that they are unlikely to be targeted by the FTC (for example, platforms that are closely aligned with the current administration) may feel emboldened to simply ignore reports of NCII,” the Cyber Civil Rights Initiative told the Verge in April. “Platforms attempting to identify authentic complaints may encounter a sea of false reports that could overwhelm their efforts and jeopardize their ability to operate at all."

A Deepfake Nightmare: Stalker Allegedly Made Sexual AI Images of Ex-Girlfriends and Their Families
An Ohio man is accused of making violent, graphic deepfakes of women with their fathers, and of their children. Device searches revealed he searched for “undress” apps and “ai porn.”
404 Media · Samantha Cole


“If you do not have perfect technology to identify whatever it is we're calling a deepfake, you are going to get a lot of guessing being done by the social media companies, and you're going to get disproportionate amounts of censorship,” especially for marginalized groups, Kate Ruane, an attorney and director of the Center for Democracy and Technology’s Free Expression Project, told me in June 2024. “For a social media company, it is not rational for them to open themselves up to that risk, right? It's simply not. And so my concern is that any video with any amount of editing, which is like every single TikTok video, is then banned for distribution on those social media sites.”

On top of the TAKE IT DOWN Act, deepfake laws are either pending or enacted in every state except New Mexico and Missouri. In some states, like Wisconsin, the law only protects minors from deepfakes, by expanding child sexual abuse imagery laws.

Even as deepfakes legislation finally seems to catch up to the notion that AI-generated sexual abuse imagery is abusive, reporting this kind of harassment to authorities or pursuing civil action against one’s abuser remains difficult, expensive, and re-traumatizing in most cases.