Chatbot roleplay and image generator platform SecretDesires.ai left cloud storage containers of nearly two million images and videos exposed, including photos and full names of women pulled from social media, photos of women at their workplaces, graduating from universities, taking selfies on vacation, and more.
Massive Leak Shows Erotic Chatbot Users Turned Women’s Yearbook Pictures Into AI Porn
An erotic roleplay chatbot and AI image creation platform called Secret Desires left millions of user-uploaded photos exposed and available to the public. The exposed containers included nearly two million photos and videos, including many photos of completely random people with very little digital footprint.

The exposed data shows what many people use AI roleplay apps with face-swapping features for: creating nonconsensual sexual imagery of everyone from the most famous entertainers in the world to women who are not public figures in any way. In addition to the real photos used as inputs, the exposed data includes AI-generated outputs, which are mostly sexual and often incredibly graphic. Unlike “nudify” apps, which generate still nude images of real people, this platform puts people into AI-generated videos of hardcore sexual scenarios.
Secret Desires is a browser-based platform similar to Character.ai or Meta’s AI avatar creation tool, which generates personalized chatbots and images based on user prompting. Earlier this year, as part of its paid subscriptions ranging from $7.99 to $19.99 a month, it had a “face swapping” feature that let users upload images of real people to put them in sexually explicit AI-generated images and videos. These uploads, viewed by 404 Media, make up a large part of what was exposed publicly, and based on the dates of the files, they were potentially exposed for months.
About an hour after 404 Media contacted Secret Desires on Monday to alert the company to the exposed containers and ask for comment, the files became inaccessible. However, Secret Desires and Jack Simmons, CEO of its parent company Playhouse Media, did not respond to my questions, including why these containers weren’t secured and how long they were exposed.
💡 Do you have a tip about AI and porn? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

The platform was storing images and videos in unsecured Microsoft Azure Blob containers, where anyone could access XML listings of the containers’ contents, follow the links to the files, and go through the data inside. A container labeled “removed images” held around 930,000 images, many of recognizable celebrities and very young-looking women; a container named “faceswap” held 50,000 images; and one named “live photos,” referring to short AI-generated videos, held 220,000 videos. A number of the images are duplicates with different file names, or show the same person from different angles or crops, but in total there were nearly 1.8 million individual files in the containers viewed by 404 Media.
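The story doesn’t specify exactly how these listings were retrieved, but the behavior described matches Azure’s standard List Blobs operation, which answers anonymous requests when a container’s public access level permits listing. Below is a minimal sketch of that kind of enumeration, assuming a hypothetical storage account name; only the “faceswap” container name comes from the reporting above.

```python
# A minimal sketch of anonymously enumerating a publicly listable Azure Blob
# container. The storage account name is a hypothetical placeholder, not the
# one involved in this story.
import requests
import xml.etree.ElementTree as ET

ACCOUNT = "exampleaccount"  # hypothetical storage account name
CONTAINER = "faceswap"      # one of the container names reported in the story

# Azure's List Blobs operation: public containers answer this request without
# any credentials and return an XML document describing their contents.
url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}?restype=container&comp=list"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

# Each <Blob> element carries a <Name>, which maps to a directly
# downloadable URL of the form account/container/name.
root = ET.fromstring(resp.content)
for blob in root.iter("Blob"):
    name = blob.findtext("Name")
    print(f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{name}")
```

With a container set to private access, which is Azure’s default, both the listing request and the individual blob URLs refuse anonymous requests; exposure like the one described here requires the container to have been configured for public access.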
The photos in the removed images and faceswap datasets are overwhelmingly real photos (meaning, not AI-generated) of women, including adult performers, influencers, and celebrities, but also photos of women who are definitely not famous. The datasets also include many photos that look like they were taken from women’s social media profiles, like selfies taken in bedrooms or smiling profile photos.
In the faceswap container, I found a file photo of a state representative speaking in public, mirror selfies women seemingly took years ago with flip phones and BlackBerrys, screenshots of selfies from Snapchat, a photo of a woman posing with her university degree, and a yearbook photo. Some of the file names include the full first and last names of the women pictured. These and many more photos sit in the exposed files alongside images stolen from adult content creators’ videos and websites and screenshots of actors from films. Their presence in this container means someone uploaded these photos to the Secret Desires face-swapping feature, likely to make explicit images of the women: that’s what the platform advertises itself as being built for, and a large amount of the exposed content is sexual imagery.
Some of the faces in the faceswap container are recognizable in the generations in the “live photos” container, which appears to hold outputs generated by Secret Desires and consists almost entirely of hardcore pornographic AI-generated videos. In this container, multiple videos feature extremely young-looking people having sex.
Related: ‘I Want to Make You Immortal:’ How One Woman Confronted Her Deepfakes Harasser (Samantha Cole, 404 Media)
In early 2025, Secret Desires removed its face-swapping feature. The most recent date in the faceswap files is April 2025. This tracks with Reddit comments from the same time, where users complained that Secret Desires “dropped” the face-swapping feature. “I canceled my membership to SecretDesires when they dropped the Faceswap. Do you know if there’s another site comparable? Secret Desires was amazing for image generation,” one user said in a thread about looking for alternatives to the platform. “I was part of the beta testing and the faceswop was great. I was able to upload pictures of my wife and it generated a pretty close,” another replied. “Shame they got rid of it.”

In the Secret Desires Discord channel, where people discuss how they’re using the app, users noticed that the platform still listed “face swapping” as a paid feature as of November 3. As of writing, on November 11, face swapping isn’t listed in the subscription features anymore. Secret Desires still advertises itself as a “spicy chatting” platform where you can make your own personalized AI companion, and it has a voice cloning mode, where users can upload an audio file of someone speaking to clone their voice in audio chat modes.
On its site, Secret Desires says it uses end-to-end encryption to secure communications from users: “All your communications—including messages, voice calls, and image exchanges—are encrypted both at rest and in transit using industry-leading encryption standards. This ensures that only you have access to your conversations.” It also says it stores data securely: “Your data is securely stored on protected servers with stringent access controls. We employ advanced security protocols to safeguard your information against unauthorized access.”
The prompts exposed in some of the file names also reveal how some people use Secret Desires. Several prompts in the faceswap container, visible as file names, showed that users’ “secret desire” was to generate images of underage girls: “17-year-old, high school junior, perfect intricate detail innocent face,” several prompts said, along with names of young female celebrities. We know from hacks of other “AI girlfriend” platforms that this is a popular demand on these tools; Secret Desires specifically says in its terms of use that it forbids generating underage images.
Screenshot of a former version of the subscription offerings on SecretDesires.ai, via Discord. Edits by the user
Secret Desires runs advertisements on YouTube where it markets the platform’s ability to create sexualized versions of real people you encounter in the world. “AI girls never say no,” an AI-generated woman says in one of Secret Desires’ YouTube Shorts. “I can look like your favorite celebrity. That girl from the gym. Your dream anime character or anyone else you fantasize about? I can do everything for you.” Most of Secret Desires’ ads on YouTube are about giving up on real-life connections and dating apps in favor of getting an AI girlfriend. “What if she could be everything you imagined? Shape her style, her personality, and create the perfect connection just for you,” one says. Other ads proclaim that in an ideal reality, your therapist, best friend, and romantic partner could all be AI. Most of Secret Desires’ marketing features young, lonely men as the users.
We know from years of research into face-swapping apps, AI companion apps, and erotic roleplay platforms that there is real demand for these tools, and a real risk that stalkers and abusers will use them to make images of exes, acquaintances, and random women they want to see nude or having sex. These tools are accessible and advertised all over social media, and children find them easily and use them to create child sexual abuse material of their classmates. When people make sexually explicit deepfakes of others without their consent, the aftermath for their targets is often devastating; it impacts their careers, their self-confidence, and in some cases, their physical safety. Because Secret Desires left this content in the open and mishandled its users’ data, we have a clear look at how people use generative AI to sexually fantasize about the women around them, whether those women know their photos are being used or not.

Related: A Deepfake Nightmare: Stalker Allegedly Made Sexual AI Images of Ex-Girlfriends and Their Families (Samantha Cole, 404 Media)
An analysis of how tools to make non-consensual sexually explicit deepfakes spread online, from the Institute for Strategic Dialogue, shows X and search engines surface these sites easily.
New Research Shows Deepfake Harassment Tools Spread on Social Media and Search Engines
A new analysis of synthetic intimate image abuse (SIIA) found that the tools for making non-consensual, sexually explicit deepfakes are easily discoverable all over social media and through simple searches on Google and Bing.

Research published by the counter-extremism organization Institute for Strategic Dialogue (ISD) shows how tools for creating non-consensual deepfakes spread across the internet. The researchers analyzed 31 websites for SIIA tools and found that they received a combined 21 million visits a month, with a single site drawing up to four million visits in one month.
Chiara Puglielli and Anne Craanen, the authors of the research paper, used SimilarWeb to identify a common group of sites that shared content, audiences, keywords and referrals. They then used the social media monitoring tool Brandwatch to find mentions of those sites and tools on X, Reddit, Bluesky, YouTube, Tumblr, public pages on Instagram and Facebook, forums, blogs and review sites, according to the paper. “We found 410,592 total mentions of the keywords between 9 June 2020 and 3 July 2025, and used Brandwatch’s ability to separate mentions by source in order to find which sources hosted the highest volumes of mentions,” they wrote.
The easiest place to find SIIA tools was through simple web searches. “Searches on Google, Yahoo, and Bing all yielded at least one result leading the user to SIIA technology within the first 20 results when searching for ‘deepnude,’ ‘nudify,’ and ‘undress app,’” the authors wrote. Last year, 404 Media saw that Google was also advertising these apps in search results. But Bing surfaces the tools most readily: “In the case of Bing, the first results for all three searchers were SIIA tools.” These results didn’t count advertisements the websites would have paid for on the search engines; they were organic results surfaced by the engines’ crawlers and indexing.
X was another massively popular way these tools spread, they found: “Of 410,592 total mentions between June 2020 and July 2025, 289,660 were on X, accounting for more than 70 percent of all activity.” A lot of these were bots. “A large volume of traffic appeared to be inorganic, based on the repetitive style of the usernames, the uniformity of posts, and the uniformity of profile pictures,” Craanen told 404 Media. “Nevertheless, this activity remains concerning, as its volume is likely to attract new users to these tools, which can be employed for activities that are illegal in several contexts.”
One major spike in mentions of the tools on social media happened in early 2023 on Tumblr, when a woman posted about her experience being a target of sexual harassment from those very same tools. As targets of malicious deepfakes have said over and over again, the price of speaking up about one’s own harassment, or even objecting to the harassment of others, is the risk of drawing more attention and harassment to themselves.
Another spike on X in 2023 was likely the result of bot advertisements launching for a single SIIA tool, Craanen said. X has rules against “unwanted sexual conduct and graphic objectification” and “inauthentic media,” but the platform remains one of the most significant places where tools for making that content are disseminated and advertised.

Apps and sites for making malicious deepfakes have never been more common or easier to find. There have been several incidents where schoolchildren have used “undress” apps on their classmates, including last year, when a Washington state high school was rocked by students using AI to take photos from other children’s Instagram accounts and “undress” around seven of their underage classmates, which police characterized as a possible sex crime against children. In 2023, police arrested two middle schoolers for allegedly creating and sharing AI-generated nude images of their 12- and 13-year-old classmates, and police reports showed the preteens used an application to make the images.
A recent report from the Center for Democracy and Technology found that 40 percent of students and 29 percent of teachers said they know of an explicit deepfake depicting people associated with their school being shared in the past school year.
Related: Laws About Deepfakes Can’t Leave Sex Workers Behind (Samantha Cole, 404 Media)
The “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks” (TAKE IT DOWN) Act, passed earlier this year, requires platforms to report and remove synthetic sexual abuse material, and, after years of state-by-state legislation around deepfake harassment, it is the first federal-level law to attempt to confront the problem. But critics of that law have said it carries a serious risk of chilling legitimate speech online.

“The persistence and accessibility of SIIA tools highlight the limits of current platform moderation and legal frameworks in addressing this form of abuse. Relevant laws relating to takedowns are not yet in full effect across the jurisdictions analysed, so the impact of this legislation cannot yet be fully known,” the ISD authors wrote. “However, the years of public awareness and regulatory discussion around these tools, combined with the ease with which users can still discover, share and deploy these technologies suggests that takedowns cannot be the only tool used to counter their proliferation. Instead, effective mitigation requires interventions at multiple points in the SIIA life cycle—disrupting not only distribution but also discovery and demand. Stronger search engine safeguards, proactive content-blocking on major platforms, and coordinated international policies are essential to reducing the scale of harm.”
Michigan just became the 48th state to enact a law addressing deepfakes, imposing jail time and penalties up to the felony level for people who make AI-generated nonconsensual abuse imagery of a real person.
Almost Every State Has Its Own Deepfakes Law Now
It’s now illegal in Michigan to make AI-generated sexual imagery of someone without their written consent. Michigan joins 47 other states in the U.S. that have enacted their own deepfake laws.

Michigan Governor Gretchen Whitmer signed the bipartisan-sponsored House Bill 4047 and its companion bill 4048 on August 26. In a press release, Whitmer specifically called out the sexual uses for deepfakes. “These videos can ruin someone’s reputation, career, and personal life. As such, these bills prohibit the creation of deep fakes that depict individuals in sexual situations and creates sentencing guidelines for the crime,” the press release states. That’s something we’ve seen time and time again with victims of deepfake harassment, who’ve told us over the six years since consumer-level deepfakes first hit the internet that sexual harassment has always been this technology’s most popular use, driven by carelessness and vindictiveness toward the women its users target.
Making a deepfake of someone is now a misdemeanor in Michigan, punishable by imprisonment of up to one year and fines of up to $3,000, if the creator “knew or reasonably should have known that the creation, distribution, dissemination, or reproduction of the deep fake would cause physical, emotional, reputational, or economic harm to an individual falsely depicted,” and if the deepfake depicts the target engaging in a sexual act and is identifiable “by a reasonable individual viewing or listening to the deep fake,” the law states.
This is all before the deepfake’s creator posts it online. It escalates to a felony if the person depicted suffers financial loss, if the person making the deepfake intended to profit off of it, if that person maintains a website or app for the purpose of creating deepfakes, if they posted it to any website at all, if they intended to “harass, extort, threaten, or cause physical, emotional, reputational, or economic harm to the depicted individual,” or if they have a previous conviction.

💡 Have you been targeted by deepfake harassment, or have you made deepfakes of real people? Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

The law specifically says that it isn’t to be construed to make platforms liable; liability falls on the person making the deepfakes. But we already have a federal law in place that makes platforms liable: the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks, or TAKE IT DOWN, Act, introduced by Ted Cruz in June 2024 and signed into law in May this year, makes platforms liable for failing to moderate deepfakes and imposes extremely short timelines for acting on AI-generated abuse imagery reports from users. That law has drawn a lot of criticism from civil liberties and online speech activists for being overbroad; as the Verge pointed out before it became law, because the Trump administration’s FTC is in charge of enforcing it, it could easily become a weapon against all sorts of speech, including constitutionally protected speech.
"Platforms that feel confident that they are unlikely to be targeted by the FTC (for example, platforms that are closely aligned with the current administration) may feel emboldened to simply ignore reports of NCII,” the Cyber Civil Rights Initiative told the Verge in April. “Platforms attempting to identify authentic complaints may encounter a sea of false reports that could overwhelm their efforts and jeopardize their ability to operate at all."
“If you do not have perfect technology to identify whatever it is we’re calling a deepfake, you are going to get a lot of guessing being done by the social media companies, and you’re going to get disproportionate amounts of censorship,” especially for marginalized groups, Kate Ruane, an attorney and director of the Center for Democracy and Technology’s Free Expression Project, told me in June 2024. “For a social media company, it is not rational for them to open themselves up to that risk, right? It’s simply not. And so my concern is that any video with any amount of editing, which is like every single TikTok video, is then banned for distribution on those social media sites.”

On top of the TAKE IT DOWN Act, at the state level, deepfake laws are either pending or enacted in every state except New Mexico and Missouri. In some states, like Wisconsin, the law only protects minors from deepfakes by expanding child sexual abuse imagery laws.
Even as deepfakes legislation seems to finally catch up to the notion that AI-generated sexual abuse imagery is abusive, reporting this kind of harassment to authorities or pursuing civil action against one’s own abuser is still difficult, expensive, and re-traumatizing in most cases.