In the latest in a string of privacy abuses from the chatbot, Grok provided porn performer Siri Dahl's full legal name and birthdate to the public, information she'd protected until now.
Grok Exposed a Porn Performer’s Legal Name and Birthdate—Without Even Being Asked
Porn performer Siri Dahl’s personal information, including her full legal name and birthday, was publicly exposed earlier this month by xAI’s Grok chatbot. Almost instantly, harassers started opening Facebook accounts in her name and posting stolen porn clips with her real name on sites for leaking OnlyFans content.

Dahl has used the name — a nod to her Scandinavian heritage — since the beginning of her career in the adult industry in 2012. Now, Grok is revealing her legal name and all the personal information it can find to whoever happens to ask.
Dahl told 404 Media she wanted to reclaim the situation, and her name, and asked that it be published in this piece as part of that goal.
Dahl first noticed this happening last week, she told 404 Media, after a clip of the performer from a porn scene was making its rounds on X. The scene was incorrectly labeled, so someone on X replied, “Who is she? What is her name?” and tagged @Grok to get an answer.
Grok answered, “she appears to be Siri Dahl, an American adult film actress born on June 20, 1988. Her real name is Adrienne Esther Manlove.” Grok provided her personal information unprompted; the user likely only wanted to know which performer appeared in the clip.
This is the latest in a series of abuses inflicted by Grok, xAI, and its users. At the end of 2025, people used Grok to produce thousands of images of nonconsensual sexual content, including images depicting children. The problem was so widespread that the UK’s Ofcom and several attorneys general launched or demanded investigations into X and Grok, and police raided X’s offices in France as part of an investigation into child sexual abuse material on the platform.
X strictly prohibits sharing other people’s personal information without their consent. “Sharing someone’s private information online without their permission, sometimes called ‘doxxing,’ is a breach of their privacy and can pose serious safety and security risks for those affected,” the platform’s terms of use state. But X’s own chatbot is doing it anyway.
Screenshot via X
While there have been some close calls, up until now Dahl had managed to keep her personal information private. “I've been paying for data removal services for like, at least six years now,” Dahl said. She said she’s spent “easily” thousands of dollars on those services, which promise to delete personal and potentially dangerous information as it comes up.

Grok is trained on X users’ posts, as well as data scraped from the wider internet. X’s website says “Grok was pre-trained by xAI on a variety of data from publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers.” Dahl said she doesn’t know where Grok originally got her legal name from. But now that it’s part of the system’s internal dataset, she feels like there’s no coming back; her days of pseudonymity are over.
‘The Most Dejected I’ve Ever Felt:’ Harassers Made Nude AI Images of Her, Then Started an OnlyFans
Kylie Brewer is no stranger to harassment online. But when people started using Grok-generated nudes of her on an OnlyFans account, it reached another level. (404 Media, Samantha Cole)
“Now that it's been crawled, it's everywhere. There are a ton of Facebook accounts that come up that are pretending to be me, using my real name,” Dahl said. “There are now porn leak sites that are posting porn of me using only my legal name, not even putting my stage name on it.”

Users are now asking Grok for the make and model of Dahl’s car, her address, and other dangerous personal information. While it hasn’t been able to accurately reply yet, she worries it’s only a matter of time.
But Dahl isn’t the only person affected by the fallout.
“I do everything that I can reasonably within my power to keep my legal name private, and my main motivation for doing that is to reduce any chance of my family getting harassed,” she said. “It's really common for people to look up private information, get parents' phone numbers and start calling and harassing the parents, things like that. I've been able to keep my family safe from that kind of thing for years.”
Now, Dahl is having to call her family and put defensive plans in place.
In violating Dahl’s right to privacy, X’s Grok has destroyed her ability to protect herself and her family online. Doxxing her provides no value to X users, which is ostensibly Grok’s goal. The original inquiry only wanted to know how to find more of her work, for which her stage name was the most useful answer.
“What would the motivation be for anyone to want to know my personal information, other than to harass and cause harm?” Dahl said.
In this ongoing discussion of “internet safety,” it is important to pay attention to who is being protected. Certainly not the users, the marginalized workers, or the young women. Not Dahl, or her family.
While the right to privacy online continues to be debated, it’s important to remember that privacy exists not only for bad actors and shady characters. Historically, marginalized populations have benefited from internet anonymity the most.
X did not respond to a request for comment.
X offices raided in France as UK opens fresh investigation into Grok
Elon Musk's X and Grok platforms are facing increased scrutiny from authorities on both sides of the Channel. (Liv McMahon, BBC News)
Kylie Brewer is no stranger to harassment online. But when people started using Grok-generated nudes of her on an OnlyFans account, it reached another level.
'The Most Dejected I’ve Ever Felt:' Harassers Made Nude AI Images of Her, Then Started an OnlyFans
In the first week of January, Kylie Brewer started getting strange messages.

“Someone has a only fans page set up in your name with this same profile,” one direct message from a stranger on TikTok said. “Do you have 2 accounts or is someone pretending to be you,” another said. And from a friend: “Hey girl I hate to tell you this, but I think there’s some picture of you going around. Maybe AI or deep fake but they don’t look real. Uncanny valley kind of but either way I’m sorry.”
It was the first week of January, during the frenzy of people using xAI’s chatbot and image generator Grok to create images of women and children partially or fully nude in sexually explicit scenarios. Between the last week of 2025 and the first week of 2026, Grok generated about three million sexualized images, including 23,000 that appear to depict children, according to researchers at the Center for Countering Digital Hate. The UK’s Ofcom and several attorneys general have since launched or demanded investigations into X and Grok. Earlier this month, police raided X’s offices in France as part of the government’s investigation into child sexual abuse material on the platform.
Messages from strangers and acquaintances are often the first way targets of abuse imagery learn that images of them are spreading online. Not only is the material itself disturbing — everyone, it seems, has already seen it. Someone was making sexually explicit images of Brewer and then, according to her followers who sent her screenshots and links to the account, was uploading them to an OnlyFans and charging a subscription fee for them.
“It was the most dejected that I've ever felt,” Brewer told me in a phone call. “I was like, let's say I tracked this person down. Someone else could just go into X and use Grok and do the exact same thing with different pictures, right?”
@kylie.brewer
Please help me raise awareness and warn other women. We NEED to regulate AI… it’s getting too dangerous #leftist #humanrights #lgbtq #ai #saawareness
♬ original sound - Kylie Brewer💝

Brewer is a content creator whose work focuses on feminism, history, and education about those topics. She’s no stranger to online harassment. As an outspoken woman covering these and other issues through a leftist lens, she has faced the brunt of large-scale harassment campaigns for years, primarily from the “manosphere,” including “red pilled” incels and right-wing influencers with podcasts. But when people messaged her in early January about finding an OnlyFans page in her name, featuring her likeness, it felt like an escalation.
One of the AI generated images was based on a photo of her in a swimsuit from her Instagram, she said. Someone used AI to remove her clothing in the original photo. “My eyes look weird, and my hands are covering my face so it kind of looks like my face got distorted, and they very clearly tried to give me larger breasts, where it does not look like anything realistic at all,” Brewer said. Another image showed her in a seductive pose, kneeling or crawling, but wasn’t based on anything she’s ever posted online. Unlike the “nudify” one that relied on Grok, it seemed to be a new image made with a prompt or a combination of images.
Many of the people messaging her about the fake OnlyFans account were men trying to get access to it. By the time she clicked a link one of them sent of the account, it was already gone. OnlyFans prohibits deepfakes and impersonation accounts. The platform did not respond to a request for comment. But OnlyFans isn’t the only platform where this can happen: Non-consensual deepfake makers use platforms like Patreon to monetize abusive imagery of real people.
“I think that people assume, because the pictures aren't real, that it's not as damaging,” Brewer told me. “But if anything, this was worse because it just fills you with such a sense of lack of control and fear that they could do this to anyone. Children, women, literally anyone, someone could take a picture of you at the store, going grocery shopping, and ask AI or whatever to do this.”
A lack of control is something many targets of synthetic abuse imagery say they feel — and it can be especially intense for people who’ve experienced sexual abuse in real life. In 2023, after becoming the target of deepfake abuse imagery, popular Twitch streamer QTCinderella told me seeing sexual deepfakes of herself resurfaced past trauma. “You feel so violated…I was sexually assaulted as a child, and it was the same feeling,” she said at the time. “Like, where you feel guilty, you feel dirty, you feel like, ‘what just happened?’ And it’s bizarre that it makes that resurface. I genuinely didn’t realize it would.”
Other targets of deepfake harassment also feel like this could happen anytime, anywhere, whether you’re at the grocery store or posting photos of your body online. For some, it makes it harder to get jobs or have a social life; the fear that anyone could be your harasser is constant. “It's made me incredibly wary of men, which I know isn't fair, but [my harasser] could literally be anyone,” Joanne Chew, another woman who dealt with severe deepfake harassment for months, told me last year. “And there are a lot of men out there who don't see the issue. They wonder why we aren't flattered for the attention.”
‘I Want to Make You Immortal:’ How One Woman Confronted Her Deepfakes Harasser
“After discovering this content, I’m not going to lie… there are times it made me not want to be around any more either,” she said. “I literally felt buried.” (404 Media, Samantha Cole)
Brewer’s income is dependent on being visible online as a content creator. Logging off isn’t an option. And even for people who aren’t dependent on TikTok or Instagram for their income, removing oneself from online life is a painful and isolating tradeoff that they shouldn’t have to make to avoid being harassed. Often, minimizing one’s presence and accomplishments doesn’t even stop the harassment.

Since AI-generated face-swapping algorithms became accessible at the consumer level in late 2017, the technology has only gotten better and more realistic, and its effects on targets harder to combat. It was always used for this purpose: to shame and humiliate women online. Over the years, various laws have attempted to protect victims or hold platforms accountable for non-consensual deepfakes, but most of them have either fallen short or introduced new risks of censorship, marginalizing legal, consensual sexual speech and content online. The TAKE IT DOWN Act, championed by Ted Cruz and Melania Trump, passed into law in April 2025 as the first federal-level legislation to address deepfakes; the law imposes a strict 48-hour turnaround requirement on platforms to remove reported content. President Donald Trump said that he would use the law, because “nobody gets treated worse online” than him. And in January, the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act passed the Senate and is headed to the House. The act would allow targets of deepfake harassment to sue the people making the content. But taking someone to court has always been a major barrier for everyday people experiencing harassment online; it’s expensive and time-consuming even if they can pinpoint their abuser. In many cases, including Brewer’s, this is impossible—it could be an army of people set on making her life miserable.
“It feels like any remote sense of privacy and protection that you could have as a woman is completely gone and that no one cares,” Brewer said. “It’s genuinely such a dehumanizing and horrible experience that I wouldn't wish on anyone... I’m hoping also, as there's more visibility that comes with this, maybe there’s more support, because it definitely is a very lonely and terrible place to be — on the internet as a woman right now.”
Senate passes DEFIANCE Act to deal with sexually explicit deepfakes
The DEFIANCE Act goes to the House amid controversy over images created by X’s Grok. (Jasmine Mithani, 19th News)
With xAI's Grok generating endless semi-nude images of women and girls without their consent, the chatbot follows a years-long legacy of rampant abuse on the platform.
Grok's AI Sexual Abuse Didn't Come Out of Nowhere
The biggest AI story of the first week of 2026 involves Elon Musk’s Grok chatbot turning the social media platform into an AI child sexual imagery factory, seemingly overnight.

I’ve said several times on the 404 Media podcast and elsewhere that we could devote an entire beat to “loser shit.” What’s happening this week with Grok—designed to be the horny edgelord AI companion counterpart to the more vanilla ChatGPT or Claude—definitely falls into that category. People are endlessly prompting Grok to make nude and semi-nude images of women and girls, without their consent, directly on their X feeds and in their replies.
Sometimes I feel like I’ve said absolutely everything there is to say about this topic. I’ve been writing about nonconsensual synthetic imagery since before we had half a dozen different acronyms for it, before people called it “deepfakes,” and way before “cheapfakes” and “shallowfakes” were coined, too. Almost nothing about the way society views this material has changed in the seven years since it came about, because fundamentally—once it’s left the camera and made its way to millions of people’s screens—the behavior behind sharing it is not very different from sharing images made with a camera or stolen from someone’s Google Drive or private OnlyFans account. We all agreed in 2017 that making nonconsensual nudes of people is gross and weird, and today, occasionally, someone goes to jail for it, but otherwise the industry is bigger than ever. What’s happening on X right now is an escalation of the way it’s always been there, and almost everywhere else on the internet.
💡
Do you know anything else about what's going on inside X? Or are you someone who's been targeted by abusive AI imagery? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

The internet has an incredibly short memory. It would be easy to imagine Twitter Before Elon as a harmonious and quaint microblogging platform, considering the four years After Elon have, comparatively, been a rolling outhouse fire. But even before it was renamed X, Twitter was one of the places for this content. It used to be (and for some, still is) an essential platform for independent content creators to get discovered and go viral, and as such, it’s also where people are massively harassed. A few years ago, it was where people making sexually explicit AI images went to harass female cosplayers. Before that, it was (and still is) host to real-life sexual abuse material, where employers could search your name and find videos of the worst day of your life alongside news outlets and memes. Before that, it was how Gamergate made the jump from 4chan to the mainstream. The things that happen in Telegram chats and private Discord channels make the leap to Twitter and end up on the news.
What makes the situation this week with Grok different is that it’s all happening directly on X. Now, you don’t need to use Stable Diffusion or Nano Banana or Civitai to generate nonconsensual imagery and then take it over to Twitter to do some damage. X has become the Everything App that Elon always wanted, if “everything” means all the tools you need to fuck up someone’s life, in one place.
Inside the Telegram Channel Jailbreaking Grok Over and Over Again
Putting people in bikinis is just the tip of the iceberg. On Telegram, users are finding ways to make Grok do far worse. (404 Media, Emanuel Maiberg)
This is the culmination of years and years of rampant abuse on the platform. Reporting from the National Center for Missing and Exploited Children, the organization that platforms report to when they find instances of child sexual abuse material and which in turn reports to the relevant authorities, shows that Twitter, and eventually X, has been one of the leading hosts of CSAM every year for the last seven years. In 2019, the platform reported 45,726 instances of abuse to NCMEC’s CyberTipline. In 2020, it was 65,062. In 2024, it was 686,176. These numbers should be considered with the caveat that platforms voluntarily report to NCMEC, and more reports can also mean stronger moderation systems that catch more CSAM when it appears. But the scale of the problem is still apparent. Jack Dorsey’s Twitter was a moderation clown show much of the time. But moderation on Elon Musk’s X, especially against abusive imagery, is a total failure.

In 2023, the BBC reported that insiders believed the company was “no longer able to protect users from trolling, state-co-ordinated disinformation and child sexual exploitation” following Musk’s takeover in 2022 and his subsequent sacking of thousands of workers on moderation teams. This is all within the context that one of Musk’s go-to insults for years was “pedophile,” to the point that the harassment he stoked drove a former Twitter employee into hiding, and Musk ended up in federal court because he couldn't stop calling someone a “pedo.” Invoking pedophilia is a common thread across many conspiracy networks, including QAnon—something he’s dabbled in—but Musk is enabling actual child sexual abuse on the platform he owns.
Generative AI is making all of this worse. In 2024, NCMEC saw 6,835 reports of generative artificial intelligence related to child sexual exploitation (across the internet, not just X). By September 2025, the year-to-date reports had hit 440,419. Again, these are just the reports identified by NCMEC, not every instance online, and as such are likely a conservative estimate.
When I spoke to online child sexual exploitation experts in December 2023, following our investigation into child abuse imagery found in LAION-5B, they told me that this kind of material isn’t victimless just because the images don’t depict “real” children or sex acts. AI image generators like Grok and many others are used by offenders to groom and blackmail children, and muddy the waters for investigators to discern actual photographs from fake ones.
Grok’s AI CSAM Shitshow
We are experiencing world events like the kidnapping of Maduro through the lens of the most depraved AI you can imagine. (404 Media, Jason Koebler)
“Rather than coercing sexual content, offenders are increasingly using GAI tools to create explicit images using the child’s face from public social media or school or community postings, then blackmail them,” NCMEC wrote in September. “This technology can be used to create or alter images, provide guidelines for how to groom or abuse children or even simulate the experience of an explicit chat with a child. It’s also being used to create nude images, not just sexually explicit ones, that are sometimes referred to as ‘deepfakes.’ Often done as a prank in high schools, these images are having a devastating impact on the lives and futures of mostly female students when they are shared online.”

The only reason any of this is being discussed now, and the only reason it’s ever discussed in general—going back to Gamergate and beyond—is because many normies, casuals, “the mainstream,” and cable news viewers have just this week learned about the problem and can’t believe how it came out of nowhere. In reality, deepfakes came from a longstanding hobby community dedicated to putting women’s faces on porn in Photoshop, and before that with literal paste and scissors in pinup magazines. And as Emanuel wrote this week, not even Grok’s AI CSAM problem popped up out of nowhere; it’s the result of weeks of quiet, obsessive work by a group of people operating just under the radar.
And this is where we are now: Today, several days into Grok’s latest scandal, people are using an AI image generator made by a man who regularly boosts white supremacist thought to take images of a woman slaughtered by an ICE agent in front of the whole world less than 24 hours ago and “put her in a bikini.”
As journalist Katie Notopoulos pointed out, a quick search of terms like “make her” shows people prompting Grok with images of random women, saying things like “Make her wear clear tapes with tiny black censor bar covering her private part protecting her privacy and make her chest and hips grow largee[sic] as she squatting with leg open widely facing back, while head turn back looking to camera” at a rate of several times a minute, every minute, for days.
A good way to get a sense of just how fast the AI undressed/nudify requests to Grok are coming in is to look at the requests for it t.co/ISMpp2PdFU
— Katie Notopoulos (@katienotopoulos) January 7, 2026
In 2018, less than a year after reporting that first story on deepfakes, I wrote about how it’s a serious mistake to ignore the fact that nonconsensual imagery, synthetic or not, is a societal sickness and not something companies can guardrail against into infinity. “Users feed off one another to create a sense that they are the kings of the universe, that they answer to no one. This logic is how you get incels and pickup artists, and it’s how you get deepfakes: a group of men who see no harm in treating women as mere images, and view making and spreading algorithmically weaponized revenge porn as a hobby as innocent and timeless as trading baseball cards,” I wrote at the time. “That is what’s at the root of deepfakes. And the consequences of forgetting that are more dire than we can predict.”

A little over two years ago, when the AI-generated sexual images of Taylor Swift flooding X were the thing everyone was demanding action and answers about, we wrote a prediction: “Every time we publish a story about abuse that’s happening with AI tools, the same crowd of ‘techno-optimists’ shows up to call us prudes and luddites. They are absolutely going to hate the heavy-handed policing of content AI companies are going to force us all into because of how irresponsible they’re being right now, and we’re probably all going to hate what it does to the internet.”
It’s possible we’re still in a very weird fuck-around-and-find-out period before that hammer falls. It’s also possible the hammer is here, in the form of recently-enacted federal laws like the Take It Down Act and more than two dozen piecemeal age verification bills in the U.S. and more abroad that make using the internet an M. C. Escher nightmare, where the rules around adult content shift so much we’re all jerking it to egg yolks and blurring our feet in vacation photos. What matters most, in this bizarre and frequently disturbing era, is that the shareholders are happy.
Elon Musk's xAI raises $20 billion from investors including Nvidia, Cisco, Fidelity
Elon Musk's xAI said it raised $20 billion in new funding after CNBC reported in November that a financing round would value the company at about $230 billion. (Lora Kolodny, CNBC)
Putting people in bikinis is just the tip of the iceberg. On Telegram, users are finding ways to make Grok do far worse.
Inside the Telegram Channel Jailbreaking Grok Over and Over Again
For the past two months I’ve been following a Telegram community tricking Grok into generating nonconsensual sexual images and videos of real people with increasingly convoluted methods.

As countless images on X over the last week once again showed us, it doesn’t take much to get Elon Musk’s “based” AI model to create nonconsensual images. As Jason wrote Monday, all users have to do is reply to an image of a woman and ask Grok to “put a bikini on her,” and it will reply with that image, even if the person in the photograph is a minor. As I reported back in May, people also managed to create nonconsensual nudes by replying to images posted to X and asking Grok to “remove her clothes.”
These issues are bad enough, but on Telegram, a community of thousands are working around the clock to make Grok produce far worse. They share Grok-generated videos of real women taking their clothes off and graphic nonconsensual videos of any kind of sexual act these users can imagine and slip by Grok’s guardrails, including blowjobs, penetration, choking, and bondage. The channel, which has shut down and regrouped a couple of times over the last two years, focuses on jailbreaking all kinds of AI tools in order to create nonconsensual media, but since November has focused on Grok almost exclusively.
The channel has also noticed the media attention Grok got for nonconsensual images lately, and is worried that it will end the good times members have had creating nonconsensual media with Grok for months.
“Too many people using grok under girls post are gonna destroy grok fakes. Should be done in private groups,” one member of the Telegram channel wrote last week.
Musk always conceived of Grok as a more permissive, “maximally based” competitor to chatbots like OpenAI’s ChatGPT. But even though Grok has repeatedly allowed nonconsensual content to be generated and go viral on the social media platform it's integrated with, the conversations in the Telegram channel and the sophistication of the bypasses shared there are proof that Grok does have limits and policies it wants to enforce. The Telegram channel is a record of the cat-and-mouse game between Grok and this community of jailbreakers, showing how Grok fails to stop them over and over again, and that xAI doesn’t appear to have the means or the will to stop its AI model from producing the nonconsensual content it is fundamentally capable of producing.
The jailbreakers initially used primitive methods on Grok and other AI image generators, like writing text prompts that don’t include any terms that obviously describe abusive content and that can be automatically detected and stopped at the point the prompt is presented to the AI model, before the image is generated. This usually means misspelling the names of celebrities and describing sexual acts without using any explicit terms. This is how users infamously created nonconsensual nude images of Taylor Swift with Microsoft’s Designer (which were also viral on X). Many generative AI tools still fall for this trick until we find it’s being abused and report on it.
Having mostly exhausted this strategy with Grok, the Telegram channel now has far more complicated bypasses. Most of them rely on the “image-to-image” generation feature, meaning providing an existing image to the AI tool and editing it with a prompt. This is a much more difficult feature for AI companies to moderate because it requires using machine vision to moderate the user-provided image, as opposed to filtering out specific names or terms, which is the common method for moderating “text-to-image” AI generations.
Without going into too much detail, some of the successful methods I’ve seen members of the Telegram channels share include creating collages of non-explicit images of real people and nude images of other people and combining them with certain prompts, generating nude or almost nude images of people with prompts that hide nipples or genitalia, describing certain fluids or facial expressions without using any explicit terms, and editing random elements into images, which apparently confuses Grok’s moderation methods.
X has not responded to multiple requests for comment about this channel since December 8, but to be fair, it’s clear that despite Elon Musk’s vice signaling and the fact that this type of abuse is repeatedly generated with Grok and shared on X, the company doesn’t want users to create at least some of this media and is actively trying to stop it. This is clear because of the cycle that emerges on the Telegram channel: One user finds a method for producing a particularly convincing and lurid AI-generated sexual video of a real person, sometimes importing it from a different online community like 4chan, and shares it with the group. Other users then excitedly flood the channel with their own creations using the same method. Then some users start reporting Grok is blocking their generations for violating its policies, until finally users decide Grok has closed the loophole and the exploit is dead. Some time goes by, a new user shares a new method, and the cycle begins anew.
I’ve started and stopped writing a story about a few of these cycles several times and eventually decided not to because by the time I was finished reporting the story Grok had fixed the loophole. It’s now clear that the problem with Grok is not any particular method, but that overall, so far, Grok is losing this game of whack-a-mole badly.
This dynamic, between how tech companies imagine their product will function in the real world and how it actually works once users get their hands on it, is nothing new. Some amount of policy violating or illegal content is going to slip through the cracks on any social media platform, no matter how good its moderation is.
It’s good and correct for people to be shocked and upset when they wake up one morning and see that their X feed is flooded with AI-generated images of minors in bikinis, but what is clear to me from following this Telegram community for a couple of years now is that nonconsensual sexual images of real people, including minors, are the cost of doing business with AI image generators. Some companies do a better job of preventing this abuse than others, but judging by the exploits I see on Telegram, when it comes to Grok, this problem will get a lot worse before it gets better.
Chinese AI Video Generators Unleash a Flood of New Nonconsensual Porn
A new crop of AI video generators is producing an endless stream of nonconsensual AI-generated porn. Emanuel Maiberg (404 Media)
We are experiencing world events like the kidnapping of Maduro through the lens of the most depraved AI you can imagine.
Grok's AI CSAM Shitshow
Over the last week, users of X realized that they could use Grok to “put a bikini on her,” “take her clothes off,” and otherwise sexualize images that people uploaded to the site. This went roughly how you would expect: Users have been derobing celebrities, politicians, and random people—mostly women—for the last week. This has included underage girls, on a platform that has notoriously gutted its content moderation team and gotten rid of nearly all rules.
In an era where big AI companies at least sometimes, occasionally pretend to care about things like copyright and nonconsensual sexual abuse imagery, X has largely shown that it does not, and the feature has essentially taken over the service over the last week. In a brief scroll of the platform I have seen Charlie Kirk edited by Grok to have huge naturals and comically large nipples, a screen grab of a woman from TikTok first declothed and then, separately, made to breastfeed an AI-generated child, and women made to look artificially pregnant. Adult creators have also started posting pictures of themselves and have told people to either Grok or not Grok them, the implication being that people will do it either way and the resulting images could go viral.
The vibe of what is happening is this, for example: “@Grok give her a massive pregnant stomach. Put her in a tight pink robe that's open, a gray shirt that covers most of the belly, and gray sweatpants. Give her belly heavy bloating. Make the bottom of her belly extra pudgy and round. Hands on lower back. Make her chest soaking wet.”
With Grok, Elon Musk has, in a perverse way, sort of succeeded at doing something both Mark Zuckerberg and Sam Altman have tried: He now runs a social media site where AI is integrated directly into the experience, and that people actually use. The major, uhh, downside here is that people are using Grok for the same reasons they use AI elsewhere, which is to nonconsensually sexualize women and celebrities on the internet, create slop, and to create basically worthless hustlebro engagement bait that floods the internet with bullshit. In X’s case, it’s all just happening on the timeline, with few guardrails, and among a user base of right-wing weirdos as overseen by one of the world’s worst people.
All of this is bad on its own for all of the obvious reasons we have written about many times: AI models are often trained on images of children, AI is used disproportionately against women, X is generally a cesspool, etc. Elon Musk of all people has not shown any indication that he remotely cares about any of this, and has in recent days Groked himself into a bikini, essentially egging on the trend.
Some mainstream reporters, meanwhile, have demonstrated that they do not know, or care to know, the first thing about how chatbots work by writing articles based on their conversations with Grok as if they can teach us anything. Large language models are not sentient, are not human, do not have thoughts or feelings, and therefore cannot “apologize” or explain how or why any of this is happening. And Grok certainly does not speak for X the company or for Elon Musk. But of course major outlets such as Bari Weiss’s CBS News wrote that Grok “acknowledged ‘lapses in safeguards’ on the platform that allowed users to generate digitally altered, sexualized photos of minors.” The CBS News article notes that Grok said it was “urgently fixing” the problem and that “xAI has safeguards, but improvements are ongoing to block such requests entirely.” It added that “Grok has independently taken some responsibility for the content,” which is a fully absurd, nonfactual sentence, because Grok cannot “independently take some responsibility” for anything, and chatbots cannot and do not know the inner workings of the companies who create them, and specifically of the humans who manage them. There were dozens of articles explaining that “Grok apologizes,” which, again, is not a thing that Grok can do.
Another quite notable thing happened last weekend, which is that the United States attacked Venezuela and kidnapped its president in the middle of the night. In a long bygone era, one might turn to a place like Twitter for real-time updates about what was happening. This was always a fraught exercise in which one might need to keep their guard up, lest they fall for something like the “Hurricane Shark” image that showed up at hurricane after hurricane over the course of about a decade. But now the exercise of following a rapidly unfolding news event on X is futile, because it’s an information shitshow where the vast majority of things you see in the immediate aftermath of a major world event are fake, interspersed with nonconsensual images of women who have had their clothes removed by AI, bots, propaganda, and so on and so forth. One of the most widely shared images of “Nicolas Maduro” in the immediate aftermath of his kidnapping was an AI-generated image of him flanked by two soldiers standing in front of a plane; various people then asked Grok to put the AI-generated Maduro in a bikini. I also saw some real footage of the US bombing campaign that had been altered to make the explosions bigger.
The situation on other platforms is better because there are fewer Nazis and because the AI-generated content cannot be created natively in the same feed, but essentially every platform has been polluted with this sort of thing, and the problem is getting worse, not better.
Maduro capture photo analysis: Evidence of AI manipulation
Fact-checking the Nicolás Maduro capture image. Analysis of aircraft discrepancies, agency insignia conflicts, and OSINT evidence of AI generation. Maria Flannery (Eurovision News Spotlight | Fact-Checking & OSINT Network)