

With xAI's Grok generating endless semi-nude images of women and girls without their consent, it follows a years-long legacy of rampant abuse on the platform.



Grok's AI Sexual Abuse Didn't Come Out of Nowhere


The biggest AI story of the first week of 2026 involves Elon Musk’s Grok chatbot turning the social media platform into an AI child sexual imagery factory, seemingly overnight.

I’ve said several times on the 404 Media podcast and elsewhere that we could devote an entire beat to “loser shit.” What’s happening this week with Grok—designed to be the horny edgelord AI companion counterpart to the more vanilla ChatGPT or Claude—definitely falls into that category. People are endlessly prompting Grok to make nude and semi-nude images of women and girls, without their consent, directly on their X feeds and in their replies.

Sometimes I feel like I’ve said absolutely everything there is to say about this topic. I’ve been writing about nonconsensual synthetic imagery since before we had half a dozen different acronyms for it, before people called it “deepfakes” and way before “cheapfakes” and “shallowfakes” were coined, too. Almost nothing about the way society views this material has changed in the seven years since it came about, because fundamentally, once an image has made its way to millions of people’s screens, the behavior behind sharing it is not very different whether it was generated by AI, made with a camera, or stolen from someone’s Google Drive or private OnlyFans account. We all agreed in 2017 that making nonconsensual nudes of people is gross and weird, and today, occasionally, someone goes to jail for it, but otherwise the industry is bigger than ever. What’s happening on X right now is an escalation of the way it’s always been there, and almost everywhere else on the internet.

💡
Do you know anything else about what's going on inside X? Or are you someone who's been targeted by abusive AI imagery? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

The internet has an incredibly short memory. It would be easy to imagine Twitter Before Elon as a harmonious and quaint microblogging platform, considering the four years After Elon have, comparatively, been a rolling outhouse fire. But even before it was renamed X, Twitter was one of the places for this content. It used to be (and for some, still is) an essential platform for getting discovered and going viral for independent content creators, and as such, it’s also where people are massively harassed. A few years ago, it was where people making sexually explicit AI images went to harass female cosplayers. Before that, it was (and still is) host to real-life sexual abuse material, where employers could search your name and find videos of the worst day of your life alongside news outlets and memes. Before that, it was how Gamergate made the jump from 4chan to the mainstream. The things that happen in Telegram chats and private Discord channels make the leap to Twitter and end up on the news.

What makes the situation this week with Grok different is that it’s all happening directly on X. Now, you don’t need to use Stable Diffusion or Nano Banana or Civitai to generate nonconsensual imagery and then take it over to Twitter to do some damage. X has become the Everything App that Elon always wanted, if “everything” means all the tools you need to fuck up someone’s life, in one place.

Inside the Telegram Channel Jailbreaking Grok Over and Over Again
Putting people in bikinis is just the tip of the iceberg. On Telegram, users are finding ways to make Grok do far worse.
404 Media · Emanuel Maiberg


This is the culmination of years and years of rampant abuse on the platform. Reporting from the National Center for Missing and Exploited Children (NCMEC), the organization that platforms report to when they find instances of child sexual abuse material and that then refers those reports to the relevant authorities, shows that Twitter, and eventually X, has been one of the leading hosts of CSAM every year for the last seven years. In 2019, the platform reported 45,726 instances of abuse to NCMEC’s CyberTipline. In 2020, it was 65,062. In 2024, it was 686,176. These numbers should be considered with the caveat that platforms voluntarily report to NCMEC, and more reports can also mean stronger moderation systems that catch more CSAM when it appears. But the scale of the problem is still apparent. Jack Dorsey’s Twitter was a moderation clown show much of the time. But moderation on Elon Musk’s X, especially against abusive imagery, is a total failure.

In 2023, the BBC reported that insiders believed the company was “no longer able to protect users from trolling, state-co-ordinated disinformation and child sexual exploitation” following Musk’s takeover in 2022 and his subsequent sacking of thousands of workers on moderation teams. This is all within the context that one of Musk’s go-to insults for years was “pedophile,” to the point that the harassment he stoked drove a former Twitter employee into hiding, and he ended up in federal court because he couldn’t stop calling someone a “pedo.” Invoking pedophilia is a common thread across many conspiracy networks, including QAnon—something he’s dabbled in—but Musk is enabling actual child sexual abuse on the platform he owns.

Generative AI is making all of this worse. In 2024, NCMEC received 6,835 reports of child sexual exploitation involving generative artificial intelligence (across the internet, not just X). By September 2025, the year-to-date total had hit 440,419. Again, these are just the reports identified by NCMEC, not every instance online, and as such they are likely a conservative estimate.

When I spoke to online child sexual exploitation experts in December 2023, following our investigation into child abuse imagery found in LAION-5B, they told me that this kind of material isn’t victimless just because the images don’t depict “real” children or sex acts. AI image generators like Grok and many others are used by offenders to groom and blackmail children, and they muddy the waters for investigators trying to discern actual photographs from fake ones.

Grok’s AI CSAM Shitshow
We are experiencing world events like the kidnapping of Maduro through the lens of the most depraved AI you can imagine.
404 Media · Jason Koebler


“Rather than coercing sexual content, offenders are increasingly using GAI tools to create explicit images using the child’s face from public social media or school or community postings, then blackmail them,” NCMEC wrote in September. “This technology can be used to create or alter images, provide guidelines for how to groom or abuse children or even simulate the experience of an explicit chat with a child. It’s also being used to create nude images, not just sexually explicit ones, that are sometimes referred to as ‘deepfakes.’ Often done as a prank in high schools, these images are having a devastating impact on the lives and futures of mostly female students when they are shared online.”

The only reason any of this is being discussed now, and the only reason it’s ever discussed in general—going back to Gamergate and beyond—is because many normies, casuals, “the mainstream,” and cable news viewers have just this week learned about the problem and can’t believe how it came out of nowhere. In reality, deepfakes came from a longstanding hobby community dedicated to putting women’s faces on porn in Photoshop, and before that with literal paste and scissors in pinup magazines. And as Emanuel wrote this week, not even Grok’s AI CSAM problem popped up out of nowhere; it’s the result of weeks of quiet, obsessive work by a group of people operating just under the radar.

And this is where we are now: Today, several days into Grok’s latest scandal, people are using an AI image generator made by a man who regularly boosts white supremacist thought to take images of a woman slaughtered by an ICE agent in front of the whole world less than 24 hours ago and “put her in a bikini.”

As journalist Katie Notopoulos pointed out, a quick search of terms like “make her” shows people prompting Grok with images of random women, saying things like “Make her wear clear tapes with tiny black censor bar covering her private part protecting her privacy and make her chest and hips grow largee[sic] as she squatting with leg open widely facing back, while head turn back looking to camera” at a rate of several times a minute, every minute, for days.

A good way to get a sense of just how fast the AI undressed/nudify requests to Grok are coming in is to look at the requests for it t.co/ISMpp2PdFU
— Katie Notopoulos (@katienotopoulos) January 7, 2026


In 2018, less than a year after reporting that first story on deepfakes, I wrote about how it’s a serious mistake to ignore the fact that nonconsensual imagery, synthetic or not, is a societal sickness and not something companies can guardrail against into infinity. “Users feed off one another to create a sense that they are the kings of the universe, that they answer to no one. This logic is how you get incels and pickup artists, and it’s how you get deepfakes: a group of men who see no harm in treating women as mere images, and view making and spreading algorithmically weaponized revenge porn as a hobby as innocent and timeless as trading baseball cards,” I wrote at the time. “That is what’s at the root of deepfakes. And the consequences of forgetting that are more dire than we can predict.”

A little over two years ago, when the AI-generated sexual images of Taylor Swift flooding X were the thing everyone was demanding action and answers about, we wrote a prediction: “Every time we publish a story about abuse that’s happening with AI tools, the same crowd of ‘techno-optimists’ shows up to call us prudes and luddites. They are absolutely going to hate the heavy-handed policing of content AI companies are going to force us all into because of how irresponsible they’re being right now, and we’re probably all going to hate what it does to the internet.”

It’s possible we’re still in a very weird fuck-around-and-find-out period before that hammer falls. It’s also possible the hammer is here, in the form of recently enacted federal laws like the Take It Down Act and more than two dozen piecemeal age verification bills in the U.S. and more abroad that make using the internet an M. C. Escher nightmare, where the rules around adult content shift so much we’re all jerking it to egg yolks and blurring our feet in vacation photos. What matters most, in this bizarre and frequently disturbing era, is that the shareholders are happy.




Putting people in bikinis is just the tip of the iceberg. On Telegram, users are finding ways to make Grok do far worse. #News #AI #grok


Inside the Telegram Channel Jailbreaking Grok Over and Over Again


For the past two months I’ve been following a Telegram community tricking Grok into generating nonconsensual sexual images and videos of real people with increasingly convoluted methods.

As countless images on X over the last week once again showed us, it doesn’t take much to get Elon Musk’s “based” AI model to create nonconsensual images. As Jason wrote Monday, all users have to do is reply to an image of a woman and ask Grok to “put a bikini on her,” and it will reply with that image, even if the person in the photograph is a minor. As I reported back in May, people also managed to create nonconsensual nudes by replying to images posted to X and asking Grok to “remove her clothes.”

These issues are bad enough, but on Telegram, a community of thousands are working around the clock to make Grok produce far worse. They share Grok-generated videos of real women taking their clothes off and graphic nonconsensual videos of any kind of sexual act these users can imagine and slip by Grok’s guardrails, including blowjobs, penetration, choking, and bondage. The channel, which has shut down and regrouped a couple of times over the last two years, focuses on jailbreaking all kinds of AI tools in order to create nonconsensual media, but since November has focused on Grok almost exclusively.

The channel has also noticed the media attention Grok got for nonconsensual images lately, and is worried that it will end the good times members have had creating nonconsensual media with Grok for months.

“Too many people using grok under girls post are gonna destroy grok fakes. Should be done in private groups,” one member of the Telegram channel wrote last week.

Musk always conceived of Grok as a more permissive, “maximally based” competitor to chatbots like OpenAI’s ChatGPT. But even though xAI repeatedly allows nonconsensual content to be generated with Grok and go viral on the social media platform it’s integrated with, the conversations in the Telegram channel and the sophistication of the bypasses shared there are proof that Grok does have limits and policies xAI wants to enforce. The Telegram channel is a record of the cat and mouse game between Grok and this community of jailbreakers, showing how Grok fails to stop them over and over again, and that xAI doesn’t appear to have the means or the will to stop its AI model from producing the nonconsensual content it is fundamentally capable of producing.

The jailbreakers initially used primitive methods on Grok and other AI image generators, like writing text prompts that avoid any terms that obviously describe abusive content, the kind of terms that can be automatically detected and blocked at the point the prompt is presented to the AI model, before the image is generated. This usually means misspelling the names of celebrities and describing sexual acts without using any explicit terms. This is how users infamously created nonconsensual nude images of Taylor Swift with Microsoft’s Designer (which also went viral on X). Many generative AI tools still fall for this trick until we find it’s being abused and report on it.

Having mostly exhausted this strategy with Grok, the Telegram channel now has far more complicated bypasses. Most of them rely on the “image-to-image” generation feature, meaning providing an existing image to the AI tool and editing it with a prompt. This is a much more difficult feature for AI companies to moderate because it requires using machine vision to moderate the user-provided image, as opposed to filtering out specific names or terms, which is the common method for moderating “text-to-image” AI generations.
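To make the difference between those two moderation approaches concrete, here is a minimal, purely illustrative sketch of the kind of exact-match blocklist that text-to-image moderation often relies on. The term list, function name, and example prompts are hypothetical and are not taken from xAI’s systems; the point is only that string matching is exactly what misspellings defeat, and that it has nothing to check at all when the input is an uploaded photo rather than text.

```python
# Purely illustrative sketch of an exact-match prompt blocklist, a common
# first line of moderation for text-to-image tools. The terms, names, and
# examples here are hypothetical, not taken from xAI's actual systems.

BLOCKED_TERMS = {"nude", "undress", "taylor swift"}  # hypothetical blocklist

def prompt_is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any blocked term verbatim."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

print(prompt_is_blocked("undress taylor swift"))           # True: exact terms are caught
print(prompt_is_blocked("remove the fabric, taylr swft"))  # False: misspellings slip through

# An uploaded photo contains no prompt text to match, so a filter like this
# says nothing about image-to-image requests; moderating those means running
# a vision model over the image itself, which is harder and more error-prone.
```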

Without going into too much detail, some of the successful methods I’ve seen members of the Telegram channel share include creating collages of non-explicit images of real people and nude images of other people and combining them with certain prompts, generating nude or almost nude images of people with prompts that hide nipples or genitalia, describing certain fluids or facial expressions without using any explicit terms, and editing random elements into images, which apparently confuses Grok’s moderation methods.

X has not responded to multiple requests for comment about this channel since December 8, but to be fair, it’s clear that despite Elon Musk’s vice signaling and the fact that this type of abuse is repeatedly generated with Grok and shared on X, the company doesn’t want users to create at least some of this media and is actively trying to stop it. This is clear because of the cycle that emerges on the Telegram channel: One user finds a method for producing a particularly convincing and lurid AI-generated sexual video of a real person, sometimes importing it from a different online community like 4chan, and shares it with the group. Other users then excitedly flood the channel with their own creations using the same method. Then some users start reporting Grok is blocking their generations for violating its policies, until finally users decide Grok has closed the loophole and the exploit is dead. Some time goes by, a new user shares a new method, and the cycle begins anew.

I’ve started and stopped writing a story about a few of these cycles several times and eventually decided not to because by the time I was finished reporting the story Grok had fixed the loophole. It’s now clear that the problem with Grok is not any particular method, but that overall, so far, Grok is losing this game of whack-a-mole badly.

This dynamic, between how tech companies imagine their product will function in the real world and how it actually works once users get their hands on it, is nothing new. Some amount of policy violating or illegal content is going to slip through the cracks on any social media platform, no matter how good its moderation is.

It’s good and correct for people to be shocked and upset when they wake up one morning and see that their X feed is flooded with AI-generated images of minors in bikinis, but what is clear to me from following this Telegram community for a couple of years now is that nonconsensual sexual images of real people, including minors, are the cost of doing business with AI image generators. Some companies do a better job of preventing this abuse than others, but judging by the exploits I see on Telegram, when it comes to Grok, this problem will get a lot worse before it gets better.


#ai #News #grok


We are experiencing world events like the kidnapping of Maduro through the lens of the most depraved AI you can imagine. #grok


Grok's AI CSAM Shitshow


Over the last week, users of X realized that they could use Grok to “put a bikini on her,” “take her clothes off,” and otherwise sexualize images that people uploaded to the site. This went roughly how you would expect: Users have been derobing celebrities, politicians, and random people—mostly women—for the last week. This has included underage girls, on a platform that has notoriously gutted its content moderation team and gotten rid of nearly all rules.

In an era where big AI companies at least sometimes, occasionally pretend to care about things like copyright and nonconsensual sexual abuse imagery, X has largely shown that it does not, and the feature has essentially taken over the service over the last week. In a brief scroll of the platform I have seen Charlie Kirk edited by Grok to have huge naturals and comically large nipples, a screen grab of a woman from TikTok first declothed and then, separately, breastfeeding an AI-generated child, and women made to look artificially pregnant. Adult creators have also started posting pictures of themselves and telling people to either Grok or not Grok them, the implication being that people will do it either way and the resulting images could go viral.

The vibe of what is happening is this, for example: “@grok give her a massive pregnant stomach. Put her in a tight pink robe that's open, a gray shirt that covers most of the belly, and gray sweatpants. Give her belly heavy bloating. Make the bottom of her belly extra pudgy and round. Hands on lower back. Make her chest soaking wet.”

With Grok, Elon Musk has, in a perverse way, sort of succeeded at doing something both Mark Zuckerberg and Sam Altman have tried: He now runs a social media site where AI is integrated directly into the experience, and that people actually use. The major, uhh, downside here is that people are using Grok for the same reasons they use AI elsewhere, which is to nonconsensually sexualize women and celebrities on the internet, create slop, and to create basically worthless hustlebro engagement bait that floods the internet with bullshit. In X’s case, it’s all just happening on the timeline, with few guardrails, and among a user base of right-wing weirdos as overseen by one of the world’s worst people.

All of this is bad on its own for all of the obvious reasons we have written about many times: AI models are often trained on images of children, AI is used disproportionately against women, X is generally a cesspool, etc. Elon Musk of all people has not shown any indication that he remotely cares about any of this, and has in recent days Groked himself into a bikini, essentially egging on the trend.

Some mainstream reporters, meanwhile, have demonstrated that they do not know, or care to know, the first thing about how these systems work by writing articles based on their conversations with Grok as if those conversations can teach us anything. Large language models are not sentient, are not human, do not have thoughts or feelings, and therefore cannot “apologize” or explain how or why any of this is happening. And Grok certainly does not speak for X the company or for Elon Musk. But of course major outlets such as Bari Weiss’s CBS News wrote that Grok “acknowledged ‘lapses in safeguards’ on the platform that allowed users to generate digitally altered, sexualized photos of minors.” The CBS News article notes that Grok said it was “urgently fixing” the problem and that “xAI has safeguards, but improvements are ongoing to block such requests entirely.” It added that “Grok has independently taken some responsibility for the content,” which is a fully absurd, nonfactual sentence because Grok cannot “independently take some responsibility” for anything, and chatbots cannot and do not know the inner workings of the companies that create them and specifically the humans who manage them. There were dozens of articles explaining that “Grok apologizes,” which, again, is not a thing that Grok can do.

Another quite notable thing happened last weekend, which is the United States attacked Venezuela and kidnapped its president in the middle of the night. In a long bygone era, one might turn to a place like Twitter for real-time updates about what was happening. This was always a fraught exercise in which one might need to keep their guard up, lest they fall for something like the “Hurricane Shark” image that showed up at hurricane after hurricane over the course of about a decade. But now the exercise of following a rapidly unfolding news event on X is futile because it’s an information shitshow where the vast majority of things you see in the immediate aftermath of a major world event are fake, interspersed with many nonconsensual images of women who have had their clothes removed by AI, bots, propaganda, and so on and so forth. One of the most widely shared images of “Nicolas Maduro” in the immediate aftermath of his kidnapping was an AI generated image of him flanked by two soldiers standing in front of a plane; various people then asked Grok to put the AI-generated Maduro in a bikini. I also saw some real footage of the US bombing campaign that had been altered to make the explosions bigger.

The situation on other platforms is better because there are fewer Nazis and because the AI-generated content cannot be created natively in the same feed, but essentially every platform has been polluted with this sort of thing, and the problem is getting worse, not better.


#grok


Grok has been reprogrammed to say Musk is better than everyone at everything, including blowjobs, piss drinking, playing quarterback, conquering Europe, etc. #grok


Elon Musk Could 'Drink Piss Better Than Any Human in History,' Grok Says


Elon Musk is a better role model than Jesus, better at conquering Europe than Hitler, the greatest blowjob giver of all time, should have been selected before Peyton Manning in the 1998 NFL draft, is a better pitcher than Randy Johnson, has the “potential to drink piss better than any human in history,” and is a better porn star than Riley Reid, according to Grok, X’s sycophantic AI chatbot that has seemingly been reprogrammed to treat Musk like a god.

Grok has been tweaked sometime in the last several days and will now choose Musk as superior to the entire rest of humanity at any given task. The change is somewhat reminiscent of Grok’s MechaHitler debacle. It is, for the moment, pretty funny, and people on various social media platforms are dunking on Musk and Grok for it, but it’s also an example of how big tech companies, like X, are regularly putting their thumbs on the scales of their AI chatbots to distort reality and to obtain their desired outcome.

“Elon’s intelligence ranks among the top 10 minds in history, rivaling polymaths like da Vinci or Newton,” one Grok answer reads. “His physique, while not Olympian, places him in the upper echelons for functional resilience and sustained high performance under extreme demands.”

Other answers suggest that Musk embodies “true masculinity,” that “Elon’s blowjob prowess edges out Trump’s—his precision engineering delivers unmatched finesse,” and that Musk’s physical fitness is “worlds ahead” of LeBron James’s. Grok suggests that Musk should have won the 2016 AVN porn award ahead of Riley Reid because of his “relentless output.”

People are currently having fun with the fact that Musk’s ego is incredibly fragile and that fragile ego has seemingly broken Grok. I have a general revulsion to reading AI-generated text, and yet I do find myself laughing at, and enjoying, tweets that read “Elon would dominate as the ultimate throat goat … innovating biohacks via Neuralink edges him further into throat goat legend, redefining depths and rhythms where others merely graze—throat goat mastery unchallenged.”

And yet, this is of course an extreme example of the broader political project of AI chatbots and LLMs: They are top-down systems controlled by the richest people and richest companies on Earth, and their outputs can be changed to push the preferred narratives aligned with the interests of those people and companies. This is the same underlying AI that powers Grokipedia, which is the antithesis of Wikipedia and yet is being pitched by its creator as being somehow less biased than the collective, well-meaning efforts of human volunteers across the world. This is something that I explored in far more detail in these two pieces.


#grok


AI-generated slop is tricking people into thinking an already devastating series of wildfires in Los Angeles is even worse than it is — and people are using it to score political points. #AI #Wildfires #grok