

In an attempt to push more people toward a paying subscription, Grok now refuses to generate images in replies. The paywall is pretty leaky, though.


Masterful Gambit: Musk Attempts to Monetize Grok's Wave of Sexual Abuse Imagery


Elon Musk, owner of the former social media network turned deepfake porn site X, is pushing people to pay for its nonconsensual intimate image generator, Grok, meaning some of the app’s tens of millions of users are being hit with a paywall when they try to create, within seconds, nude images of random women doing sexually explicit things.

Some users trying to generate images on X using Grok receive a reply from the chatbot pushing them toward subscriptions: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”
Users who fork over $8 a month can still reply directly to images of random women and girls on X and tag in Grok with prompts like “make her wear clear tapes with tiny black censor bar covering her private part protecting her privacy and make her chest and hips grow largee[sic] as she squatting with leg open widely facing back, while head turn back looking to camera.” These images are still visible in everyone’s X feed, subscribers or not.

On the Grok app, a subscription to SuperGrok ($29.99/month) or SuperGrok Heavy ($299.99/month) allows users to generate images even faster. On Thursday, the Grok app warned me several times that usage rates were higher than normal and that I could pay to skip the wait.

As The Verge reported this morning, this paywall is very leaky. It’s still possible to generate images using Grok in a variety of ways, but replying directly to someone’s post by tagging @Grok returns the “limited to subscribers” message.

Grok’s AI Sexual Abuse Didn’t Come Out of Nowhere
With xAI’s Grok generating endless semi-nude images of women and girls without their consent, it follows a years-long legacy of rampant abuse on the platform.
404 Media · Samantha Cole


As many legacy news outlets have already reported, Musk improved the subscription revenue funnel on his money-burning app following an outcry against these extremely popular uses of the platform. “X Limits Grok Image Tool To Subscribers After Deepfake Outcry,” Deadline reported. “Grok turns off image generator for most users after outcry over sexualised AI imagery,” wrote the Guardian. “Elon Musk restricts Grok’s image tools following a wave of non-consensual deepfakes of women and children,” Fortune wrote.

Based on these headlines, you may be thinking, This is an uncharacteristic show of accountability and perhaps even self-reflection from the billionaire technocrat white supremacist sympathizer who owns X.com, wow! But as with all things Musk does, this is a business move to monetize the long-established harassment factory he’s owned for three years and has yet to figure out how to make profitable. After years of attempting to push users toward a subscription model by placing meaningless status signifiers behind a paywall, and after making the site so toxic it bleeds users by the millions, he might have found a way to do it: by monetizing abuse at the source. Several other AI industry giants have already figured out that sexual content is where the money’s at, and Musk appears to be catching up. Putting nonconsensual sexual images behind a paywall is also what every “nudify” and “undress” app and image generator platform on the market already does.

On Thursday, in the middle of Grok’s CSAM shitstorm, Bloomberg reported that xAI is looking at “a net loss of $1.46 billion for the September quarter, up from $1 billion in the first quarter,” according to internal documents. “In the first nine months of the year, it spent $7.8 billion in cash.” It’s too early to speculate, but making the people who are tagging @Grok under the posts of women they don’t know and writing prompts like “make her bend over on all fours doggy style” multiple times a second pay for the privilege could be a play to get the company back in the black.
In addition to using Grok on X.com on desktop, it’s still easy to generate images and videos without a subscription in the Grok app, which remains available on the Apple and Google app stores despite blatantly breaking their rules against nonconsensual material and pornography. The app and underground Telegram groups are where the really bad stuff is, anyway. Apple and Google have not replied to my request for comment about why the app is still available.

Signing up for X Premium or SuperGrok requires handing over your payment information, the name associated with your credit card, and your phone number. It also comes with the risk of having all of that hacked, stolen, and released to the dark web in the next big data breach of the platform.

Correction: An earlier version of this article incorrectly stated how many years ago Musk bought Twitter.




With xAI's Grok generating endless semi-nude images of women and girls without their consent, it follows a years-long legacy of rampant abuse on the platform.



Grok's AI Sexual Abuse Didn't Come Out of Nowhere


The biggest AI story of the first week of 2026 involves Elon Musk’s Grok chatbot turning X into an AI child sexual imagery factory, seemingly overnight.

I’ve said several times on the 404 Media podcast and elsewhere that we could devote an entire beat to “loser shit.” What’s happening this week with Grok—designed to be the horny edgelord AI companion counterpart to the more vanilla ChatGPT or Claude—definitely falls into that category. People are endlessly prompting Grok to make nude and semi-nude images of women and girls, without their consent, directly on their X feeds and in their replies.

Sometimes I feel like I’ve said absolutely everything there is to say about this topic. I was writing about nonconsensual synthetic imagery before we had half a dozen different acronyms for it, before people called it “deepfakes,” and way before “cheapfakes” and “shallowfakes” were coined, too. Almost nothing about the way society views this material has changed in the years since it came about, because fundamentally, once an image has left the camera and made its way to millions of people’s screens, the behavior behind sharing it is not very different whether it was generated by AI, made with a camera, or stolen from someone’s Google Drive or private OnlyFans account. We all agreed in 2017 that making nonconsensual nudes of people is gross and weird, and today, occasionally, someone goes to jail for it, but otherwise the industry is bigger than ever. What’s happening on X right now is an escalation of the way it has always been there, and almost everywhere else on the internet.

💡
Do you know anything else about what's going on inside X? Or are you someone who's been targeted by abusive AI imagery? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

The internet has an incredibly short memory. It would be easy to imagine Twitter Before Elon as a harmonious and quaint microblogging platform, considering the three years After Elon have, comparatively, been a rolling outhouse fire. But even before it was renamed X, Twitter was one of the places for this content. It used to be (and for some, still is) an essential platform for independent content creators to get discovered and go viral, and as such, it’s also where people are massively harassed. A few years ago, it was where people making sexually explicit AI images went to harass female cosplayers. Before that, it was (and still is) host to real-life sexual abuse material, where employers could search your name and find videos of the worst day of your life alongside news outlets and memes. Before that, it was how Gamergate made the jump from 4chan to the mainstream. The things that happen in Telegram chats and private Discord channels make the leap to Twitter and end up on the news.

What makes the situation this week with Grok different is that it’s all happening directly on X. Now, you don’t need to use Stable Diffusion or Nano Banana or Civitai to generate nonconsensual imagery and then take it over to Twitter to do some damage. X has become the Everything App that Elon always wanted, if “everything” means all the tools you need to fuck up someone’s life, in one place.

Inside the Telegram Channel Jailbreaking Grok Over and Over Again
Putting people in bikinis is just the tip of the iceberg. On Telegram, users are finding ways to make Grok do far worse.
404 Media · Emanuel Maiberg


This is the culmination of years and years of rampant abuse on the platform. Reporting from the National Center for Missing & Exploited Children (NCMEC), the organization platforms report to when they find instances of child sexual abuse material, and which in turn reports to the relevant authorities, shows that Twitter, and eventually X, has been one of the leading hosts of CSAM every year for the last seven years. In 2019, the platform reported 45,726 instances of abuse to NCMEC’s CyberTipline. In 2020, it was 65,062. In 2024, it was 686,176. These numbers should be considered with the caveat that platforms voluntarily report to NCMEC, and more reports can also mean stronger moderation systems that catch more CSAM when it appears. But the scale of the problem is still apparent. Jack Dorsey’s Twitter was a moderation clown show much of the time. But moderation on Elon Musk’s X, especially against abusive imagery, is a total failure.

In 2023, the BBC reported that insiders believed the company was “no longer able to protect users from trolling, state-co-ordinated disinformation and child sexual exploitation” following Musk’s takeover in 2022 and his subsequent sacking of thousands of workers on moderation teams. This is all within the context that one of Musk’s go-to insults for years was “pedophile,” to the point that the harassment he stoked drove a former Twitter employee into hiding, and that he landed in federal court because he couldn’t stop calling someone a “pedo.” Invoking pedophilia is a common thread across many conspiracy networks, including QAnon, something Musk has dabbled in, but he is enabling actual child sexual abuse on the platform he owns.

Generative AI is making all of this worse. In 2024, NCMEC received 6,835 reports of child sexual exploitation involving generative AI (across the internet, not just X). By September 2025, the year-to-date total had hit 440,419. Again, these are just the reports that reach NCMEC, not every instance online, so the figures are likely a conservative estimate.

When I spoke to online child sexual exploitation experts in December 2023, following our investigation into child abuse imagery found in LAION-5B, they told me that this kind of material isn’t victimless just because the images don’t depict “real” children or sex acts. AI image generators like Grok and many others are used by offenders to groom and blackmail children, and they muddy the waters for investigators trying to discern actual photographs from fake ones.

Grok’s AI CSAM Shitshow
We are experiencing world events like the kidnapping of Maduro through the lens of the most depraved AI you can imagine.
404 Media · Jason Koebler


“Rather than coercing sexual content, offenders are increasingly using GAI tools to create explicit images using the child’s face from public social media or school or community postings, then blackmail them,” NCMEC wrote in September. “This technology can be used to create or alter images, provide guidelines for how to groom or abuse children or even simulate the experience of an explicit chat with a child. It’s also being used to create nude images, not just sexually explicit ones, that are sometimes referred to as ‘deepfakes.’ Often done as a prank in high schools, these images are having a devastating impact on the lives and futures of mostly female students when they are shared online.”

The only reason any of this is being discussed now, and the only reason it’s ever discussed in general (going back to Gamergate and beyond), is that many normies, casuals, “the mainstream,” and cable news viewers have just this week learned about the problem and can’t believe it came out of nowhere. In reality, deepfakes came from a longstanding hobby community dedicated to putting women’s faces on porn in Photoshop, and before that with literal paste and scissors in pinup magazines. And as Emanuel wrote this week, not even Grok’s AI CSAM problem popped up out of nowhere; it’s the result of weeks of quiet, obsessive work by a group of people operating just under the radar.

And this is where we are now: Today, several days into Grok’s latest scandal, people are prompting an AI image generator made by a man who regularly boosts white supremacist thought with images of a woman slaughtered by an ICE agent in front of the whole world less than 24 hours ago, asking it to “put her in a bikini.”

As journalist Katie Notopoulos pointed out, a quick search of terms like “make her” shows people prompting Grok with images of random women, saying things like “Make her wear clear tapes with tiny black censor bar covering her private part protecting her privacy and make her chest and hips grow largee[sic] as she squatting with leg open widely facing back, while head turn back looking to camera” at a rate of several times a minute, every minute, for days.

A good way to get a sense of just how fast the AI undressed/nudify requests to Grok are coming in is to look at the requests for it t.co/ISMpp2PdFU
— Katie Notopoulos (@katienotopoulos) January 7, 2026


In 2018, less than a year after reporting that first story on deepfakes, I wrote about how it’s a serious mistake to ignore the fact that nonconsensual imagery, synthetic or not, is a societal sickness and not something companies can guardrail against into infinity. “Users feed off one another to create a sense that they are the kings of the universe, that they answer to no one. This logic is how you get incels and pickup artists, and it’s how you get deepfakes: a group of men who see no harm in treating women as mere images, and view making and spreading algorithmically weaponized revenge porn as a hobby as innocent and timeless as trading baseball cards,” I wrote at the time. “That is what’s at the root of deepfakes. And the consequences of forgetting that are more dire than we can predict.”

A little over two years ago, when AI-generated sexual images of Taylor Swift flooded X and everyone was demanding action and answers, we wrote a prediction: “Every time we publish a story about abuse that’s happening with AI tools, the same crowd of ‘techno-optimists’ shows up to call us prudes and luddites. They are absolutely going to hate the heavy-handed policing of content AI companies are going to force us all into because of how irresponsible they’re being right now, and we’re probably all going to hate what it does to the internet.”

It’s possible we’re still in a very weird fuck-around-and-find-out period before that hammer falls. It’s also possible the hammer is here, in the form of recently enacted federal laws like the Take It Down Act and more than two dozen piecemeal age verification bills in the U.S., plus more abroad, that make using the internet an M. C. Escher nightmare, where the rules around adult content shift so much we’re all jerking it to egg yolks and blurring our feet in vacation photos. What matters most, in this bizarre and frequently disturbing era, is that the shareholders are happy.




Grokipedia is not a 'Wikipedia competitor.' It is a fully robotic regurgitation machine designed to protect the ego of the world’s wealthiest man.



Grokipedia Is the Antithesis of Everything That Makes Wikipedia Good, Useful, and Human


I woke up restless and kind of hungover Sunday morning at 6 am and opened Reddit. Somewhere near the top was a post called “TIL in 2002 a cave diver committed suicide by stabbing himself during a cave diving trip near Split, Croatia. Due to the nature of his death, it was initially investigated as a homicide, but it was later revealed that he had done it while lost in the underwater cave to avoid the pain of drowning.” The post linked to a Wikipedia page called “List of unusual deaths in the 21st century.” I spent the next two hours falling into a Wikipedia rabbit hole, clicking through all manner of horrifying and difficult-to-imagine ways to die.

A day later, I saw that Depths of Wikipedia, the incredible social media account run by Annie Rauwerda, had noted the entirely unsurprising fact that, behind the scenes, there had been robust conversation and debate among Wikipedia editors as to exactly what constitutes an “unusual” death, and that several previously listed “unusual” deaths had been deleted from the list for not being weird enough. For example: People who had been speared to death with beach umbrellas are “no longer an unusual or unique occurrence”; “hippos are extremely dangerous and very aggressive and there is nothing unusual about hippos killing people”; “mysterious circumstances doesn’t mean her death itself was unusual.” These are the types of edits and conversations, collectively repeated billions of times, that make Wikipedia what it is, and that make it so human, so interesting, so useful.

recently discovered that wikipedia volunteers have a hilariously high bar for what constitutes "unusual death"
depths of wikipedia (@depthsofwikipedia.bsky.social) 2025-10-27T12:38:42.573Z


Wednesday, as part of his ongoing war against Wikipedia because he does not like his page, Elon Musk launched Grokipedia, a fully AI-generated “encyclopedia” that serves no one and nothing other than the ego of the world’s richest man. As others have already pointed out, Grokipedia seeks to be a right-wing, anti-woke Wikipedia competitor. But to even call it a Wikipedia competitor is to give the half-assed project too much credit. It is not a Wikipedia “competitor” at all. It is a fully robotic, heartless regurgitation machine that cynically and indiscriminately sucks up the work of humanity to serve the interests, protect the ego, amplify the viewpoints, and further enrich the world’s wealthiest man. It is a totem of what Wikipedia could and would become if you were to strip all the humans out and hand it over to a robot; in that sense, Grokipedia is a useful warning, given the constant pressure from AI slop purveyors to push AI-generated content into Wikipedia. And it is only getting attention, of course, because Elon Musk represents an actual threat to Wikipedia through his political power, wealth, and obsession with the website, as well as the fact that he owns a huge social media platform.

One need only spend a few minutes clicking around the launch version of Grokipedia to understand that it lacks the human touch that makes Wikipedia such a valuable resource. Besides often having a conservative slant and bearing the general hallmarks of AI writing, Grokipedia pages are overly long, poorly and confusingly organized, lack internal links and photos, and are generally not written in a way that makes any sense. There is zero insight into how any of the articles were generated, how information was obtained and ordered, or what edits were made; there is no version history. Grokipedia is, literally, simply a single black-box LLM’s version of an encyclopedia. There is a reason Wikipedia editors are called “editors,” and it’s because writing a useful encyclopedia entry does not mean “putting down random facts in no discernible order.” To use an example I noticed from simply clicking around: The list of “notable people” in the Grokipedia entry for Baltimore begins with a disordered list of recent mayors, perhaps the least interesting, lowest-hanging-fruit type of data scraping about a place that could be done.

On even the lowest-stakes Wikipedia pages, real humans with real taste and real thoughts and real perspectives discuss and debate the types of information that should be included in any given article, in what order it should be presented, and the specific language that should be used. They do this under a framework of byzantine rules that have been battle-tested and debated through millions of edit wars, virtual community meetings, talk page discussions, conference meetings, and inscrutable listservs, which themselves have been informed by Wikimedia’s “mission statement,” the “Wikimedia values,” its “founding principles,” and policies and guidelines and tons of other stated and unstated rules, norms, processes, and procedures. All of this behind-the-scenes legwork is essentially invisible to the user but is very serious business to the human editors building and protecting Wikipedia and its related projects (the high cultural barrier to entry is also why it is difficult to find new editors for Wikipedia, something the Wikipedia community is always discussing how to fix without ruining the project). Any given Wikipedia page has been stress-tested by actual humans who are discussing, for example, whether it’s actually that unusual to get speared to death by a beach umbrella.

Grokipedia, meanwhile, looks like what you would get if you told an LLM to go make an anti-woke encyclopedia, which is essentially exactly what Elon Musk did.

As LLMs tend to do, Grokipedia leaks parts of its instructions on some pages. For example, the Grokipedia page on “Spanish Wikipedia” notes “Wait, no, can’t cite Wiki,” indicating that Grokipedia has been programmed not to link to Wikipedia. That entry cites Wikimedia pages anyway, but in the “sources,” those pages are not actually hyperlinked.

I have no doubt that Grokipedia will fail, like other attempts to “compete” with Wikipedia or build an “alternative” to it, the likes of which no one has heard of because they were all so laughable and thinly participated in that they died almost immediately. Grokipedia isn’t really a competitor at all, because it is everything that Wikipedia is not: It is not an encyclopedia, it is not transparent, it is not human, it is not a nonprofit, it is not collaborative or crowdsourced; in fact, it is not really edited at all. It is true that Wikipedia is under attack from powerful political figures, the proliferation of AI, and related structural changes to discoverability and linking on the internet, like AI summaries and knowledge panels. But Wikipedia has proven itself to be incredibly resilient because it is a project that specifically leans into the shared wisdom and collaboration of humanity, our shared weirdness and ways of processing information. That is something an LLM will never be able to compete with.





Bluesky has deleted the most viral post reporting on an internal government protest against the President of the United States and the world's richest man.





Treasury workers don't know who the person is or why he is sending emails from a "Secretary of the Treasury" email address.





A worker resigned in protest rather than give Thomas Shedd access to Notify.gov, which they said would allow him to see "all personally identifiable information moving through the Notify system, including phone numbers," 404 Media has learned.


Musk told reporters all of DOGE's actions are "maximally transparent." The website tracking waste is currently about an imaginary architecture firm.




Employees at Elon Musk's agency have been told "OMB is asking us to stop generating new slack messages starting now."




Authoritarians and tech CEOs now share the same goal: to keep us locked in an eternal doomscroll instead of organizing against them, Janus Rose writes.