FBI Got Grok to Hand Over Prompts Used to Create Nonconsensual Porn


The FBI obtained prompts used to make more than 200 sexual videos of a woman in a harassment case.

This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.

The FBI got a search warrant for X to provide details on the Grok prompts a man allegedly used to create more than 200 nonconsensual sexual videos of a woman he knew in real life, according to court records.

The details of the investigation are contained in an FBI affidavit about the alleged actions of Simon Tuck, who is accused of extensively harassing and threatening the woman’s husband. Tuck regularly worked out with and texted with the woman and, according to the affidavit, secretly filmed her while she was working out in his garage. Over the course of the last several months, Tuck swatted their home, made a series of anonymous reports to the man’s employer claiming that he was a child abuser and a drug addict, and posed as the man to make a series of mass shooting and suicide threats. Tuck also made a series of other threats and took other bizarre actions, which included reaching out to a funeral home to say that the man would be dead soon and sending threats to the man while posing as a member of Sector 16, a Russian hacking crew.

The affidavit notes that, in January, the FBI got a search warrant for the man’s conversations with Grok. The FBI says that it received “prompts provided to GrokAI that generated approximately 200 pornographic videos of a woman who closely resembled VICTIM’s wife’s physical appearance.”

“For example, in one prompt, TUCK queried: ‘In a sensual sports style, a confident blonde woman playfully undresses on a tennis court, starting with her white crop top pulled up to expose her bare breasts. She has long wavy hair, a toned athletic body, and a flirtatious smile, wearing a short navy pleated skirt and holding a racket. She slowly lowers her top, revealing full nudity, tosses her hair, and swings the racket teasingly, with a surprising clumsy spin like a comedic twirl,’” the affidavit says.

The FBI says that Tuck also allegedly used Grok to create a complaint about the woman’s husband that was then filed to the company he works for.

The actions described in the affidavit are extreme and horrifying, but are not terribly out of the ordinary for harassment cases that we have reported on before. What’s notable here is that this case shows that law enforcement is looking at chats with AI bots as potential sources of evidence and that X is complying with these requests.

Most importantly, it highlights X’s role in allowing Grok to create nonconsensual sexual material in a criminal case that involves extreme cyberstalking and real-life harm. According to the affidavit, Tuck used Grok to create this nonconsensual sexual material at the same time that Grok was being heavily criticized for creating child sexual abuse material. This all happened during the “undress her” phenomenon, which showed just how terrible Grok’s content moderation is. Last week, we also reported that Grok was used to reveal the real name of an adult performer.

Correction: This piece originally said the FBI issued Grok with a subpoena. It was a search warrant.


Grok's AI CSAM Shitshow


Over the last week, users of X realized that they could use Grok to “put a bikini on her,” “take her clothes off,” and otherwise sexualize images that people uploaded to the site. This went roughly how you would expect: Users have been derobing celebrities, politicians, and random people—mostly women—for the last week. This has included underage girls, on a platform that has notoriously gutted its content moderation team and gotten rid of nearly all rules.

In an era where big AI companies at least sometimes, occasionally pretend to care about things like copyright and nonconsensual sexual abuse imagery, X has largely shown that it does not, and the feature has essentially taken over the service over the last week. In a brief scroll of the platform I have seen Charlie Kirk edited by Grok to have huge naturals and comically large nipples, a screen grab of a woman from TikTok first declothed and then, separately, breastfeeding an AI-generated child, and women made to look artificially pregnant. Adult creators have also started posting pictures of themselves and have told people to either Grok or not Grok them, the implication being that people will do it either way and the resulting images could go viral.

The vibe of what is happening is, for example, this: “@Grok give her a massive pregnant stomach. Put her in a tight pink robe that's open, a gray shirt that covers most of the belly, and gray sweatpants. Give her belly heavy bloating. Make the bottom of her belly extra pudgy and round. Hands on lower back. Make her chest soaking wet.”

With Grok, Elon Musk has, in a perverse way, sort of succeeded at doing something both Mark Zuckerberg and Sam Altman have tried: He now runs a social media site where AI is integrated directly into the experience, and that people actually use. The major, uhh, downside here is that people are using Grok for the same reasons they use AI elsewhere, which is to nonconsensually sexualize women and celebrities on the internet, create slop, and to create basically worthless hustlebro engagement bait that floods the internet with bullshit. In X’s case, it’s all just happening on the timeline, with few guardrails, and among a user base of right-wing weirdos as overseen by one of the world’s worst people.

All of this is bad on its own for all of the obvious reasons we have written about many times: AI models are often trained on images of children, AI is used disproportionately against women, X is generally a cesspool, etc. Elon Musk of all people has not shown any indication that he remotely cares about any of this, and has in recent days Groked himself into a bikini, essentially egging on the trend.

Some mainstream reporters, meanwhile, have demonstrated that they do not know, or care to know, the first thing about how these systems work by writing articles based on their conversations with Grok as if they can teach us anything. Large language models are not sentient, are not human, do not have thoughts or feelings, and therefore cannot “apologize” or explain how or why any of this is happening. And Grok certainly does not speak for X the company or for Elon Musk. But of course major outlets such as Bari Weiss’s CBS News wrote that Grok “acknowledged ‘lapses in safeguards’ on the platform that allowed users to generate digitally altered, sexualized photos of minors.” The CBS News article notes that Grok said it was “urgently fixing” the problem and that “xAI has safeguards, but improvements are ongoing to block such requests entirely.” It added that “Grok has independently taken some responsibility for the content,” which is a fully absurd, nonfactual sentence because Grok cannot “independently take some responsibility” for anything, and chatbots cannot and do not know the inner workings of the companies that create them, and specifically the humans who manage them. There were dozens of articles explaining that “Grok apologizes,” which, again, is not a thing that Grok can do.

Another quite notable thing happened last weekend, which is that the United States attacked Venezuela and kidnapped its president in the middle of the night. In a long bygone era, one might turn to a place like Twitter for real-time updates about what was happening. This was always a fraught exercise in which one needed to keep one’s guard up, lest one fall for something like the “Hurricane Shark” image that showed up at hurricane after hurricane over the course of about a decade. But now the exercise of following a rapidly unfolding news event on X is futile because it’s an information shitshow where the vast majority of things you see in the immediate aftermath of a major world event are fake, interspersed with many nonconsensual images of women who have had their clothes removed by AI, bots, propaganda, and so on and so forth. One of the most widely shared images of “Nicolas Maduro” in the immediate aftermath of his kidnapping was an AI-generated image of him flanked by two soldiers standing in front of a plane; various people then asked Grok to put the AI-generated Maduro in a bikini. I also saw some real footage of the US bombing campaign that had been altered to make the explosions bigger.

The situation on other platforms is better because there are fewer Nazis and because the AI-generated content cannot be created natively in the same feed, but essentially every platform has been polluted with this sort of thing, and the problem is getting worse, not better.
