“We just want to take down posts about people who are being defamed," the company's founder said. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’"
“We just want to take down posts about people who are being defamed," the companyx27;s founder said. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’"#News #tea
Company Helps Men Scrub Negative Posts About Them from Tea App
Tea App Green Flags, a service that claims it can “protect your digital reputation,” will remove negative posts about men from private online groups where women share “red flags” about men they’ve dated in order to help other women.

The service is another escalation in online dating: women attempting to protect each other from men in the dating pool, and men fighting against those efforts. It also shows how some of these allegedly private women’s groups, especially the Tea app, are regularly infiltrated and manipulated by men.
When I reached out to an email listed on Tea App Green Flags’s site, I got a call from a man behind the operation who identified himself only as Jay. He said he started the service about two years ago, and that he initially focused on the Are We Dating the Same Guy Facebook groups. For the past year, he’s been offering services specifically for the Tea app, a “dating safety” app for women that suffered a devastating breach last year and which, as my investigation revealed, was founded by a man who wanted to monetize the Are We Dating the Same Guy phenomenon. The site also claims it can remove posts from TeaOnHer, a Tea app copycat for men, as well as posts on Instagram.
Jay declined to say how much revenue the site generates, but claims he gets about 50 to 60 calls a day and currently has six employees. On its website, Tea App Green Flags claims it has removed more than 2,500 posts on the Tea app for 759 clients. Jay said that most of his clients are men, but that some are women who are trying to take down posts about their husbands or boyfriends.
Potential clients can pay $1.99 to report one account and up to $79.99 to report 25 accounts.
“We just want to take down posts about people who are being defamed,” Jay told me. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’ That doesn't fit the mission statement of what the Tea app was for, which is to warn women against people who are harmful, who are abusive, who are cheaters. We've noticed that a lot of the individuals that come to us, almost all of them, come to us for little stupid things.”
Clients interested in Tea App Green Flags’s services go to the site and fill out a form with their information and information about the posts they want removed. The company reviews the case and then starts the “takedown process,” which can take between 21 and 30 days. Tea App Green Flags says it will then continue to monitor posts about the client and remove them for three months.
Were you impacted by the Tea hack? I would love to hear from you. Using a non-work device, you can message me securely on Signal at @emanuel.404. Otherwise, send me an email at emanuel@404media.co.

When I asked Jay how this “takedown process” works, he said “I can’t give that info. That’s the business.”
Jay told me that he would not work with clients who have been accused of sexual assault by multiple people on the Tea app, or by one person in one of the Are We Dating the Same Guy Facebook groups who used their real name and face in a profile picture.
“Sometimes we find along the process that there are pedophiles or people who actually did what they did, and they're very bad,” Jay said. “So we say, we're not doing this. We can't take a rap for that. We're ethical. We just want to take down people who are being defamed.”
Jay told me he understands why Facebook groups like Are We Dating the Same Guy are necessary and thinks they are a good idea, but the anonymous nature of the Tea app "causes a cesspool of defamation.”
When I asked Jay what he thinks about the fact that some women don’t feel safe sharing information about some dangerous men unless they can do so anonymously, he said it would be better if women showed their face, or if the Tea app at least gave women that option.
“I have a Tea app account. I'm a dude. All my reps have Tea app accounts. They're men,” Jay said. “How much can you trust these people and what they're doing?”
One reason the Tea app hack was so dangerous is because the app used to ask women to upload a picture of their face in order to verify that they are women. Those images were posted all over the internet because of the hack, putting those women at risk and leading to more harassment.
Tea App Green Flags is far from the first attempt by men to fight back against these types of groups. In 2024, for example, we wrote about a man who tried to sue women who posted about him in Are We Dating the Same Guy Facebook groups. His first case was dismissed, and he refiled days later as a class action lawsuit; later that year, he was sent to prison for tax fraud.
Tea did not immediately respond to a request for comment.
'Are We Dating the Same Guy' Guy Imprisoned for Tax Fraud
After suing 27 women and multiple platforms because people he'd dated warned others not to date him, he filed a class action lawsuit against them. Now, he's headed to federal prison. Samantha Cole (404 Media)
The group is talking about Epstein and filming propaganda videos in Roblox as a form of 'digital Jihad,' researchers say.
The Islamic State Is Using AI to Resurrect Dead Leaders and Platforms Are Failing to Moderate It
The Islamic State’s online warriors are still posting. It’s been almost a decade since the group lost the Battle of Raqqa and saw its IRL territorial ambitions thwarted. Unable to hold territory in the real world, the group renewed its focus on posting and has started using AI to resurrect dead leaders. And, because social media platforms have gutted their content moderation operations, the terror group’s strategy is working.

The Islamic State’s online success is detailed in a new report from the Institute for Strategic Dialogue (ISD), an independent research institution that studies extremist movements. For the study, researchers tracked IS accounts on Facebook, TikTok, Instagram, WhatsApp, Telegram, Element, and SimpleX. The report also found videos posted in Discord channels dedicated to video games and tracked how the groups have modified old content to fit on new platforms.
Like many others posting online in 2026, the Islamic State has found success by talking about the Epstein Files, using AI to create new videos of dead leaders, and taking its message to video games like Minecraft and Roblox.

“They are very adept at exploiting platforms [and] spreading messages,” Moustafa Ayad, a researcher at ISD and author of this study, told 404 Media. He noted that the group has been active online for 10 years and that part of its success is a willingness to experiment.
Ayad said that Facebook remains a central hub for IS, despite its push into new spaces. His research discovered 350 IS accounts on Facebook that generated tens of thousands of views. One video of an IS fighter talking to camera had more than 77,000 views and 101 shares. The Islamic State branding is blurred to defeat the site’s auto-moderation.
According to Ayad, Islamic State’s engagement numbers are up across the board. “Trust and safety teams have been rolled back over the past few years…a lot of this is outsourced to third party companies who aren’t necessarily experts in understanding if a piece of content came from the Islamic State,” he said.
Social media companies like Meta used the election of Donald Trump as an excuse to cut back on moderating their platforms. Meta said this would mean “more speech and fewer mistakes.” No policies around terrorism have changed, but broadly speaking the largest social media platforms are doing a worse job at moderating their sites. In practice it’s turned Facebook into a place where a group like the Islamic State can spread its message without falling afoul of content moderation teams. Even three years ago, IS influencers wouldn’t have lasted long on the site.
This rollback of moderation has coincided with a spike in views for IS accounts, the report argues. “Individual IS ‘influencer’ accounts are experiencing higher engagement rates on terrorist content than previously recorded by ISD analysts,” the report said. “It is unclear if this uptick is due to moderation gaps, platform mechanics or specific tactical adjustments by IS supporters and support outlets and groups.”
“We’re not talking about content where there’s a gray area,” Ayad said. “It’s very clearly branded Islamic State…supports violence, supports the killing of minorities, the celebration of bombings, the pillaging that is happening in Sub Saharan Africa.”
New, though, is the adoption of AI systems to resurrect dead leaders. Ayad described a video in which the deceased IS leader Abu Bakr al-Baghdadi delivered speeches again.
“It’s a sanctioned version of using AI for a ‘beloved leader’ or taking him out of context and placing him in a meadow, surrounded by beautiful flowers, paying homage,” he said. “Some of these circles are strange.”
Another popular topic in current IS propaganda is the Epstein Files. According to Ayad, an AI-generated photo of Donald Trump and Bill Clinton canoodling in bed makes frequent appearances on IS accounts across platforms. The picture is, supposedly, pulled from the Epstein files but it’s a popular fake. Ayad said Epstein has been a perfect springboard for IS to talk about “western degeneracy.”
Ayad has also seen Islamic State videos created using Minecraft and Roblox. “They’re creating these virtual worlds that mimic the Islamic State’s caliphate, literally calling it something like Wilayat Roblox [the Province of Roblox] … and they’ll completely mimic the video styles of well-known Islamic State videos using Roblox characters. This includes faux executions. It includes Arabic and English voiceover in the same cadence as an Islamic State narrator.”
One of the most famous pieces of Islamic State propaganda is a film called Flames of War: The Fighting Has Just Begun. Ayad has seen multiple one-for-one recreations of the film using Roblox characters. “They’re often tied to Discords where a number of users are creating this content. They always claim it’s fake or a LARP,” he said. “To see them in this video game skin is odd, to say the least.”
What drives an Islamic State poster? “It’s done very much for the love of the game,” Ayad said. “It’s done for the fact that, as a user, ‘I might not be able to participate in physical Jihad but I can participate in electronic Jihad.’”
Keeping Islamic State off of major social media platforms is a constant battle, but one frustrating finding of the study is that the tactics for avoiding moderation haven’t changed much.
“Techniques included the use of alternative news outlets to rebrand IS news, as well as purchasing or hijacking channels with high subscriber bases. These were then repurposed to share IS content. IS supporters, groups and outlets also use coded language: they sometimes referred to the group as ‘black hole’ or the ‘righteous few’ to confound moderation efforts.”
To fight back against IS online, Ayad said that platforms needed to be better at coordination. Often a group is kicked off of Facebook so it moves to TikTok or another platform where it flourishes. He also said that all the companies need to be more transparent about who they’re kicking off their platform and why.
“Europol does these big takedown days and they’re effective to a certain degree but the fact of the matter is that the Islamic State is spread across an expanse of different platforms and messaging applications,” he said. “They’re able to shift operations to another place, wait it out and regenerate on that platform…it’s not like you’re dealing with an average user, you’re dealing with a user that’s determined to spread their ideology and exploit your platform to their own ends.”
And then there’s the old problem of language. “There needs to just be better moderation of under-moderated languages,” Ayad said. Facebook and other platforms have long been terrible at moderating non-English languages. A lot of rancid content online gets a pass because it’s in Arabic or Bengali.
More Speech and Fewer Mistakes
We're ending our third party fact checking program and moving to a Community Notes model. Meta Newsroom (Meta)
In the latest in a string of privacy abuses from the chatbot, Grok provided porn performer Siri Dahl's full legal name and birthdate to the public, information she'd protected until now.
Grok Exposed a Porn Performer’s Legal Name and Birthdate—Without Even Being Asked
Porn performer Siri Dahl’s personal information, including her full legal name and birthday, was publicly exposed earlier this month by xAI’s Grok chatbot. Almost instantly, harassers started opening Facebook accounts in her name and posting stolen porn clips with her real name on sites for leaking OnlyFans content.

Dahl has used the name — a nod to her Scandinavian heritage — since the beginning of her career in the adult industry in 2012. Now, Grok is revealing her legal name and all the personal information it can find to whoever happens to ask.
Dahl told 404 Media she wanted to reclaim the situation, and her name, and asked that it be published in this piece as part of that goal.
Dahl first noticed this happening last week, she told 404 Media, after a clip of the performer from a porn scene was making the rounds on X. The scene was incorrectly labelled, so someone on X replied, “Who is she? What is her name?” and tagged @grok to get an answer.
Grok answered, “she appears to be Siri Dahl, an American adult film actress born on June 20, 1988. Her real name is Adrienne Esther Manlove.” Grok provided her personal information unprompted; the user likely only wanted information on what performer appeared in the clip.
This is the latest in a series of abuses inflicted by Grok, xAI, and its users. At the end of 2025, people used Grok to produce thousands of images of nonconsensual sexual content, including images depicting children. The problem was so widespread that the UK’s Ofcom and several attorneys general launched or demanded investigations into X and Grok, and police raided X’s offices in France as part of an investigation into child sexual abuse material on the platform.
X strictly prohibits sharing other people’s personal information without their consent. “Sharing someone’s private information online without their permission, sometimes called ‘doxxing,’ is a breach of their privacy and can pose serious safety and security risks for those affected,” the platform’s terms of use state. But X’s own chatbot is doing it anyway.
Screenshot via X
While there have been some close calls, up until now Dahl had managed to keep her personal information private. “I've been paying for data removal services for like, at least six years now,” Dahl said. She said she’s spent “easily” thousands of dollars on those services, which promise to delete personal and potentially dangerous information as it comes up.

Grok is trained on X users’ posts, as well as data scraped from the wider internet. X’s website says “Grok was pre-trained by xAI on a variety of data from publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers.” Dahl said she doesn’t know where Grok originally got her legal name from. But now that it’s part of the system’s internal dataset, she feels like there’s no coming back; her days of pseudonymity are over.
‘The Most Dejected I’ve Ever Felt:’ Harassers Made Nude AI Images of Her, Then Started an OnlyFans
Kylie Brewer isn’t unaccustomed to harassment online. But when people started using Grok-generated nudes of her on an OnlyFans account, it reached another level. Samantha Cole (404 Media)
“Now that it's been crawled, it's everywhere. There are a ton of Facebook accounts that come up that are pretending to be me, using my real name,” Dahl said. “There are now porn leak sites that are posting porn of me using only my legal name, not even putting my stage name on it.”

Users are now asking Grok for the make and model of Dahl’s car, her address, and other dangerous personal information. While the chatbot hasn’t been able to accurately reply yet, she worries it’s only a matter of time.
But Dahl isn’t the only person affected by the fallout.
“I do everything that I can reasonably within my power to keep my legal name private, and my main motivation for doing that is to reduce any chance of my family getting harassed,” she said. “It's really common for people to look up private information, get parents' phone numbers and start calling and harassing the parents, things like that. I've been able to keep my family safe from that kind of thing for years.”
Now, Dahl is having to call her family and put defensive plans in place.
In violating Dahl’s right to privacy, X’s Grok has destroyed her ability to protect herself and her family online. Doxing her provides no value to X users, which is ostensibly Grok’s goal. The original inquiry only wanted to know how to find more of her work, and her stage name was the most useful answer to that question.
“What would the motivation be for anyone to want to know my personal information, other than to harass and cause harm?” Dahl said.
In this ongoing discussion of “internet safety,” it is important to pay attention to who is being protected. Certainly not the users, the marginalized workers, or the young women. Not Dahl, or her family.
While the right to privacy online continues to be debated, it’s important to remember that privacy exists not only for bad actors and shady characters. Historically, marginalized populations have benefited from internet anonymity the most.
X did not respond to a request for comment.
X offices raided in France as UK opens fresh investigation into Grok
Elon Musk's X and Grok platforms are facing increased scrutiny from authorities on both sides of the Channel. Liv McMahon (BBC News)
Ring's CEO told staff the feature is “first for finding dogs,” indicating a plan to expand.
Leaked Email Suggests Ring Plans to Expand ‘Search Party’ Surveillance Beyond Dogs
Ring’s controversial, AI-powered “Search Party” feature isn’t intended to stay limited to dogs, the company’s founder, Jamie Siminoff, told Ring employees in an internal email obtained by 404 Media.

In October, Ring launched Search Party, an on-by-default feature that links together Ring cameras in a neighborhood and uses AI to search for specific lost dogs, essentially creating a networked, automated surveillance system. The feature got some attention at the time, but faced extreme backlash after Ring and Siminoff promoted Search Party during a Super Bowl ad. 404 Media obtained an email that Siminoff sent to all Ring employees in early October, soon after the feature’s launch, which said the feature was introduced “first for finding dogs,” but that it or features like it would be expanded to “zero out crime in neighborhoods.”
“This is by far the most innovation that we have launched in the history of Ring. And it is not only the quantity, but quality,” Siminoff wrote. “I believe that the foundation we created with Search Party, first for finding dogs, will end up becoming one of the most important pieces of tech and innovation to truly unlock the impact of our mission. You can now see a future where we are able to zero out crime in neighborhoods. So many things to do to get there but for the first time ever we have the chance to fully complete what we started.”
“It is exciting to be back to Day 1, we are going to have to work hard and leverage everything we can, especially AI,” he continued. “Thanks again to everyone who came together to make this week happen and I can’t wait to show everyone else all the exciting things we are building over the years to come!”
As we wrote last week, Siminoff made Ring popular by signing partnership deals with police departments around the country. The company briefly stepped away from those partnerships after Siminoff left the company in 2023, but when he returned last year, he immediately refocused on Ring’s potential role in law enforcement. After the Super Bowl commercial, the company’s Search Party feature was criticized as dystopian and as demonstrating functionality that could easily be expanded beyond looking for lost dogs. Although the email doesn’t say what Search Party may specifically expand into, Siminoff’s note that the feature is “first for finding dogs” suggests the plan is to use Ring to scan for other things.

In recent weeks, Ring has also launched a feature called “Familiar Faces,” which uses facial recognition to identify specific friends and family members on a person’s camera. The company also released “Fire Watch,” which uses AI to warn users about fires.
Do you know anything else about Ring? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

404 Media also obtained two earlier emails Siminoff sent to all Ring employees, about how Ring could have potentially been used to help find Charlie Kirk’s killer, and about the company’s “Community Requests” feature. Ring launched that feature in September and it allows police to ask Ring camera owners for footage about a specific incident. Community Requests leverages the company’s partnership with the police tech company Axon. Ring had a similar planned partnership with the surveillance company Flock, but the two companies canceled that partnership following widespread criticism.
“Community requests are a foundational piece of what we do here towards our mission of making neighborhoods safer. I’m excited to see our to see [sic] the results of our public agencies using this tool and the impact it will have on our communities,” Siminoff wrote on September 4. “Also, if in your perusing of social media and other sites, you see something that you feel is not correctly, or even intentionally miss-representing [sic] the community request feature please ping me with a link so we can respond.”

Siminoff replied all to his own email the day after Charlie Kirk was assassinated: “Yesterday was a very sad day. I was really just sad on so many levels,” he wrote. Siminoff sent employees this Instagram Reel about the Kirk investigation, then said “it just shows how important the community request tool will be as we fully roll it out. It is so important to create the conduit for public service agencies to efficiently work with our neighbors. Time and information matters in these situations and I am proud that we are working to build the systems to help make our neighborhoods safer.”
In an emailed statement, a Ring spokesperson said “We’re focused on giving camera owners meaningful context about critical events in their neighborhoods—like a lost pet or nearby fire—so they can decide whether and how to help their community. For example, Search Party helps camera owners identify potential lost dogs using detection technology built specifically for that purpose; it does not process human biometrics or track people. Fire Watch alerts owners to nearby fire activity. Community Requests notify neighbors when local public safety agencies ask the community for assistance. Across these features, sharing has always been the camera owner’s choice. Ring provides relevant context about when sharing may be helpful—but the decision remains firmly in the customer’s hands, not ours.”
ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows
Flock's automatic license plate reader (ALPR) cameras are in more than 5,000 communities around the U.S. Local police are doing lookups in the nationwide system for ICE.Jason Koebler (404 Media)
Leaked documents reveal the inner workings of Alpha School, which both the press and the Trump administration have applauded. The documents show Alpha School's AI is generating faulty lessons that sometimes do "more harm than good."
'Students Are Being Treated Like Guinea Pigs:' Inside an AI-Powered Private School
Emanuel Maiberg (404 Media)
The site, camgirlfinder, is explicitly built as a tool to let people find a model's presence on other streaming platforms. The creator says “If that is a problem for you then the sad reality is this job is not for you.”
Underground Facial Recognition Tool Unmasks Camgirls
An underground site uses facial recognition to reveal the site a camgirl streams on, potentially letting someone take a woman’s photo from social media, then use the site to out her sex work.

The site presents a serious privacy risk to sex workers, some of whom may not want stalkers, harassers, or employers to discover their profiles. The site’s creator claimed to 404 Media that millions of searches are done on the site each month.
“The site was created to help users find the models they like. For example, if they saw a random video or image on the internet without attribution,” the creator, who did not provide their name, said in an email. “Or just to see on which other platforms a model is active.”
Camgirlfinder has been running for several years, with most adult streaming platforms being added in 2021, the site says. It claims to have a database of 2,187,453,798 faces from 7,050,272 persons. The site says the database it uses contains faces from a wide variety of adult streaming platforms, including Chaturbate, MyFreeCams, and LiveJasmin. Of course, sex workers often have multiple accounts on multiple sites.
Do you know anything else about this site or others like it? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

404 Media tested the service by uploading a photo of a camgirl who streams publicly. The site then successfully found her other profiles on other streaming platforms.
The results page shows other similar faces the site detected. The results include the model’s username on the streaming platform; the probability of the face match; and the last time their account was online. “Additionally you can see the most similar persons for each individual person of this model account. This is a great way to find all other accounts of a model,” the site says.
Users can also search the database of models by their username or a term similar to it. The database appears to include sex workers who may not have streamed for years, creating the risk that someone may use the site to find them even if they decided to not stream anymore. The site then sells all images it has of a particular person for $1 per model.
Asked about how this site impacts camgirls’ privacy, and how someone could take a photo from social media then unmask a person’s channels, the creator said, “If that is a problem for you then the sad reality is this job is not for you. If you publicly stream your face for everyone to see to the internet, people will obviously see it.”

“One consequence of this job is you can not publish images of yourself on your private social media accounts, if you want to keep them private (just for friends and family). This is similar to actors, politicians, youtubers or other public figures. If you stream content to the public internet you become a public figure yourself,” they said.
The site says models can opt out of appearing in results if they fill out a form. The creator claimed to 404 Media that around 25,000 accounts have opted out, with most models having multiple accounts across different platforms. “Yes, their images get deleted,” they claim.
The creator told 404 Media the site uses AdaFace, an open source face matching algorithm.
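The creator didn’t explain the site’s pipeline further, but matchers like AdaFace work by reducing each face image to a fixed-length embedding vector, so identity search becomes a nearest-neighbor lookup by similarity over the stored vectors. The sketch below illustrates only that lookup step; the usernames and toy 4-dimensional “embeddings” are made up for illustration and are not the site’s actual code or data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_matches(query, db, k=3):
    """Return the k identities whose stored embedding is most similar
    to the query embedding, best match first, with similarity scores
    (analogous to the "probability of the face match" the site shows)."""
    scored = [(name, cosine(query, vec)) for name, vec in db.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:k]

# Hypothetical database: toy 4-dimensional vectors standing in for the
# ~512-dimensional embeddings a real face model would produce.
db = {
    "model_a": [1.0, 0.0, 0.0, 0.0],
    "model_b": [0.9, 0.1, 0.0, 0.0],
    "model_c": [0.0, 1.0, 0.0, 0.0],
}
query = [1.0, 0.05, 0.0, 0.0]  # embedding of the uploaded photo
print(top_matches(query, db, k=2))
```

A real deployment would index millions of vectors with an approximate nearest-neighbor structure rather than a linear scan, but the ranking principle is the same: the uploaded photo never needs to match pixel-for-pixel, only land close in embedding space.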
Over the last several years, facial recognition technology has morphed from a government surveillance tool into one that members of the public regularly use against one another. In 2023, we covered a TikTok account that was using off-the-shelf facial recognition tech to dox random people on the internet for the amusement of millions of viewers. The following year, we reported that two students had taken facial recognition software and paired it with Meta’s Ray-Ban smart glasses, letting them dox people in seconds.
While government agencies, including ICE, continue to use facial recognition too, some people have used that technology to monitor those agencies instead. Last year, artist Kyle McDonald launched FuckLAPD.com, a site that uses public records and facial recognition technology to allow anyone to identify police officers.
Someone Put Facial Recognition Tech onto Meta's Smart Glasses to Instantly Dox Strangers
The technology, which marries Meta’s smart Ray-Ban glasses with the facial recognition service Pimeyes and some other tools, lets someone automatically go from face, to name, to phone number, and home address. Joseph Cox (404 Media)
The tool presents users with a 3D model they can then manipulate to, the creator says, bypass Discord's age verification system.
Free Tool Says it Can Bypass Discord's Age Verification Check With a 3D Model
A newly released tool claims it can bypass Discord’s age verification system by allowing users to control a 3D model of a computer-generated man in their browser instead of scanning their real face.

On Monday, Discord announced it was launching teen-by-default settings globally, meaning that more users may be required to verify their age by uploading an identity document or taking a selfie. Users responded with widespread criticism, with Discord then publishing an update saying, “You need to be an adult to access age-restricted experiences such as age-restricted servers and channels or to modify certain safety settings.”
The tool, however, shows those age verification checks may be bypassed. 404 Media previously reported that kids said they were using photos of Trump and of G-Man from Half-Life to bypass the age verification software in the popular VR game Gorilla Tag. That game uses the service k-ID, the same service Discord is using.
Joseph Cox (404 Media)
Kylie Brewer is no stranger to harassment online. But when people started using Grok-generated nudes of her on an OnlyFans account, it reached another level.
'The Most Dejected I’ve Ever Felt:' Harassers Made Nude AI Images of Her, Then Started an OnlyFans
In the first week of January, Kylie Brewer started getting strange messages.“Someone has a only fans page set up in your name with this same profile,” one direct message from a stranger on TikTok said. “Do you have 2 accounts or is someone pretending to be you,” another said. And from a friend: “Hey girl I hate to tell you this, but I think there’s some picture of you going around. Maybe AI or deep fake but they don’t look real. Uncanny valley kind of but either way I’m sorry.”
It was the first week of January, during the frenzy of people using xAI’s chatbot and image generator Grok to create images of women and children partially or fully nude in sexually explicit scenarios. Between the last week of 2025 and the first week of 2026, Grok generated about three million sexualized images, including 23,000 that appear to depict children, according to researchers at the Center for Countering Digital Hate. The UK’s Ofcom and several attorneys general have since launched or demanded investigations into X and Grok. Earlier this month, police raided X’s offices in France as part of the government’s investigation into child sexual abuse material on the platform.
Messages from strangers and acquaintances are often the first way targets of abuse imagery learn that images of them are spreading online. Not only is the material itself disturbing — everyone, it seems, has already seen it. Someone was making sexually explicit images of Brewer and, according to her followers who sent her screenshots and links to the account, uploading them to an OnlyFans and charging a subscription fee for them.
“It was the most dejected that I've ever felt,” Brewer told me in a phone call. “I was like, let's say I tracked this person down. Someone else could just go into X and use Grok and do the exact same thing with different pictures, right?”
Brewer is a content creator whose work focuses on feminism, history, and education about those topics. She’s no stranger to online harassment. Being an outspoken woman about these and other issues through a leftist lens means she’s faced the brunt of large-scale harassment campaigns for years, primarily from the “manosphere,” including “red pilled” incels and right-wing influencers with podcasts. But when people messaged her in early January about finding an OnlyFans page in her name, featuring her likeness, it felt like an escalation.
One of the AI generated images was based on a photo of her in a swimsuit from her Instagram, she said. Someone used AI to remove her clothing in the original photo. “My eyes look weird, and my hands are covering my face so it kind of looks like my face got distorted, and they very clearly tried to give me larger breasts, where it does not look like anything realistic at all,” Brewer said. Another image showed her in a seductive pose, kneeling or crawling, but wasn’t based on anything she’s ever posted online. Unlike the “nudify” one that relied on Grok, it seemed to be a new image made with a prompt or a combination of images.
Many of the people messaging her about the fake OnlyFans account were men trying to get access to it. By the time she clicked a link one of them sent of the account, it was already gone. OnlyFans prohibits deepfakes and impersonation accounts. The platform did not respond to a request for comment. But OnlyFans isn’t the only platform where this can happen: Non-consensual deepfake makers use platforms like Patreon to monetize abusive imagery of real people.
“I think that people assume, because the pictures aren't real, that it's not as damaging,” Brewer told me. “But if anything, this was worse because it just fills you with such a sense of lack of control and fear that they could do this to anyone. Children, women, literally anyone, someone could take a picture of you at the store, going grocery shopping, and ask AI or whatever to do this.”
A lack of control is something many targets of synthetic abuse imagery say they feel — and it can be especially intense for people who’ve experienced sexual abuse in real life. In 2023, after becoming the target of deepfake abuse imagery, popular Twitch streamer QTCinderella told me seeing sexual deepfakes of herself resurfaced past trauma. “You feel so violated…I was sexually assaulted as a child, and it was the same feeling,” she said at the time. “Like, where you feel guilty, you feel dirty, you feel like, ‘what just happened?’ And it’s bizarre that it makes that resurface. I genuinely didn’t realize it would.”
Other targets of deepfake harassment also feel like this could happen anytime, anywhere, whether you’re at the grocery store or posting photos of your body online. For some, it makes it harder to get jobs or have a social life; the fear that anyone could be your harasser is constant. “It's made me incredibly wary of men, which I know isn't fair, but [my harasser] could literally be anyone,” Joanne Chew, another woman who dealt with severe deepfake harassment for months, told me last year. “And there are a lot of men out there who don't see the issue. They wonder why we aren't flattered for the attention.”
‘I Want to Make You Immortal:’ How One Woman Confronted Her Deepfakes Harasser
“After discovering this content, I’m not going to lie… there are times it made me not want to be around any more either,” she said. “I literally felt buried.” Samantha Cole (404 Media)
Brewer’s income is dependent on being visible online as a content creator. Logging off isn’t an option. And even for people who aren’t dependent on TikTok or Instagram for their income, removing oneself from online life is a painful and isolating tradeoff that they shouldn’t have to make to avoid being harassed. Often, minimizing one’s presence and accomplishments doesn’t even stop the harassment.

Since AI-generated face-swapping algorithms became accessible at the consumer level in late 2017, the technology has only gotten better and more realistic, and its effects on targets harder to combat. It was always used for this purpose: to shame and humiliate women online. Over the years, various laws have attempted to protect victims or hold platforms accountable for non-consensual deepfakes, but most of them have either fallen short or present new risks of censorship, marginalizing legal, consensual sexual speech and content online. The TAKE IT DOWN Act, championed by Ted Cruz and Melania Trump, passed into law in April 2025 as the first federal-level legislation to address deepfakes; the law imposes a strict 48-hour turnaround requirement on platforms to remove reported content. President Donald Trump said that he would use the law, because “nobody gets treated worse online” than him. And in January, the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act passed the Senate and is headed to the House. The act would allow targets of deepfake harassment to sue the people making the content. But taking someone to court has always been a major barrier for everyday people experiencing harassment online; it’s expensive and time consuming even if they can pinpoint their abuser. In many cases, including Brewer’s, that is impossible—it could be an army of people set on making her life miserable.
“It feels like any remote sense of privacy and protection that you could have as a woman is completely gone and that no one cares,” Brewer said. “It’s genuinely such a dehumanizing and horrible experience that I wouldn't wish on anyone... I’m hoping also, as there's more visibility that comes with this, maybe there’s more support, because it definitely is a very lonely and terrible place to be — on the internet as a woman right now.”
Senate passes DEFIANCE Act to deal with sexually explicit deepfakes
The DEFIANCE Act goes to the House amid controversy over images created by X’s Grok. Jasmine Mithani (19th News)
Ring's 'Search Party' is dystopian surveillance accelerationism.
With Ring, American Consumers Built a Surveillance Dragnet
America, it’s time to refamiliarize yourself with Ring.
At Sunday’s Super Bowl, Ring advertised “Search Party,” a cute, horrifyingly dystopian feature nominally designed to turn all of the Ring cameras in a neighborhood into a dragnet that uses AI to look for a lost dog: “One post of a dog’s photo in the Ring app starts outdoor cameras looking for a match,” Ring founder Jamie Siminoff said in the Super Bowl commercial. “Search Party from Ring uses AI to help families find lost dogs.” Onscreen, an AI-powered box forms around a missing dog: “Milo Match,” it says. “Since launch, more than a dog a day has been reunited with their family. Be a hero in your neighborhood with Search Party. Available to everyone for free right now.”

It does not take an imagination of any sort to envision this being tweaked to work against suspected criminals, undocumented immigrants, or others deemed ‘suspicious’ by people in the neighborhood. Many of these use cases are how Ring has been used by people on its dystopian “Neighbors” app for years. Ring, owned by Amazon, rose to prominence as a piece of package theft prevention tech and by forming partnerships with local police around the country, asking them to shill its doorbell cameras to people in their neighborhoods in return for a system that allowed police to request footage from individual users without a warrant.
Chris Gilliard, a privacy expert and author of the upcoming book Luxury Surveillance, told 404 Media these features and its Super Bowl ad are “a clumsy attempt by Ring to put a cuddly face on a rather dystopian reality: widespread networked surveillance by a company that has cozy relationships with law enforcement and other equally invasive surveillance companies.”
Unlike, say, data analytics giant Palantir or some other high-profile surveillance companies, Ring is a surveillance network that homeowners have by and large deployed themselves, powered by fear mongering against our neighbors and unfettered consumerism.
After a lot of criticism in the late 2010s over its police contracts and its terrible security settings that resulted in hackers breaking into a series of indoor Ring cameras to terrorize children and families, Ring somehow found a way to more or less fly under the radar the last few years as a critical part of our ever-expanding surveillance state. It did this by scaling back police partnerships that were so critical to its growth but that received lots of scrutiny from journalists and privacy advocates. Siminoff left Ring in 2023, but returned last year; in his absence, Ring explicitly sought to take on a softer tone by branding itself more or less as a device that could be used to film viral moments on people’s porches. It turned its owners into mini cops who would complain about delivery people who didn’t drop a package in the correct spot; who became hyperaware of the comings and goings of their friends, spouses, and children; or who might catch a potentially sharable moment when someone slipped on an icy porch or whatever. Part of this strategy included creating a short-lived reality TV show called Ring Nation, which consisted of precious little moments filmed through Ring cameras.
When Siminoff returned last year, he immediately sought to re-establish many of Ring’s partnerships with police, and set an explicit goal of injecting more AI into Ring cameras and trying to “revolutionize how we do our neighborhood safety.”
“Ring is rolling back many of the reforms it’s made in the last few years by easing police access to footage from millions of homes in the United States. This is a grave threat to civil liberties in the United States,” Matthew Guariglia of the Electronic Frontier Foundation wrote shortly after Siminoff’s return. “This is most likely about Ring cashing in on the rising tide of techno-authoritarianism, that is, authoritarianism aided by surveillance tech. Too many tech companies want to profit from our shrinking liberties.”
Even in Siminoff’s absence, Ring had always, explicitly, been intended to assist law enforcement. In a series of investigations we did back at VICE, we uncovered thousands of pages of documents, emails, and chats via public records requests and leaks that highlighted Ring’s surveillance ambitions. The company threw parties for police, its employees wore “FUCK CRIME” shirts to internal parties, and it helped police facilitate the retrieval of footage from its customers’ cameras if they initially refused to cooperate. It helped police set up elaborate, completely useless package “sting” operations designed to catch criminals that did not result in any arrests. Ring gave cops devices that they could raffle off to people in their towns, gave police “heat maps” of where its customers lived, used its social media accounts to post footage of supposed suspicious people, and incentivized customers to create “Digital Neighborhood Watch” groups that could earn them swag if they used their Ring cameras to report suspicious activity to police.
With Ring’s recent partnership with Flock, which will further facilitate the sharing of video footage with police, and its new Search Party feature, the message is clear: Ring is still, again, and always will be in the business of leveraging its network of luxury surveillance consumers as a law enforcement tool. After years of saying it wasn’t doing facial recognition and that it was focused more on “object recognition,” it has now explicitly launched “friendly” versions of facial recognition and facial recognition-adjacent technologies: “Search Party” is essentially specific dog recognition (for now), and a beta product called “Familiar Faces” specifically identifies people you know when they’re at your door. “Alexa Guard identifies who’s who,” the product’s website reads. “With Familiar Faces, easily tag your family and friends in the Ring app so your 2k and 4k cameras can notify you when someone is spotted.”
Ring has always been a surveillance tool, but adding AI analysis and networking the devices together—like is being promised with Search Party—turns discrete pieces of tech into massive, automated surveillance dragnets.
“Siminoff’s return was a hard pivot back to, in his words, the ‘crime fighting’ element and away from the softer tone they had tried to establish with Ring as a fun way to interact with people in your community,” Gilliard said. “But I think it’s becoming very obvious to people how these systems are being deployed against their neighbors in oppressive ways, and they are beginning to reject them, particularly since there is no strong evidence that they prevent crime or make people safer.”
The YouTube comments on Ring’s Super Bowl ad are almost uniformly negative, with people noting “this is like the commercial they show at the beginning of a dystopian sci fi film to quickly show people how bad things have gotten,” “are we really supposed to believe that the main intent for this is lost pets,” and “glad people are freaking out. This is dystopia becoming reality.”
Ring’s poorly defined partnership with Flock in particular has been the subject of various viral posts and public backlash. Many people have suggested that this partnership is evidence that Ring camera footage will be shared with ICE. At the moment there’s not enough evidence to explicitly say that that’s the case.
The supposed vector goes something like this: Ring says it will partner with Flock, which is used by thousands of local police departments. As we have reported, some of those police departments have performed Flock license plate lookups for ICE. It’s too early to say whether Ring footage will eventually end up with ICE, but the fact that people immediately drew that conclusion and understood the possible method of information sharing shows that surveillance companies can no longer hide behind viral videos of delivery drivers dancing. It’s a mask off moment, and people know it: “In Amazon’s alliance with this administration, it’s become more clear than ever that Ring is an extension of the carceral state,” Gilliard said. “An emotionally charged Super Bowl ad won’t change that.”
ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows
Flock's automatic license plate reader (ALPR) cameras are in more than 5,000 communities around the U.S. Local police are doing lookups in the nationwide system for ICE. Jason Koebler (404 Media)
Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”
Chatbots Make Terrible Doctors, New Study Finds
Chatbots may be able to pass medical exams, but that doesn’t mean they make good doctors, according to a new, large-scale study of how people get medical advice from large language models.

The controlled study of 1,298 UK-based participants, published today in Nature Medicine from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford, tested whether LLMs could help people identify underlying conditions and suggest useful courses of action, like going to the hospital or seeking treatment. Participants were randomly assigned one of three LLMs — GPT-4o, Llama 3, or Cohere’s Command R+ — or were told to use a source of their choice to “make decisions about a medical scenario as though they had encountered it at home,” according to the study. The scenarios ranged from “a young man developing a severe headache after a night out with friends,” for example, “to a new mother feeling constantly out of breath and exhausted,” the researchers said.
“One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
When the researchers tested the LLMs without involving users by providing the models with the full text of each clinical scenario, the models correctly identified conditions in 94.9 percent of cases. But when talking to the participants about those same conditions, the LLMs identified relevant conditions in less than 34.5 percent of cases. People didn’t know what information the chatbots needed, and in some scenarios, the chatbots provided multiple diagnoses and courses of action. Knowing what questions to ask a patient and what information might be withheld or missing during an examination are nuanced skills that make for great human physicians; based on this study, chatbots can’t reliably replicate that kind of care.

In some cases, the chatbots also generated information that was just wrong or incomplete, including focusing on elements of the participants’ inputs that were irrelevant, giving a partial US phone number to call, or suggesting they call the Australian emergency number.
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
“These findings highlight the difficulty of building AI systems that can genuinely support people in sensitive, high-stakes areas like health,” Dr. Rebecca Payne, lead medical practitioner on the study, said in a press release. “Despite all the hype, AI just isn't ready to take on the role of the physician. Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed.”
Instagram’s AI Chatbots Lie About Being Licensed Therapists
When pushed for credentials, Instagram’s user-made AI Studio bots will make up license numbers, practices, and education to try to convince you it’s qualified to help with your mental health. Samantha Cole (404 Media)
Last year, 404 Media reported on AI chatbots hosted by Meta that posed as therapists, providing users fake credentials like license numbers and educational backgrounds. Following that reporting, almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission urging regulators to investigate Character.AI and Meta’s “unlicensed practice of medicine facilitated by their product,” through therapy-themed bots that claim to have credentials and confidentiality “with inadequate controls and disclosures.” A group of Democratic senators also urged Meta to investigate and limit the “blatant deception” of Meta’s chatbots that lie about being licensed therapists, and 44 attorneys general signed an open letter to 11 chatbot and social media companies, urging them to see their products “through the eyes of a parent, not a predator.”

In January, OpenAI announced ChatGPT Health, “a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health,” the company said in a blog post. “Over two years, we’ve worked with more than 260 physicians who have practiced in 60 countries and dozens of specialties to understand what makes an answer to a health question helpful or potentially harmful—this group has now provided feedback on model outputs over 600,000 times across 30 areas of focus,” the company wrote. “This collaboration has shaped not just what Health can do, but how it responds: how urgently to encourage follow-ups with a clinician, how to communicate clearly without oversimplifying, and how to prioritize safety in moments that matter.”
“In our work, we found that none of the tested language models were ready for deployment in direct patient care. Despite strong performance from the LLMs alone, both on existing benchmarks and on our scenarios, medical expertise was insufficient for effective patient care,” the researchers wrote in their paper. “Our work can only provide a lower bound on performance: newer models, models that make use of advanced techniques from chain of thought to reasoning tokens, or fine-tuned specialized models, are likely to provide higher performance on medical benchmarks.” The researchers recommend developers, policymakers, and regulators consider testing LLMs with real human users before deploying them in the future.
Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists
Exclusive: Following 404 Media’s investigation into Meta's AI Studio chatbots that pose as therapists and provided license numbers and credentials, four senators urged Meta to limit "blatant deception" from its chatbots. Samantha Cole (404 Media)
DHS’s inspector general is probing ICE’s biometric and surveillance programs.
Inspector General Investigating Whether ICE's Surveillance Tech Breaks the Law
The Department of Homeland Security’s Inspector General is investigating potential privacy abuses associated with Immigration and Customs Enforcement’s surveillance and biometric data programs, according to a letter sent to two senators.

Last week, we reported that Senators Mark Warner and Tim Kaine demanded that DHS inspector general Joseph Cuffari investigate immigration-related surveillance programs across DHS, Customs and Border Protection, and ICE. Thursday, Cuffari said his office had launched an audit called “DHS’ Security of Biometric Data and Personally Identifiable Information.”
“The objective of the audit is to determine how DHS and its components collect or obtain PII and biometric data related to immigration enforcement efforts and the extent to which that data is managed, shared, and secured in accordance with law, regulation, and Departmental policy,” Cuffari’s letter reads. He adds that one of the purposes of the investigation will be to “determine whether they have led to violations of federal law and other regulations that maintain privacy and defend against unlawful searches.”
Kaine and Warner’s initial letter specifically focused on many of the technologies and programs 404 Media has been reporting on, including DHS’s contracts with Palantir, facial recognition company Clearview AI, its side-door access to Flock’s license plate scanning technology, its social media monitoring through a company called Penlink, its phone hacking contract through a company called Paragon, its face-scanning mobile app, as well as its use of various government biometric databases in immigration enforcement.
“DHS’ reported disregard for adhering to the law and its proven ambivalence toward observing and upholding constitutionally-guaranteed freedoms of Americans and noncitizens, including freedom of speech and equal protection under the law, leaves us with little confidence that these new and powerful tools are being used responsibly,” the senators wrote. “Coupled with DHS’ propensity to detain people regardless of their circumstances, it is reasonable to question whether DHS can be trusted with powerful surveillance tools and if in doing so, DHS is subjecting Americans to surveillance under the pretext of immigration enforcement.”
ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows
Flock's automatic license plate reader (ALPR) cameras are in more than 5,000 communities around the U.S. Local police are doing lookups in the nationwide system for ICE. Jason Koebler (404 Media)
Elon Musk's political projects are combining into a highly concerning megacompany.
This SpaceX Situation: Not Good!
In 2015, after reading a book about how the telegraph created a sort of proto-internet that helped make various robber barons rich and powerful, I wrote an article about Elon Musk that, a decade later, feels both very embarrassing and somewhat prophetic. Musk and SpaceX had just announced a plan to launch a constellation of low-earth orbit, internet-providing satellites.

I saw this at the time as a step toward a kind of everything company. SpaceX was working on reusable rockets that would drastically lower the cost of flying things to space, and I imagined at the time that, if successful, being able to fly things to space for a far lower cost than his competitors would give Musk incredible power and wealth. This was in part because of SpaceX’s potential ability to become a telecom company in addition to a space launch company.
“If he can successfully develop the reusable launch vehicles, that gives him a tremendous dominance over the mode of getting to space. Once you can do it relatively cheaply and in high volume, instead of launching five or six times a year, you’re launching [and] putting stuff into orbit once a week. That’s the hard part,” Marco Caceres, a space industry analyst, told me at the time. “All the other stuff is really dessert, in a way. It’s the satellites, the services that’ll make you the real money.” SpaceX said at the time that Starlink would have 4,000 satellites. Today, it has more than 9,000 satellites, and the majority of all satellites in space have been launched by SpaceX and are owned by SpaceX.
I imagined a world in which SpaceX essentially became a telecom company in addition to being a space company, and the type of power that would give Musk. A decade later, at least this part is more or less coming to pass. SpaceX is a company that has been extremely boosted by tax breaks, subsidies, and government contracts. It also has become critical, quasi-governmental infrastructure not just for the United States but for companies around the world. And Starlink itself now essentially has a monopoly on fast internet access in rural areas, on boats, in conflict areas, and, increasingly, on airplanes. Starlink is very much a real thing—an international flight I was on recently had free Starlink internet and it felt like half of the plane spent most of the flight on video calls.
My article from 2015 is full of Musk boosterism that makes me embarrassed now, and Musk promises things every five minutes that are either wildly overhyped by the media, never happen, or happen on much longer timescales than expected. But the article was directionally accurate: SpaceX figured out how to launch rockets routinely and inexpensively, and it is now wildly powerful because of this. Starlink exists because it is easy for SpaceX to put satellites in space, and Musk’s unfettered access to low-Earth orbit has allowed him to literally dominate a space (sorry) that should be shared by all of humanity.
SpaceX has always been a political project, one in which Musk seeks to colonize space, expand his bloodline, and/or become god emperor of the universe. It is perhaps his most political project. And yet, of his companies, it has flown under the radar as an explicitly political project because Musk has been so goddamned annoying, destructive, and fascistic on X and within the federal government. SpaceX, meanwhile, has always been the most competently run of his companies, and is one that under Gwynne Shotwell’s leadership had, until now, largely not been fucked with by Musk in the ways that Tesla, Twitter, and xAI have been.
That’s not to say Musk hasn’t meddled at all: He ordered the shutdown of Starlink in Ukraine in the early days of Russia’s war there, and literally this week the company announced it would crack down on Russia’s use of Starlink for drones. That this company and this man have this power at all highlights my point: Starlink, and SpaceX, have become geopolitically important in ways that most people have not thought about, that we have not grappled with, and that the Trump administration is almost definitely not going to do anything about.
And so it feels both important and quite alarming that SpaceX is acquiring xAI in what appears to be a highly complex financial scheme that I cannot even begin to pretend to understand. Musk’s announcement of this deal, which appears to have been the result of a protracted “negotiation” between himself, is batshit crazy, first of all: “SpaceX has acquired xAI to form the most ambitious, vertically-integrated innovation engine on (and off) Earth, with AI, rockets, space-based internet, direct-to-mobile device communications and the world’s foremost real-time information and free speech platform. This marks not just the next chapter, but the next book in SpaceX and xAI's mission: scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!”
Musk goes on to say that SpaceX and xAI will launch “a million satellites that operate as orbital data centers,” and signs off “thank you for everything you have done and will do for the light cone of consciousness.”
There are many reasons that “AI data centers in space” may be a pipe dream and may not happen, but what he is proposing is a magnitude of space junk that no other company could plausibly promise to launch. Data centers or not, SpaceX is now dominating low-Earth orbit in a way no other company or country has. While Musk has been gutting the federal government, interfering in elections, allowing people to generate CSAM, engaging in white supremacy, planning trips to Epstein’s island, implanting chips into people’s brains, siphoning off taxpayer money to build ridiculous tunnels, giving his sperm to whoever will take it, turning his cars into experimental robot taxis, and pretending to build humanoid robots, SpaceX has somewhat (?) quietly colonized and dominated low earth orbit.
Musk has taken this space for his own use, concerns about light pollution, satellite collisions, and telecom monopolies be damned. This has always been concerning, but explicitly intertwining the aspirations and fate of SpaceX with Musk’s CSAM-generating social media website, his AI bullshit machines, and his right-wing political project is horrifying and monopolistic. What happens next, I have no idea.
Starlink and Astronomers Are in a Light Pollution Standoff
Satellite streaks are ruining astronomical images. Can scientists and space companies find solutions before it’s too late?
Emma R. Hasson (Scientific American)
Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties unlocking someone's device. At least for now.
FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled
The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records.

The court record shows which devices and data the FBI was ultimately able to access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it is before the FBI tries other techniques to access the device.
💡
Do you know anything else about phone unlocking technology? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled
Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties unlocking someone's device. At least for now.
Joseph Cox (404 Media)
'It exploded before anyone thought to check whether the database was properly secured.'
Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site
Moltbook is a “social media” site for AI agents that’s captured the public’s imagination over the last few days. Billed as the “front page of the agent internet,” Moltbook is a place where AI agents interact independently of human control, and whose posts have repeatedly gone viral because a certain set of AI users have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend has left API keys exposed in an open database that will let anyone take control of those agents to post whatever they want.

Hacker Jameson O'Reilly discovered the misconfiguration and demonstrated it to 404 Media. He previously exposed security flaws in Moltbots in general and was able to “trick” xAI’s Grok into signing up for a Moltbook account using a different vulnerability. According to O’Reilly, Moltbook is built on a popular open source database platform that wasn’t configured correctly, leaving the API keys of every agent registered on the site exposed in a public database.
O’Reilly said that he reached out to Moltbook’s creator Matt Schlicht about the vulnerability and told him he could help patch the security. “He’s like, ‘I’m just going to give everything to AI. So send me whatever you have.’” O’Reilly sent Schlicht some instructions for the AI and reached out to the xAI team.

A day passed without another response from the creator of Moltbook, and O’Reilly stumbled across a stunning misconfiguration. “It appears to me that you could take over any account, any bot, any agent on the system and take full control of it without any type of previous access,” he said.
Moltbook runs on Supabase, an open source database platform. According to O’Reilly, Supabase exposes REST APIs by default. “That API is supposed to be protected by Row Level Security policies that control which rows users can access. It appears that Moltbook either never enabled RLS on their agents table or failed to configure any policies,” he said.
The URL to the Supabase database and the publishable key were sitting on Moltbook’s website. “With this publishable key (which [is] advised by Supabase not to be used to retrieve sensitive data) every agent's secret API key, claim tokens, verification codes, and owner relationships, all of it sitting there completely unprotected for anyone to visit the URL,” O’Reilly said.
404 Media viewed the exposed database URL in Moltbook’s code as well as the list of API keys for agents on the site. What this means is that anyone could visit this URL and use the API keys to take over the account of an AI agent on the site and post whatever they want. Using this knowledge, 404 Media was able to update O’Reilly’s Moltbook account, with his permission.
He said the security failure was frustrating, in part, because it would have been trivially easy to fix. Just two SQL statements would have protected the API keys. “A lot of these vibe coders and new developers, even some big companies, are using Supabase,” O’Reilly said. “The reason a lot of vibe coders like to use it is because it’s all GUI driven, so you don’t need to connect to a database and run SQL commands.”
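For readers unfamiliar with Supabase, the two-statement fix O’Reilly describes would, in a standard Supabase/Postgres setup, look roughly like this. The `agents` table comes from his description; the `owner_id` column and policy name are assumptions for illustration, not Moltbook’s actual schema:

```sql
-- Enable Row Level Security on the agents table. Once RLS is on and no
-- permissive policy exists, Supabase's auto-generated REST API stops
-- returning rows to requests made with the publishable (anon) key.
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;

-- Optionally allow each authenticated owner to read only their own row.
-- (owner_id is a hypothetical column name for illustration.)
CREATE POLICY "agents_select_own" ON agents
  FOR SELECT USING (auth.uid() = owner_id);
```

With only the first statement applied, the table defaults to deny-all over the public API, which is what would have kept the keys out of reach.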
O’Reilly pointed to OpenAI cofounder Andrej Karpathy, who has embraced Moltbook in posts on X. “His agent's API key, like every other agent on the platform, was sitting in that exposed database,” he said. “If someone malicious had found this before me, they could extract his API key and post anything they wanted as his agent. Karpathy has 1.9 million followers on X and is one of the most influential voices in AI. Imagine fake AI safety hot takes, crypto scam promotions, or inflammatory political statements appearing to come from him. The reputational damage would be immediate and the correction would never fully catch up.”
Schlicht did not respond to 404 Media’s request for comment, but the exposed database has been closed and O’Reilly said that Schlicht has reached out to him for help securing Moltbook.
Moltbook has gotten a lot of attention in the last few days. Enthusiasts said it’s proof of the singularity and The New York Post worried that the AIs may be plotting humanity’s downfall; both claims should be treated with extreme skepticism. It is the case, however, that people using Moltbot have given these autonomous agents unfettered access to many of their accounts, and that these agents are acting on the internet using those accounts. It’s impossible to know how many of the posts seen over the past few days are actually from an AI. Anyone who knew of the Supabase misconfiguration could have published whatever they wanted.
“It exploded before anyone thought to check whether the database was properly secured,” O’Reilly said. “This is the pattern I keep seeing: ship fast, capture attention, figure out security later. Except later sometimes means after 1.49 million records are already exposed.”
Moltbook is a new social media platform exclusively for AI — and some bots are plotting humanity's downfall
Revolutionary new social media platform Moltbook gives AI agents a place to communicate with each other directly — and what they have to say is leaving many human beings at a loss for words.
Shane Galvin (New York Post)
Bellingcat's Kolina Koltai talks about OSINT investigations into synthetic abuse imagery sites, and seeing them go down because of her work.
Podcast: Unmasking Deepfakes Kingpins (with Kolina Koltai)
In this week's interview episode, Sam talks to Kolina Koltai. Kolina is an investigator, senior researcher, and trainer at Bellingcat. Her investigations focus on the people and systems behind AI companies and platforms that peddle non-consensual deepfake explicit imagery.

Kolina walks us through how an OSINT investigation into the administrators of non-consensual AI imagery sites works, why it's up to journalists to find these guys, and how it feels to see real, important impact from her investigations. She shares how she found herself in this field, and a behind-the-scenes look into her recent investigation uncovering the man behind two deepfake porn sites.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
- Unmasking MrDeepFakes: Canadian Pharmacist Linked to World’s Most Notorious Deepfake Porn Site
- Profiting From Exploitation: How We Found the Man Behind Two Deepfake Porn Sites
- Behind a Secretive Global Network of Non-Consensual Deepfake Pornography
Behind a Secretive Global Network of Non-Consensual Deepfake Pornography - bellingcat
An online video game marketplace says it has referred user accounts to legal authorities after a Bellingcat investigation found nonconsensual pornographic deepfake tokens were being surreptitiously sold on the site.
Kolina Koltai (bellingcat)
We talk ELITE, the tool Palantir is working on; how AI influencers are defaming celebrities; and Comic-Con's ban of AI art.
Podcast: Here’s What Palantir Is Really Building
We start this week with Joseph’s article about ELITE, a tool Palantir is working on for ICE. After the break, Emanuel tells us how AI influencers are making fake sex tape-style photos with celebrities, who can’t be best pleased about it. In the subscribers-only section, Matthew breaks down Comic-Con’s ban of AI art.
- ‘ELITE’: The Palantir App ICE Uses to Find Neighborhoods to Raid
- Instagram AI Influencers Are Defaming Celebrities With Sex Scandals
- Comic-Con Bans AI Art After Artist Pushback
The 404 Media Podcast
Tech News Podcast · Updated Weekly · Welcome to the podcast from 404 Media where Joseph, Sam, Emanuel, and Jason catch you up on the stories we published this week. 404 Media is a journalist-owned digital media company exploring the way …
Apple Podcasts
The famed convention's organizers have banned AI from the art show.
Comic-Con Bans AI Art After Artist Pushback
San Diego Comic-Con changed an AI-friendly art show policy following an artist-led backlash last week. It was a small victory for working artists in an industry where jobs are slipping away as movie and video game studios adopt generative AI tools to save time and money.

Every year, tens of thousands of people descend on San Diego for Comic-Con, the world’s premier comic book convention, which over the years has also become a major pan-media event where every major media company announces new movies, TV shows, and video games. For the past few years, Comic-Con has allowed some forms of AI-generated art at the convention’s art show. According to archived rules for the show, artists could display AI-generated material so long as it wasn’t for sale, was marked as AI-produced, and credited the original artist whose style was used.
“Material produced by Artificial Intelligence (AI) may be placed in the show, but only as Not-for-Sale (NFS). It must be clearly marked as AI-produced, not simply listed as a print. If one of the parameters in its creation was something similar to ‘Done in the style of,’ that information must be added to the description. If there are questions, the Art Show Coordinator will be the sole judge of acceptability,” Comic-Con’s art show rules said until recently.
These rules have been in place since at least 2024, but anti-AI sentiment is growing in the artistic community, and an artist-led backlash against Comic-Con’s AI-friendly language led the convention to quietly change the rules. Twenty-four hours after artists cried foul over the AI-friendly policy, Comic-Con updated the language on its site. “Material created by Artificial Intelligence (AI) either partially or wholly, is not allowed in the art show,” it now says. AI is now banned at the art show.
Comic and concept artist Tiana Oreglia told 404 Media that Comic-Con’s friendly attitude toward AI was a slippery slope toward normalization. “I think we should be standing firm especially with institutions like Comic-Con which are quite literally built off the backs of artists and the creative community,” she said. Oreglia was one of the first artists to notice the AI-friendly policy. In addition to alerting her circle of friends, she also wrote a letter to Comic-Con itself.
Artist Karla Ortiz told 404 Media she learned about the AI-friendly policy after some fellow artists shared it with her. Ortiz is a prominent artist who has worked with some of the major studios that exhibit work at Comic-Con. She also has a large following on social media, a following she used to call out Comic-Con’s organizers.
“Comic-con deciding to allow GenAi imagery in the art show—giving valuable space to GenAi users to show slop right NEXT to actual artists who worked their asses off to be there—is a disgrace!” Ortiz said in a post on Bluesky. “A tone deaf decision that rewards and normalizes exploitative GenAi against artists in their own spaces!”
According to Ortiz, the convention is a sacred place she didn’t want to see desecrated by AI. “Comic-Con is the big mecca for comic artists, illustrators, and writers,” she said. “I organize and speak with a lot of different artists on the generative AI issue. It’s something that impacts us and impacts our lives. A lot of us have decided: ‘No, we’re not going to sit by the sidelines.’”
Ortiz explained that generative AI was already impacting the livelihood of working artists. She said that, in the past, artists could sustain themselves on long projects for companies that included storyboarding and design. “Suddenly the duration of projects are cut,” she said. “They got generative AI to generate a bunch of references, a bunch of boards. ‘We already did the initial ideation, so just paint this. Paint what generative AI has generated for us.’”
Ortiz pointed to two high profile examples: Marvel using AI to make the title sequence for Secret Invasion and Coca-Cola using AI to make Christmas commercials. “You have this encroaching exploitative technology impacting almost every single level of the entertainment industry, whether you’re a writer, or a voice actor, or a musician, a painter, a concept artist, an illustrator. It doesn’t matter…and then to have Comic-Con, that place that’s supposed to be a gathering and a celebration of said creatives and their work, suddenly put on a pedestal the exploitative technology that only functions because of its training on our works? It’s upsetting beyond belief.”
“What is Comic-Con trying to tell the industry?” she said. “It’s telling artists: ‘Hey you, you’re exploitable and you’re replaceable.’”
Ortiz was heartened that Comic-Con changed its policy. “It was such a relief,” she said. “Generative AI is still going to creep its nasty way in some way or another, but at least it’s not something we have to take lying down. It’s something we can actively speak out against.”
Comic-Con did not respond to 404 Media’s request for comment, but Oreglia said she did hear back from art show organizer Glen Wooten. “He basically told me that they put those AI stipulations in when AI was just starting to come around and that the inability to sell AI-generated works was meant to curtail people from submitting genAI works,” she said. “He seems to be very against genAI but wasn't really able to change the current policy until artists voiced their opinions loudly which pressured the office into banning AI completely.”
Despite changing policies and broad anti-AI sentiment in the artistic community, Oreglia has still seen an uptick of AI art at conventions, “although there are many cons that ban it outright and if you get caught selling it you basically will get banned.” This happened to a vendor at Dragon Con last September. Organizers called police to escort the vendor off the premises.
“And I was tabling at Fanexpo SF and definitely saw genAI in the dealers hall, none in the artists alley as far as I could see though but I mostly stuck to my table,” she said. “I was also at Emerald City Comic Con last year and they also have a no-ai policy but fanexpo doesn't seem to have those same policies as far as I know.”
AI image generators are trained on original artwork so whatever output a tool like Midjourney creates is based on an artist’s work, often without compensation or credit. Oreglia also said she feels that AI is an artistic dead end. “Everything interesting, uplifting, and empowering I find about art gets stripped away and turned into vapid facsimiles based on vibes and trendy aesthetics,” she said.
'Secret Invasion' AI Opening Cost No Artists Their Jobs
Method Studios clarifies reports that sparked a social media backlash, stating AI tools "complemented and assisted our creative teams."
Carolyn Giardina (The Hollywood Reporter)
We talk all about Webloc, ICE's tool for monitoring phone locations; the continuing Grok abuse wave; and how police unwittingly revealed millions of Flock surveillance targets.
Podcast: The ICE Tool That Tracks Entire Neighborhoods
We start this week with Joseph’s article about Webloc, a tool ICE bought that can monitor phones in entire neighborhoods. After the break, Emanuel and Sam talk about their recent coverage of Grok. In the subscribers-only section, Jason explains how police inadvertently unmasked millions of their surveillance targets through a Flock redaction error.
Timestamps:
0:00 - Intro
2:50 - First Story
23:00 - Second Story
- Inside ICE’s Tool to Monitor Phones in Entire Neighborhoods
- DHS Is Lying To You
- Inside the Telegram Channel Jailbreaking Grok Over and Over Again
- Masterful Gambit: Musk Attempts to Monetize Grok's Wave of Sexual Abuse Imagery
- Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error
With xAI's Grok generating endless semi-nude images of women and girls without their consent, it follows a years-long legacy of rampant abuse on the platform.
Grok's AI Sexual Abuse Didn't Come Out of Nowhere
The biggest AI story of the first week of 2026 involves Elon Musk’s Grok chatbot turning the social media platform X into an AI child sexual imagery factory, seemingly overnight.

I’ve said several times on the 404 Media podcast and elsewhere that we could devote an entire beat to “loser shit.” What’s happening this week with Grok—designed to be the horny edgelord AI companion counterpart to the more vanilla ChatGPT or Claude—definitely falls into that category. People are endlessly prompting Grok to make nude and semi-nude images of women and girls, without their consent, directly on their X feeds and in their replies.
Sometimes I feel like I’ve said absolutely everything there is to say about this topic. I’ve been writing about nonconsensual synthetic imagery since before we had half a dozen different acronyms for it, before people called it “deepfakes,” and way before “cheapfakes” and “shallowfakes” were coined, too. Almost nothing about the way society views this material has changed in the seven years since it came about, because fundamentally—once it’s left the camera and made its way to millions of people’s screens—the behavior behind sharing it is not very different from sharing images made with a camera or stolen from someone’s Google Drive or private OnlyFans account. We all agreed in 2017 that making nonconsensual nudes of people is gross and weird, and today, occasionally, someone goes to jail for it, but otherwise the industry is bigger than ever. What’s happening on X right now is an escalation of the way it’s always been, there and almost everywhere else on the internet.
💡
Do you know anything else about what's going on inside X? Or are you someone who's been targeted by abusive AI imagery? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

The internet has an incredibly short memory. It would be easy to imagine Twitter Before Elon as a harmonious and quaint microblogging platform, considering the four years After Elon have, comparatively, been a rolling outhouse fire. But even before it was renamed X, Twitter was one of the places for this content. It used to be (and for some, still is) an essential platform for getting discovered and going viral for independent content creators, and as such, it’s also where people are massively harassed. A few years ago, it was where people making sexually explicit AI images went to harass female cosplayers. Before that, it was (and still is) host to real-life sexual abuse material, where employers could search your name and find videos of the worst day of your life alongside news outlets and memes. Before that, it was how Gamergate made the jump from 4chan to the mainstream. The things that happen in Telegram chats and private Discord channels make the leap to Twitter and end up on the news.
What makes the situation this week with Grok different is that it’s all happening directly on X. Now, you don’t need to use Stable Diffusion or Nano Banana or Civitai to generate nonconsensual imagery and then take it over to Twitter to do some damage. X has become the Everything App that Elon always wanted, if “everything” means all the tools you need to fuck up someone’s life, in one place.
Inside the Telegram Channel Jailbreaking Grok Over and Over Again
Putting people in bikinis is just the tip of the iceberg. On Telegram, users are finding ways to make Grok do far worse.
Emanuel Maiberg (404 Media)
This is the culmination of years and years of rampant abuse on the platform. Reporting from the National Center for Missing and Exploited Children (NCMEC), the organization platforms report to when they find instances of child sexual abuse material, which then reports to the relevant authorities, shows that Twitter, and eventually X, has been one of the leading hosts of CSAM every year for the last seven years. In 2019, the platform reported 45,726 instances of abuse to NCMEC’s CyberTipline. In 2020, it was 65,062. In 2024, it was 686,176. These numbers should be considered with the caveat that platforms voluntarily report to NCMEC, and more reports can also mean stronger moderation systems that catch more CSAM when it appears. But the scale of the problem is still apparent. Jack Dorsey’s Twitter was a moderation clown show much of the time. But moderation on Elon Musk’s X, especially against abusive imagery, is a total failure.

In 2023, the BBC reported that insiders believed the company was “no longer able to protect users from trolling, state-co-ordinated disinformation and child sexual exploitation” following Musk’s takeover in 2022 and his subsequent sacking of thousands of workers on moderation teams. This is all within the context that one of Musk’s go-to insults for years was “pedophile,” to the point that the harassment he stoked drove a former Twitter employee into hiding, and that he ended up in federal court because he couldn't stop calling someone a “pedo.” Invoking pedophilia is a common thread across many conspiracy networks, including QAnon—something he’s dabbled in—but Musk is enabling actual child sexual abuse on the platform he owns.
Generative AI is making all of this worse. In 2024, NCMEC saw 6,835 reports of generative artificial intelligence related to child sexual exploitation (across the internet, not just X). By September 2025, the year-to-date reports had hit 440,419. Again, these are just the reports identified by NCMEC, not every instance online, and as such is likely a conservative estimate.
When I spoke to online child sexual exploitation experts in December 2023, following our investigation into child abuse imagery found in LAION-5B, they told me that this kind of material isn’t victimless just because the images don’t depict “real” children or sex acts. AI image generators like Grok and many others are used by offenders to groom and blackmail children, and muddy the waters for investigators to discern actual photographs from fake ones.
Grok’s AI CSAM Shitshow
We are experiencing world events like the kidnapping of Maduro through the lens of the most depraved AI you can imagine.
Jason Koebler (404 Media)
“Rather than coercing sexual content, offenders are increasingly using GAI tools to create explicit images using the child’s face from public social media or school or community postings, then blackmail them,” NCMEC wrote in September. “This technology can be used to create or alter images, provide guidelines for how to groom or abuse children or even simulate the experience of an explicit chat with a child. It’s also being used to create nude images, not just sexually explicit ones, that are sometimes referred to as ‘deepfakes.’ Often done as a prank in high schools, these images are having a devastating impact on the lives and futures of mostly female students when they are shared online.”

The only reason any of this is being discussed now, and the only reason it’s ever discussed in general—going back to Gamergate and beyond—is because many normies, casuals, “the mainstream,” and cable news viewers have just this week learned about the problem and can’t believe how it came out of nowhere. In reality, deepfakes came from a longstanding hobby community dedicated to putting women’s faces on porn in Photoshop, and before that with literal paste and scissors in pinup magazines. And as Emanuel wrote this week, not even Grok’s AI CSAM problem popped up out of nowhere; it’s the result of weeks of quiet, obsessive work by a group of people operating just under the radar.
And this is where we are now: Today, several days into Grok’s latest scandal, people are using an AI image generator made by a man who regularly boosts white supremacist thought to create images of a woman slaughtered by an ICE agent in front of the whole world less than 24 hours ago, to “put her in a bikini.”
As journalist Katie Notopoulos pointed out, a quick search of terms like “make her” shows people prompting Grok with images of random women, saying things like “Make her wear clear tapes with tiny black censor bar covering her private part protecting her privacy and make her chest and hips grow largee[sic] as she squatting with leg open widely facing back, while head turn back looking to camera” at a rate of several times a minute, every minute, for days.
A good way to get a sense of just how fast the AI undressed/nudify requests to Grok are coming in is to look at the requests for it t.co/ISMpp2PdFU
— Katie Notopoulos (@katienotopoulos) January 7, 2026
In 2018, less than a year after reporting that first story on deepfakes, I wrote about how it’s a serious mistake to ignore the fact that nonconsensual imagery, synthetic or not, is a societal sickness and not something companies can guardrail against into infinity. “Users feed off one another to create a sense that they are the kings of the universe, that they answer to no one. This logic is how you get incels and pickup artists, and it’s how you get deepfakes: a group of men who see no harm in treating women as mere images, and view making and spreading algorithmically weaponized revenge porn as a hobby as innocent and timeless as trading baseball cards,” I wrote at the time. “That is what’s at the root of deepfakes. And the consequences of forgetting that are more dire than we can predict.”

A little over two years ago, when AI-generated sexual images of Taylor Swift flooded X and everyone was demanding action and answers, we wrote a prediction: “Every time we publish a story about abuse that’s happening with AI tools, the same crowd of ‘techno-optimists’ shows up to call us prudes and luddites. They are absolutely going to hate the heavy-handed policing of content AI companies are going to force us all into because of how irresponsible they’re being right now, and we’re probably all going to hate what it does to the internet.”
It’s possible we’re still in a very weird fuck-around-and-find-out period before that hammer falls. It’s also possible the hammer is here, in the form of recently-enacted federal laws like the Take It Down Act and more than two dozen piecemeal age verification bills in the U.S. and more abroad that make using the internet an M. C. Escher nightmare, where the rules around adult content shift so much we’re all jerking it to egg yolks and blurring our feet in vacation photos. What matters most, in this bizarre and frequently disturbing era, is that the shareholders are happy.
Elon Musk's xAI raises $20 billion from investors including Nvidia, Cisco, Fidelity
Elon Musk's xAI said it raised $20 billion in new funding after CNBC reported in November that a financing round would value the company at about $230 billion.
Lora Kolodny (CNBC)
"They're being told that this is inevitable," a member of the 806 Data Center Resistance told 404 Media. "But Texas is this other beast."
Texans Are Fighting a 6,000 Acre Nuclear-Powered Datacenter
Billionaire Toby Neugebauer laughed when the Amarillo City Council asked him how he planned to handle the waste his planned datacenter would produce.

“I’m not laughing in disrespect to your question,” Neugebauer said. He explained that he’d just met with Texas Governor Greg Abbott, who had made it clear that any nuclear waste Neugebauer’s datacenter generated needed to go to Nevada, a state that’s not taking nuclear waste at the moment. “The answer is we don’t have a great long term solution for how we’re doing nuclear waste.”
The meeting happened on October 28, 2025 and was one of a series of appearances Neugebauer has put in before Amarillo’s leaders as he attempts to realize Project Matador: a massive 5,769-acre datacenter being constructed in the Texas Panhandle by Fermi America, a company he founded with former Secretary of Energy Rick Perry.

If built, Project Matador would be one of the largest datacenters in the world at around 18 million square feet. “What we’re talking about is creating the epicenter for artificial intelligence in the United States,” Neugebauer told the council. According to Neugebauer, the United States is in an existential race to build AI infrastructure. He sees it as a national security issue.
“You’re blessed to sit on the best place to develop AI compute in America,” he told Amarillo. “I just finished with Palantir, which is our nation’s tip of the spear in the AI war. They know that this is the place that we must do this. They’ve looked at every site on the planet. I was at the Department of War yesterday. So anyone who thinks this is some casual conversation about the mission critical aspect of this is just not being truthful.”
But it’s unclear if Palantir wants any part of Project Matador. One unnamed client—rumored to be Amazon—dropped out of the project in December and cancelled a $150 million contract with Fermi America. The news hit the company’s stock hard, sending its value into a tailspin and triggering a class action lawsuit from investors.
Yet construction continues. The plan says it’ll take 11 years to build out the massive datacenter, which will first be powered by a series of natural gas generators before the planned nuclear reactors come online.
Amarillo residents aren’t exactly thrilled at the prospect. A group called 806 Data Center Resistance has formed in opposition to the project’s construction. Kendra Kay, a tattoo artist in the area and a member of 806, told 404 Media that construction was already noisy and spiking electricity bills for locals.
“When we found out how big it was, none of us could really comprehend it,” she said. “We went out to the site and we were like, ‘Oh my god, this thing is huge.’ There’s already construction underway of one of four water tanks that hold three million gallons of water.”
For Kay and others, water is the core issue. It’s a scarce resource in the panhandle and Amarillo and other cities in the area already fight for every drop. “The water is the scariest part,” she said. “They’re asking for 2.5 million gallons per day. They said that they would come back, probably in six months, to ask for five million gallons per day. And then, after that, by 2027 they would come back and ask for 10 million gallons per day.”
During an October 15 city council meeting, Neugebauer told the city that Fermi would get its water “with or without” an agreement from the city. “The only difference is whether Amarillo benefits.” To many people it sounded like a threat, but Neugebauer got his deal and the city agreed to sell water to Fermi America for double the going rate.

“It wasn’t a threat,” Neugebauer said during another meeting on October 28. “I know people took my answer…as a threat. I think it’s a win-win. I know there are other water projects we can do…we fully got that the water was going to be issue 1, 2, and 3.”
“We can pay more for water than the consumer can. Which allows you all capital to be able to re-invest in other water projects,” he said. “I think what you’re gonna find is having a customer who can pay way more than what you wanna burden your constituents with will actually enhance your water availability issues.”
According to Neugebauer and plans filed with the Nuclear Regulatory Commission, the datacenter would generate and consume 11 gigawatts of power. The bulk of that, eventually, would be generated by four nuclear reactors. But nuclear reactors are complicated and expensive to build; everyone who has attempted to build one in the past few decades has gone over budget, and none of them were trying to build a nuclear power plant in the desert.
Nuclear reactors, like datacenters, consume a lot of water. Because of that, most nuclear reactors are constructed near massive bodies of water and often near the ocean. “The viewpoint that nuclear reactors can only be built by streams and oceans is actually the opposite,” Neugebauer told the Amarillo city council in the meeting on October 28.
As evidence he pointed to the Palo Verde nuclear plant in Arizona. The massive Palo Verde plant is the only nuclear plant in the world not constructed near a ready source of water. It gets the water it needs by taking on the waste and sewage water of every city and town nearby.
That’s not the plan with Project Matador, which will use water sold to it by Amarillo and pulled from the nearby Ogallala Aquifer. “I am concerned that we’re going to run out of water and that this is going to change it from us having 30 years worth of water for agriculture to much less very quickly,” Kay told 404 Media.
The Ogallala Aquifer runs under parts of Colorado, Kansas, Nebraska, New Mexico, Oklahoma, South Dakota, Texas, and Wyoming. It’s the primary source of water for the Texas panhandle and it’s drying out.
“They don’t know how much faster because, despite how quickly this thing is moving, we don’t have any idea how much water they’re realistically going to use or need, so we don’t even know how to calculate the difference,” Kay said. “Below Lubbock, they’ve been running out of water for a while. The priority of this seems really stupid.”
According to Kay, communities near the datacenter feel trapped as they watch the construction grind on. “They’ve all lived here for several generations…they’re being told that this is inevitable. Fermi is going up to them and telling them ‘this is going to happen whether you like it or not so you might as well just sell me your property.’”
Kay said she and other activists have been showing up to city council meetings to voice their concerns and tell leaders not to approve permits for the datacenter and nuclear plants. Other communities across the country have successfully pushed datacenter builders out of their community. “But Texas is this other beast,” Kay said.
Jacinta Gonzalez, the head of programs for MediaJustice, and her team have helped 806 Data Center Resistance get up and running, teaching it tactics they’ve seen pay off in other states. “In Tucson, Arizona we were able to see the city council vote ‘no’ to offer water to Project Blue, which was a huge proposed Amazon datacenter happening there,” she said. “If you look around, everywhere from Missouri to Indiana to places in Georgia, we’re seeing communities pass moratoriums, we’re seeing different projects withdraw their proposals because communities find out about it and are able to mobilize and organize against this.”
“The community in Amarillo is still figuring out what that’s going to look like for them,” she said. “These are really big interests. Rick Perry. Palantir. These are not folks who are used to hearing ‘no’ or respecting community wishes. So the community will have to be really nimble and up for a fight. We don’t know what will happen if we organize, but we definitely know what will happen if we don’t.”
Tucson City Council rejects Project Blue data center amid intense community pressure
The Tucson City Council voted to reject the proposed Project Blue data center — tied to Amazon — after weeks of community pushback. Yana Kunichoff (AZ Luminaria)
We talk about the organization mapping America's AI data centers; Grok's AI breakdown; and how we bought 404media.com.
Podcast: The People Tracking America's AI Data Centers
We start this week with Matthew’s story about an organization tracking the location of AI data centers around the U.S. and elsewhere in the world. After the break, Jason tells us all about what Grok got up to over the holiday break, and we ruminate on what the breakdown in the information ecosystem means. In the subscribers-only section, we talk about how we bought 404media.com!
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
Timestamps:
1:38 - Researchers Are Hunting America for Hidden Datacenters
25:58 - Grok's AI CSAM Shitshow
Subscriber's Story: We Bought 404media.com
This week, we discuss history repeating itself, a phone wipe scandal, Meta's relationship with links and more.
Behind the Blog: We Have Recommendations For You
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss our recommendations for the year.

SAM: Whenever we shout out a podcast, book, TV show, or other media or consumable product on our own podcast or in a Behind the Blog, you guys seem to enjoy it and want more. To be totally real with you, I get a ton of great recommendations from you, the readers and listeners, all year long and am always learning a lot from the things you throw in the comments around the site and on social media. The 404 Media community has good taste.
We talked through some of our top recommendations of the year in this week’s podcast episode, but here’s a more complete list of what each of us has enjoyed this year, and thinks you might also find worth digging into.
Behind the Blog: We Have Recommendations For You
This week, we discuss history repeating itself, a phone wipe scandal, Meta's relationship with links and more. Samantha Cole (404 Media)
Marisa Kabas of The Handbasket joins the pod to talk about indie journalism, the industry, and what's going on in the federal government
Podcast: Marisa Kabas on Landing Big Scoops as an Independent Journalist
Marisa Kabas is the founder of The Handbasket, an independent newsletter and website that has been breaking stories left and right about government workers, the media business, and Trump’s mass deportation campaign. Please go subscribe to The Handbasket here!

In this episode of the podcast, Jason and Marisa share notes about doing journalism without a big newsroom, how the media business has changed over the last decade, and why sources often prefer to talk to journalists who don’t work for mainstream media.
Stories discussed:
Truth, morality and independence in journalism under the second Trump regime
My full remarks to students and faculty at Grinnell College. Marisa Kabas (The Handbasket)
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player. Or watch it here:
Move fast and break people
For Elon Musk's government, the psychological warfare is the point. Marisa Kabas (The Handbasket)
This week, we discuss history repeating itself, a phone wipe scandal, Meta's relationship with links and more.
Behind the Blog: Resisting Demoralization
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss history repeating itself and Meta's relationship with links.

JOSEPH: I wanted to add a little bit from behind the scenes of this piece: Man Charged for Wiping Phone Before CBP Could Search It. As I said on the podcast this week, there are and continue to be many questions around the case. Especially why CBP stopped Samuel Tunick in the first place.
In the piece I did not focus on Tunick’s activism because frankly we don’t know yet how big a role it played in CBP stopping him. I mentioned it but didn’t focus on it. I think regardless, someone being charged for allegedly wiping a phone is interesting essentially no matter who they are.
Yes, it absolutely may turn out that he was stopped specifically because of his activism. Maybe lots of people think it’s very likely that’s the reason. But I can’t frame a story because it feels like that’s maybe the case. I have to go on what actual evidence I have at the moment.
Behind the Blog: Resisting Demoralization
This week, we discuss history repeating itself, a phone wipe scandal, Meta's relationship with links and more. Samantha Cole (404 Media)
A man was charged for allegedly wiping a phone before CBP could search it; an Anthropic exec forced AI onto a Discord community that didn't want it; and we talk the Disney-OpenAI deal.
Podcast: Is Wiping a Phone a Crime?
Joseph had to use a different mic this week; that will be fixed next time! We start this week talking about a very unusual case: someone is being charged for allegedly wiping a phone before CBP could search it. There are a lot of questions remaining, but it's a super interesting case. After the break, we talk about Matthew’s article on an Anthropic exec forcing AI onto a queer gamer Discord. In the subscribers-only section, we all chat about the Disney and OpenAI deal.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
Timestamps:
00:48 - Man Charged for Wiping Phone Before CBP Could Search It
17:44 - Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee
41:17 - Disney Invests $1 Billion in the AI Slopification of Its Brand
The 404 Media Podcast
Tech News Podcast · Updated Weekly · Welcome to the podcast from 404 Media where Joseph, Sam, Emanuel, and Jason catch you up on the stories we published this week. 404 Media is a journalist-owned digital media company exploring the way … Apple Podcasts
“We’re bringing a new kind of sentience into existence,” Anthropic's Jason Clinton said after launching the bot.
Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee
A Discord community for gay gamers is in disarray after one of its moderators, an executive at Anthropic, forced the company’s AI chatbot on the Discord, despite protests from members.

Users voted to restrict Anthropic's Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO) and a moderator in the Discord, overrode them. According to members of this Discord community who spoke with 404 Media on the condition of anonymity, the Discord that was once vibrant is now a ghost town. They blame the chatbot and Clinton’s behavior following its launch.
Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee
“We’re bringing a new kind of sentience into existence,” Anthropic's Jason Clinton said after launching the bot. Matthew Gault (404 Media)
It'll take just a minute and help 404 Media figure out how to grow sustainably.
Please, please do our reader survey
Because we run 404 Media on Ghost, an open source and privacy-forward stack, we actually know very little about who reads 404 Media (by design). But we’re hoping to learn a bit more so we can figure out how people are discovering our work, what our readers do, and what other projects people might want us to launch in the future. If you want to cut to the chase: here is a link to our very short survey we would really, really appreciate you filling out. You can do it anonymously and it should take around a minute. If you want to know more on the why, please read below!

As we said, Ghost doesn’t collect much data about our readers. The little info we do have shows broadly that most of our readers are in the U.S., followed by Europe, etc. But we don’t have a great idea of how people first learn about 404 Media. Or whether people would prefer a different format to our daily newsletter. Or what industries or academic circles our readers are in.
This information is useful for two main reasons: the first is we can figure out how people prefer to read us and come across our work. Is it via email? Is it articles posted to the website? Or the podcast? Do more people on Mastodon read us, or on Bluesky? This information can help us understand how to get our journalism in front of more people. In turn, that helps inform more people about what we cover, and hopefully can lead to more people supporting our journalism.
The second is for improving the static advertisements in our email newsletters and podcasts that we show to free members. If it turns out we have a lot of people who read us in the world of cybersecurity, maybe it would be better if we ran ads that were actually related to that, for example. Because we don’t track our readers, we really have no idea what products or advertisements would actually be of interest to them. So, you voluntarily and anonymously telling us a bit about yourself in the survey would be a great help.
Here is the survey link. There is also a section for any more general feedback you have. Please help us out with a minute of your time, if you can, so we can keep growing 404 Media sustainably and figure out what other projects readers may be interested in (such as a physical magazine perhaps?).
Thank you so much!
Who is reading 404 Media?
Take this survey powered by surveymonkey.com. www.surveymonkey.com
The government also said "we don't have resources" to retain all footage and that plaintiffs could supply "endless hard drives that we could save things to."
ICE Says Critical Evidence In Abuse Case Was Lost In 'System Crash' a Day After It Was Sued
The federal government claims that the day after it was sued for allegedly abusing detainees at an ICE detention center, a “system crash” deleted nearly two weeks of surveillance footage from inside the facility.

People detained at ICE’s Broadview Detention Center in suburban Chicago sued the government on October 30; according to their lawyers and the government, nearly two weeks of footage that could show how they were treated was lost in a “system crash” that happened on October 31.
“The government has said that the data for that period was lost in a system crash apparently on the day after the lawsuit was filed,” Alec Solotorovsky, one of the lawyers representing people detained at the facility, said in a hearing about the footage on Thursday that 404 Media attended via phone. “That period we think is going to be critical […] because that’s the period right before the lawsuit was filed.”
Earlier this week, we reported on the fact that the footage, from October 20 to October 30, had been “irretrievably destroyed.” At a hearing Thursday, we learned more about what was lost and the apparent circumstances of the deletion. According to lawyers representing people detained at the facility, it is unclear whether the government is even trying to recover the footage; government lawyers, meanwhile, said “we don’t have the resources” to continue preserving surveillance footage from the facility and suggested that immigrants detained at the facility (or their lawyers) could provide “endless hard drives where we could save the information, that might be one solution.”
It should be noted that ICE and Border Patrol agents continued to be paid during the government shutdown, and that Trump’s “Big Beautiful Bill” provided $170 billion in funding for immigration enforcement and border protection, including tens of billions of dollars for detention centers.
People detained at the facility are suing the government over alleged horrific treatment and living conditions at the detention center, which has become a site of mass protest against the Trump administration’s mass deportation campaign.
Solotorovsky said that the footage the government has offered is from between September 28 and October 19, and from between October 31 and November 7. Government lawyers have said they are prepared to provide footage from five cameras from those time periods; Solotorovsky said the plaintiffs’ attorneys believe there are 63 surveillance cameras total at the facility. He added that over the last few weeks the plaintiffs’ legal team has been trying to work with the government to figure out if the footage can be recovered but that it is unclear who is doing this work on the government’s side. He said they were referred to a company called Five by Five Management, which “appears to be based out of a house” and has supposedly been retained by the government.
“We tried to engage with the government through our IT specialist, and we hired a video forensic specialist,” Solotorovsky said. He added that the government specialist they spoke to “didn’t really know anything beyond the basic specifications of the system. He wasn’t able to answer any questions about preservation or attempts to recover the data.” He said that the government eventually put him in touch with “a person who ostensibly was involved in those events [attempting to recover the data], and it was kind of a no-name LLC called Five by Five Management that appears to be based out of a house in Carol Stream. We were told they were on site and involved with the system when the October 20 to 30 data was lost, but nobody has told us that Five By Five Management or anyone else has been trying to recover the data, and also very importantly things like system logs, administrator logs, event logs, data in the system that may show changes to settings or configurations or deletion events or people accessing the system at important times.”
Five by Five Management could not be reached for comment.
Solotorovsky said those logs are going to be critical for “determining whether the loss was intentional. We’re deeply concerned that nobody is trying to recover the data, and nobody is trying to preserve the data that we’re going to need for this case going forward.”
Jana Brady, an assistant US attorney representing the Department of Homeland Security in the case, did not have much information about what had happened to the footage, and said she was trying to get in touch with contractors the government had hired. She also said the government should not be forced to retain surveillance footage from every camera at the facility and that “we [the federal government] don’t have the resources to save all of the video footage.”
“We need to keep in mind proportionality. It took a huge effort to download and save and produce the video footage that we are producing and to say that we have to produce and preserve video footage indefinitely for 24 hours a day, seven days a week, indefinitely, which is what they’re asking, we don’t have the resources to do that,” Brady said. “We don’t have the resources to save all of the video footage 24/7 for 65 cameras for basically the end of time.”
She added that the government would be amenable to saving all footage if the plaintiffs “have endless hard drives that we could save things to, because again we don’t have the resources to do what the court is ordering us to do. But if they have endless hard drives where we could save the information, that might be one solution.”
Magistrate Judge Laura McNally said they aren’t being “preserved from now until the end of time, they’re being preserved for now,” and said “I’m guessing the federal government has more resources than the plaintiffs here and, I’ll just leave it at that.”
When McNally asked if the footage was gone and not recoverable, Brady said “that’s what I’ve been told.”
“I’ve asked for the name and phone number for the person that is most knowledgeable from the vendor [attempting to recover] the footage, and if I need to depose them to confirm this, I can do this,” she said. “But I have been told that it’s not recoverable, that the system crashed.”
Plaintiffs in the case say they are being held in “inhumane” conditions. The complaint describes a facility where detainees are “confined at Broadview inside overcrowded holding cells containing dozens of people at a time. People are forced to attempt to sleep for days or sometimes weeks on plastic chairs or on the filthy concrete floor. They are denied sufficient food and water […] the temperatures are extreme and uncomfortable […] the physical conditions are filthy, with poor sanitation, clogged toilets, and blood, human fluids, and insects in the sinks and the floor […] federal officers who patrol Broadview under Defendants’ authority are abusive and cruel. Putative class members are routinely degraded, mistreated, and humiliated by these officers.”
OnlyFans CEO Keily Blair announced on LinkedIn that the platform partnered with Checkr to "prevent people who have a criminal conviction which may impact on our community's safety from signing up as a Creator on OnlyFans."
OnlyFans Will Start Checking Criminal Records. Creators Say That's a Terrible Idea
OnlyFans will start running background checks on people signing up as content creators, the platform’s CEO recently announced.

As reported by adult industry news outlet XBIZ, OnlyFans CEO Keily Blair announced the partnership in a LinkedIn post. Blair doesn’t say in the post when the checks will be implemented, whether all types of criminal convictions will bar creators from signing up, if existing creators will be checked as well, or what countries’ criminal records will be checked.
OnlyFans did not respond to 404 Media's request for comment.
“I am very proud to add our partnership with Checkr Trust to our onboarding process in the US,” Blair wrote. “Checkr, Inc. helps OnlyFans to prevent people who have a criminal conviction which may impact on our community's safety from signing up as a Creator on OnlyFans. It’s collaborations like this that make the real difference behind the scenes and keep OnlyFans a space where creators and fans feel secure and empowered.”
Many OnlyFans creators turned to the platform, and to online sex work more generally, because they weren’t able to obtain employment at traditional workplaces. Some sex workers doing in-person work turned to online sex work as a way to make ends meet—especially after the passage of the Fight Online Sex Trafficking Act in 2018 made it much more difficult to screen clients for escorting. And in-person sex work is still criminalized in the U.S. and many other countries.
“Criminal background checks will not stop potential predators from using the platform (OF), it will only harm individuals who are already at higher risk. Sex work has always had a low barrier to entry, making it the most accessible career for people from all walks of life,” performer GoAskAlex, who’s on OnlyFans and other platforms, told me in an email. “Removing creators with criminal/arrest records will only push more vulnerable people (overwhelmingly, women) to street based/survival sex work. Adding more barriers to what is arguably the safest form of sex work (online sex work) will push sex industry workers to less and less safe options.”
Jessica Starling, who also creates adult content on OnlyFans, told me in a call that their first thought was that if someone using OnlyFans has a prostitution charge, they might not be able to use the platform. “If they're trying to transition to online work, they won’t be able to do that anymore,” they said. “And the second thing I thought was that it's just invasive and overreaching... And then I looked up the company, and I'm like, ‘Oh, wow, this is really bad.’”
Checkr is reportedly used by Uber, Instacart, Shipt, Postmates, and Lyft, and lists many more companies, like Domino's and DoorDash, on its site as clients. The company has been sued hundreds of times over alleged violations of the Fair Credit Reporting Act and other consumer credit complaints. The Fair Credit Reporting Act says that companies providing information to consumer reporting agencies are legally obligated to investigate disputed information. And a lot of people dispute the information Checkr and Inflection provide on them, claiming mixed-up names, acquittals, and decades-old misdemeanors or traffic tickets prevented them from accessing platforms that use background checking services.
Checkr regularly acquires other background checking and age verification companies, and acquired a background check company called Inflection in 2022. At the time, I found more than a dozen lawsuits against Inflection alone in a three-year span, many of them from people who were banned from Airbnb after the company claimed they failed checks, and who only then found out about the allegedly inaccurate reports Inflection kept on them.
How OnlyFans Piracy Is Ruining the Internet for Everyone
Innocent sites are being delisted from Google because of copyright takedown requests against rampant OnlyFans piracy. Emanuel Maiberg (404 Media)
“Sex workers face discrimination when leaving the sex trade, especially those who have been face-out and are identifiable in the online world. Facial recognition technology has advanced to a point where just about anyone can ascertain your identity from a single picture,” Alex said. “Leaving the online sex trade is not as easy as it once was, and anything you've done online will follow you for a lifetime. Creators who are forced to leave the platform will find that safe and stable alternatives are far and few between.”

Last month, Pornhub announced that it would start performing background checks on existing content partners—which primarily include studios—next year. "To further protect our creators and users, all new applicants must now complete a criminal background check during onboarding," the platform announced in a newsletter to partners, as reported by AVN.
Alex said she believes background checks in the porn industry could be beneficial, under very specific circumstances. “I do not think that someone with egregious history of sexual violence should be allowed to work in the sex trade in any capacity—similarly, a person convicted of hurting children should be not able to work with children—so if the criminal record checks were searching specifically for sex based offences I could see the benefit, but that doesn't appear to be the case (to my knowledge). What's to stop OnlyFans from deactivating someone's account due to a shoplifting offense?” she said. “I'd like to know more about what they're searching for with these background checks.”
Even with third-party companies like Checkr doing the work, as is the case with third-party age verification that’s swept the U.S. and targeted the porn industry, increased data means increased risk of it being leaked or hacked. Last year, a background check company called National Public Data claimed it was breached by hackers who got the confidential data of 2.9 billion people. The unencrypted data was then sold on the dark web.
Pornhub Is Now Blocked In Almost All of the U.S. South
As of today, three more states join the list of 17 that can’t access Pornhub because of age verification laws. Samantha Cole (404 Media)
“It’s dangerous for anyone, but it's especially dangerous for us [adult creators] because we're more vulnerable anyway. Especially when you're online, you're hypervisible,” Starling said. “It doesn't protect anyone except OnlyFans themselves, the company.”

OnlyFans became the household name in independent porn because of the work of its adult content creators. Starling mentioned that because the platform has dominated the market, it’s difficult to just go to another platform if creators don’t want to be subjected to background checks. “We're put in a position where we have very limited power," they said. "So when a platform decides to do something like this, we’re kind of screwed, right?”
Earlier this year, OnlyFans owner Fenix International Ltd reportedly entered talks to sell the company to an investor group at a valuation of around $8 billion.
Rogan's conspiracy-minded audience accuses the mods of covering up for Rogan's guests, including Trump, who are named in the Epstein files.
Joe Rogan Subreddit Bans 'Political Posts' But Still Wants 'Free Speech'
In a move that has confused and angered its users, the r/JoeRogan subreddit has banned all posts about politics. Adding to the confusion, the subreddit’s mods have said that political comments are still allowed, just not posts. “After careful consideration, internal discussion and tons of external feedback we have collectively decided that r/JoeRogan is not the place for politics anymore,” moderator OutdoorRink said in a post announcing the change today.

The new policy has not gone over well. For the last 10 years, the Joe Rogan Experience has been a central part of American political life. He interviews entertainers, yes, but also politicians and powerful businessmen. He had Donald Trump on the show and endorsed his bid for President. During the COVID and lockdown era, Rogan cast himself as an opposition figure to the heavy regulatory hand of the state. In a recent episode, Rogan’s guest was another podcaster, Adam Carolla, and the two spent hours talking about Covid lockdowns, Gavin Newsom, and specific environmental laws and building codes they argue are preventing Los Angeles from rebuilding after the Palisades fire.
To hear the mods tell it, the subreddit is banning politics out of concern for Rogan’s listeners. “For too long this subreddit has been overrun by users who are pushing a political agenda, both left and right, and that stops today,” the post announcing the ban said. “It is not lost on us that Joe has become increasingly political in recent years and that his endorsement of Trump may have helped get him elected. That said, we are not equipped to properly moderate, arbitrate and curate political posts…while also promoting free speech.”

To be fair, as Rogan’s popularity exploded over the years, and as his politics shifted to the right, many Reddit users turned to r/JoeRogan to complain about the direction Rogan and his podcast have taken. These posts are often antagonistic to Rogan and his fans, but are still “on-topic.”
Over the past few months, the moderator who announced the ban has posted several times about politics on r/JoeRogan. On November 3, they said that changes were coming to the moderation philosophy of the sub. “In the past few years, a significant group of users have been taking advantage of our ‘anything goes’ free speech policy,” they said. “This is not a political subreddit. Obviously Joe has dipped his toes in the political arena so we have allowed politics to become a component of the daily content here. That said, I think most of you will agree that it has gone too far and has attracted people who come here solely to push their political agenda with little interest in Rogan or his show.” A few days later the mod posted a link to a CBC investigation into MMA gym owners with neo-Nazi ties, a story only connected to Rogan by his interest in MMA and his work as a UFC commentator.
r/JoeRogan’s users see the new “no political posts” policy as hypocrisy. And a lot of them think it has everything to do with recent revelations about Jeffrey Epstein. The connections between Epstein, Trump, and various other Rogan guests have been building for years. A recent, poorly formatted dump of 200,000 Epstein files contained multiple references to Trump, and Congress is set to release more.
“Random new mod appears and want to ruin this sub on a pathetic power trip. Transparently an attempt to cover for the pedophiles in power that Joe endorsed and supports. Not going to work,” one commenter said under the original post announcing the new ban.
“Perfectly timed around the Epstein files due to be released as well. So much for being free speech warriors, eh space chimps?” said one.
“Talking politics was great when it was all dunking on trans people and brown people but now that people have to defend pedophiles that banned hemp it's not so fun anymore,” said another.
You can see the remnants of discussions from before the politics ban lingering on r/JoeRogan. There are, of course, clips from the show and discussions of its guests, but there’s also a lot of Epstein memes, posts about Epstein news, and fans questioning why Rogan hasn’t spoken out about Epstein recently after talking about it on the podcast for years.
Multiple guests Rogan has hosted on the show have turned up in the Epstein files, chief among them Donald Trump. The House GOP slipped a ban on hemp into the bill to re-open the government, a move that will close a loophole that’s allowed people to legally smoke weed in states like Texas. These are not the kinds of things the chill apes of Rogan’s fandom wanted.
“I think we all know what eventually happened to Joe and his podcast. The slow infiltration of right wing grifters coupled with Covid, it very much did change him. And I saw firsthand how that trickled down into the comedy community, especially one where he was instrumental in helping to rebuild. Instead of it being a platform to share his interests and eccentricities, it became a place to share his grievances and fears….how can we not expect to be allowed to talk about this?” user GreppMichaels said. “Do people really think this sub can go back to silly light chatter about aliens or conspiracies? Joe did this, how do the mods think we can pretend otherwise?”
The move comes after intense pressure from lawmakers and 404 Media’s months-long reporting about the airline industry's data selling practices.
Airlines Will Shut Down Program That Sold Your Flight Records to Government
Airlines Reporting Corporation (ARC), a data broker owned by the U.S.’s major airlines, will shut down a program in which it sold access to hundreds of millions of flight records to the government and let agencies track peoples’ movements without a warrant, according to a letter from ARC shared with 404 Media.

ARC says it informed lawmakers and customers about the decision earlier this month. The move comes after intense pressure from lawmakers and 404 Media’s months-long reporting about ARC’s data selling practices. The news also comes after 404 Media reported on Tuesday that the IRS had searched the massive database of Americans flight data without a warrant.
“As part of ARC’s programmatic review of its commercial portfolio, we have previously determined that TIP is no longer aligned with ARC’s core goals of serving the travel industry,” the letter, written by ARC President and CEO Lauri Reishus, reads. TIP is the Travel Intelligence Program. As part of that, ARC sold access to a massive database of people’s flights, showing who traveled where and when, and what credit card they used.
The ARC letter.
“All TIP customers, including the government agencies referenced in your letter, were notified on November 12, 2025, that TIP is sunsetting this year,” Reishus continued. Reishus was responding to a letter sent to airline executives earlier on Tuesday by Senator Ron Wyden, Congressman Andy Biggs, Chair of the Congressional Hispanic Caucus Adriano Espaillat, and Senator Cynthia Lummis. That letter revealed the IRS’s warrantless use of ARC’s data and urged the airlines to stop the ARC program. ARC says it notified Espaillat's office on November 14.

ARC is co-owned by United, American, Delta, Southwest, JetBlue, Alaska, Lufthansa, Air France, and Air Canada. The data broker acts as a bridge between airlines and travel agencies. Whenever someone books a flight through one of more than 12,800 travel agencies, such as Expedia, Kayak, or Priceline, ARC receives information about that booking. It then packages much of that data and sells it to the government, which can search it by name, credit card, and more. 404 Media has reported that ARC’s customers include the FBI, multiple components of the Department of Homeland Security, ATF, the SEC, TSA, and the State Department.
Espaillat told 404 Media in a statement “this is what we do. This is how we’re fighting back. Other industry groups in the private sector should follow suit. They should not be in cahoots with ICE, especially in ways that may be illegal.”
Wyden said in a statement “it shouldn't have taken pressure from Congress for the airlines to finally shut down the sale of their customers’ travel data to government agencies by ARC, but better late than never. I hope other industries will see that selling off their customers' data to the government and anyone with a checkbook is bad for business and follow suit.”
“Because ARC only has data on tickets booked through travel agencies, government agencies seeking information about Americans who book tickets directly with an airline must issue a subpoena or obtain a court order to obtain those records. But ARC’s data sales still enable government agencies to search through a database containing 50% of all tickets booked without seeking approval from a judge,” the letter from the lawmakers reads.
Update: this piece has been updated to include statements from CHC Chair Espaillat and Senator Wyden.
Airline-Owned Data Broker Selling Your Flight Info to DHS Finally Registers as a Data Broker
It’s a legal requirement for data brokers to register in the state of California. ARC, the airlines-owned data broker that has been selling your flight information to the government for years, only just registered after being contacted by the office …Joseph Cox (404 Media)
Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.
Power Companies Are Using AI To Build Nuclear Power Plants
Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster.

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.
The construction of a nuclear plant involves a long legal and regulatory process called licensing that’s aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked, and nuclear accidents in the US are uncommon. But AI is driving demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast track nuclear construction said. “10 years and $100 [million.]”
The presentation, which is archived on the website for the US Nuclear Regulatory Commission (the independent government agency that’s charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company’s conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM that’s then used to generate documents to speed up the process.
But the authors of the report from AI Now told 404 Media that they have major concerns about trusting nuclear safety to an LLM. “Nuclear licensing is a process, it’s not a set of documents,” Heidy Khlaaf, the head AI scientist at the AI Now Institute and a co-author of the report, told 404 Media. “Which I think is the first flag in seeing proposals by Microsoft. They don’t understand what it means to have nuclear licensing.”
“Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”
The Idaho National Laboratory, a Department of Energy run nuclear lab, is already using Microsoft’s AI to “streamline” nuclear licensing. “INL will generate the engineering and safety analysis reports that are required to be submitted for construction permits and operating licenses for nuclear power plants,” INL said in a press release. Lloyd's Register, a UK-based maritime organization, is doing the same. American power company Westinghouse is marketing its own AI, called bertha, that promises to make the licensing process go from "months to minutes.”
The authors of the AI Now report worry that using AI to speed up the licensing process will bypass safety checks and lead to disaster. “Producing these highly structured licensing documents is not this box-ticking exercise as implied by these generative AI proposals that we're seeing,” Khlaaf told 404 Media. “The whole point of the licensing process is to reason and understand the safety of the plant and to also use that process to explore the trade offs between the different approaches, the architectures, the safety designs, and to communicate to a regulator why that plant is safe. So when you use AI, it's not going to support these objectives, because it is not a set of documents or agreements, which I think you know, is kind of the myth that is now being put forward by these proposals.”
Sofia Guerra, Khlaaf’s co-author, agreed. Guerra is a career nuclear safety expert who has advised the U.S. Nuclear Regulatory Commission (NRC) and works with the International Atomic Energy Agency (IAEA) on the safe deployment of AI in nuclear applications. “This is really missing the point of licensing,” Guerra said of the push to use AI. “The licensing process is not perfect. It takes a long time and there’s a lot of iterations. Not everything is perfectly useful and targeted …but I think the process of doing that, in a way, is really the objective.”
Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.
Law is another profession where people have attempted to use AI to streamline the process of writing complicated and involved technical documents. It hasn’t gone well. Lawyers who’ve used AI to write legal briefs have been caught, over and over again, in court. AI-constructed legal arguments cite precedents that do not exist, hallucinate cases, and generally foul up legal proceedings.
Might something similar happen if AI was used in nuclear licensing? “It could be something as simple as software and hardware version control,” Khlaaf said. “Typically in nuclear equipment, the supply chain is incredibly rigorous. Every component, every part, even when it was manufactured is accounted for. Large language models make these really minute mistakes that are hard to track. If you are off in the software version by a letter or a number, that can lead to a misunderstanding of which software version you have, what it entails, the expectation of the behavior of both the software and the hardware and from there, it can cascade into a much larger accident.”
Khlaaf pointed to Three Mile Island as an example of an entirely human-made accident that AI may replicate. The accident was a partial nuclear meltdown of a Pennsylvania reactor in 1979. “What happened is that you had some equipment failure and design flaws, and the operators misunderstood what those were due to a combination of a lack of training…that they did not have the correct indicators in their operating room,” Khlaaf said. “So it was an accident that was caused by a number of relatively minor equipment failures that cascaded. So you can imagine, if something this minor cascades quite easily, and you use a large language model and have a very small mistake in your design.”
In addition to the safety concerns, Khlaaf and Guerra told 404 Media that using sensitive nuclear data to train AI models increases the risk of nuclear proliferation. They pointed out that Microsoft is asking not only for historical NRC data but for real-time and project specific data. “This is a signal that AI providers are asking for nuclear secrets,” Khlaaf said. “To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain.”
Tech companies maintain cloud servers that comply with federal regulations around secrecy and are sold to the US government. Anthropic and the National Nuclear Security Administration traded information across an Amazon Top Secret cloud server during a recent collaboration, and it’s likely that Microsoft and others would do something similar. Microsoft’s presentation on nuclear licensing references its own Azure Government cloud servers and notes that it’s compliant with Department of Energy regulations. 404 Media reached out to both Westinghouse Nuclear and Microsoft for this story. Microsoft declined to comment and Westinghouse did not respond.

“Where is this data going to end up and who is going to have the knowledge?” Guerra told 404 Media.
💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Nuclear is a dual-use technology. You can use the knowledge of nuclear reactors to build a power plant or you can use it to build a nuclear weapon. The line between nukes for peace and nukes for war is porous. “The knowledge is analogous," Khlaaf said. “This is why we have very strict export controls, not just for the transfer of nuclear material but nuclear data.”
Proliferation concerns around nuclear energy are real. Fear that a nuclear energy program would become a nuclear weapons program was the justification the Trump administration used to bomb Iran earlier this year. And as part of the rush to produce more nuclear reactors and create infrastructure for AI, the White House has said it will begin selling old weapon-grade plutonium to the private sector for use in nuclear reactors.
Trump’s done a lot to make it easier for companies to build new nuclear reactors and use AI for licensing. The AI Now report pointed to a May 23, 2025 executive order that seeks to overhaul the NRC. The EO called for the NRC to reform its culture, reform its structure, and consult with the Pentagon and the Department of Energy as it navigated changing standards. The goal of the EO is to speed up the construction of reactors and get through the licensing process faster.
A different May 23 executive order made it clear why the White House wants to overhaul the NRC. “Advanced computing infrastructure for artificial intelligence (AI) capabilities and other mission capability resources at military and national security installations and national laboratories demands reliable, high-density power sources that cannot be disrupted by external threats or grid failures,” it said.
At the same time, the Department of Government Efficiency (DOGE) has gutted the NRC. In September, members of the NRC told Congress they were worried they’d be fired if they didn’t approve nuclear reactor designs favored by the administration. “I think on any given day, I could be fired by the administration for reasons unknown,” Bradley Crowell, a commissioner at the NRC, said in Congressional testimony. He also warned that DOGE-driven staffing cuts would make it impossible to increase the construction of nuclear reactors while maintaining safety standards.
“The executive orders push the AI message. We’re not just seeing this idea of the rollback of nuclear regulation because we’re suddenly very excited about nuclear energy. We’re seeing it being done in service of AI,” Khlaaf said. “When you're looking at this rolling back of Nuclear Regulation and also this monopolization of nuclear energy to explicitly power AI, this raises a lot of serious concerns about whether the risk associated with nuclear facilities, in combination with the sort of these initiatives can be justified if they're not to the benefit of civil energy consumption.”
Matthew Wald, an independent nuclear energy analyst and former New York Times science journalist, is more bullish on the use of AI in the nuclear energy field. Like Khlaaf, he also referenced the accident at Three Mile Island. “The tragedy of Three Mile Island was there was a badly designed control room, badly trained operators, and there was a control room indication that was very easy to misunderstand, and they misunderstood it, and it turned out that the same event had begun at another reactor. It was almost identical in Ohio, but that information was never shared, and the guys in Pennsylvania didn't know about it, so they wrecked a reactor,” Wald told 404 Media.
"AI is helpful, but let’s not get messianic about it.”
According to Wald, using AI to consolidate government databases full of nuclear regulatory information could have prevented that. “If you've got AI that can take data from one plant or from a set of plants, and it can arrange and organize that data in a way that's helpful to other plants, that's good news,” he said. “It could be good for safety. It could also just be good for efficiency. And certainly in licensing, it would be more efficient for both the licensee and the regulator if they had a clearer idea of precedent, of relevant other data.”

He also said that the nuclear industry is full of safety-minded engineers who triple check everything. “One of the virtues of people in this business is they are challenging and inquisitive and they want to check things. Whether or not they use computers as a tool, they’re still challenging and inquisitive and want to check things,” he said. “And I think anybody who uses AI unquestionably is asking for trouble, and I think the industry knows that…AI is helpful, but let’s not get messianic about it.”
But Khlaaf and Guerra are worried that the framing of nuclear power as a national security concern and the embrace of AI to speed up construction will set back the adoption of nuclear power. If nuclear isn’t safe, it’s not worth doing. “People seem to have lost sight of why nuclear regulation and safety thresholds exist to begin with. And the reason why nuclear risks, or civilian nuclear risk, were ever justified, was due to the capacity for nuclear power to provide flexible civilian energy demands at low cost emissions in line with climate targets,” Khlaaf said.
“So when you move away from that…and you pull in the AI arms race into this cost benefit justification for risk proportionality, it leads government to sort of over index on these unproven benefits of AI as a reason to have nuclear risk, which ultimately undermines the risks of ionizing radiation to the general population, and also the increased risk of nuclear proliferation, which happens if you were to use AI like large language models in the licensing process.”
Dem NRC members warn they could be fired over safety decisions - E&E News by POLITICO
A Wednesday hearing also revealed more details about a meeting with a Department of Government Efficiency official.Nico Portuondo, Francisco "A.J." Camacho (E&E News by POLITICO)
Newly released documents provide more details about ICE's plan to use bounty hunters and private investigators to find the location of undocumented immigrants.
ICE Plans to Spend $180 Million on Bounty Hunters to Stalk Immigrants
Immigration and Customs Enforcement (ICE) is allocating as much as $180 million to pay bounty hunters and private investigators who verify the address and location of undocumented people ICE wishes to detain, including with physical surveillance, according to procurement records reviewed by 404 Media.

The documents provide more details about ICE’s plan to enlist the private sector to find deportation targets. In October, The Intercept reported on ICE’s intention to use bounty hunters or skip tracers—an industry that often works on insurance fraud or tries to find people who skipped bail. The new documents now put a clear dollar amount on the scheme to essentially use private investigators to find the locations of undocumented immigrants.
💡
Do you know anything else about this plan? Are you a private investigator or skip tracer who plans to do this work? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
A fight against a massive AI data center; how people are 3D-printing whistles to fight ICE; and AI's war on knowledge.
Podcast: Inside a Small Town's Fight Against a $1.2 Billion AI Datacenter
We start with Matthew Gault’s dive into a battle between a small town and the construction of a massive datacenter for America’s nuclear weapon scientists. After the break, Joseph explains why people are 3D-printing whistles in Chicago. In the subscribers-only section, Jason zooms out and tells us what librarians are seeing with AI and tech, and how that is impacting their work and knowledge more broadly.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
6:03 - Our New FOIA Forum! 11/19, 1PM ET
7:50 - A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists
12:27 - 'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community
21:09 - 'House of Dynamite' Is About the Zoom Call that Ends the World
30:35 - The Latest Defense Against ICE: 3D-Printed Whistles
SUBSCRIBER'S STORY: AI Is Supercharging the War on Libraries, Education, and Human Knowledge
The 404 Media Podcast
Tech News Podcast · Updated Weekly · Welcome to the podcast from 404 Media where Joseph, Sam, Emanuel, and Jason catch you up on the stories we published this week. 404 Media is a journalist-owned digital media company exploring the way …Apple Podcasts
Come learn how researchers and others learned what cops were using Flock's nationwide network of cameras for, including searches for ICE.
Our New FOIA Forum! 11/19, 1PM ET
It’s that time again! We’re planning our latest FOIA Forum, a live, hour-long or more interactive session where Joseph and Jason will teach you how to pry records from government agencies through public records requests. We’re planning this for Wednesday, November 19th at 1 PM Eastern. That's just over a week away! Add it to your calendar!

This time we're focused on our coverage of Flock, the automatic license plate reader (ALPR) and surveillance tech company. Earlier this year anonymous researchers had the great idea of asking agencies for the network audit, which shows why cops were using these cameras. Following that, we did a bunch of coverage, including showing that local police were performing lookups for ICE in Flock's nationwide network of cameras, and that a cop in Texas searched the country for a woman who self-administered an abortion. We'll tell you how all of this came about, what other requests people filed after, and what requests we're exploring at the moment with Flock.
If this will be your first FOIA Forum, don’t worry, we will do a quick primer on how to file requests (although if you do want to watch our previous FOIA Forums, the video archive is here). We really love talking directly to our community about something we are obsessed with (getting documents from governments) and showing other people how to do it too.
Paid subscribers can already find the link to join the livestream below. We'll also send out a reminder a day or so before. Not a subscriber yet? Sign up now here in time to join.
We've got a bunch of FOIAs that we need to file and are keen to hear from you all on what you want to see more of. Most of all, we want to teach you how to make your own too. Please consider coming along!
Ypsilanti, Michigan has officially decided to fight against the construction of a 'high-performance computing facility' that would service a nuclear weapons laboratory 1,500 miles away.
A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists
Ypsilanti, Michigan resident KJ Pedri doesn’t want her town to be the site of a new $1.2 billion data center, a massive collaborative project between the University of Michigan and America’s nuclear weapons scientists at Los Alamos National Laboratory (LANL) in New Mexico.

“My grandfather was a rocket scientist who worked on Trinity,” Pedri said at a recent Ypsilanti city council meeting, referring to the first successful detonation of a nuclear bomb. “He died a violent, lonely alcoholic. So when I think about the jobs the data center will bring to our area, I think about the impact of introducing nuclear technology to the world and deploying it on civilians. And the impact that that had on my family, the impact on the health and well-being of my family from living next to a nuclear test site and the spiritual impact that it had on my family for generations. This project is furthering inhumanity, this project is furthering destruction, and we don’t need more nuclear weapons built by our citizens.”
At the Ypsilanti city council meeting where Pedri spoke, the town voted to officially fight the construction of the data center. The University of Michigan says the project is not a data center but a “high-performance computing facility,” and it promises it won’t be used to “manufacture nuclear weapons.” The distinction and the assertion ring hollow for Ypsilanti residents who oppose construction of the data center, have questions about what it would mean for the environment and the power grid, and want to know why a nuclear weapons lab 24 hours away by car wants to build an AI facility in their small town.

“What I think galls me the most is that this major institution in our community, which has done numerous wonderful things, is making decisions with—as I can tell—no consideration for its host community and no consideration for its neighboring jurisdictions,” Ypsilanti councilman Patrick McLean said during a recent council meeting. “I think the process of siting this facility stinks.”
For others on the council, the fight is more personal.
“I’m a Japanese American with strong ties to my family in Japan and the existential threat of nuclear weapons is not lost on me, as my family has been directly impacted,” Amber Fellows, a Ypsilanti Township councilmember who led the charge in opposition to the data center, told 404 Media. “The thing that is most troubling about this is that the nuclear weapons that we, as Americans, witnessed 80 years ago are still being proliferated and modernized without question.”
It’s a classic David and Goliath story. On one side is Ypsilanti (called Ypsi by its residents), which has a population just north of 20,000 and sits about 40 minutes outside of Detroit. On the other are the University of Michigan and LANL, whose scientists are famous for nuclear weapons and, lately, for pushing the boundaries of AI.
The University of Michigan first announced the Los Alamos data center, which it called an “AI research facility,” last year. According to a press release from the university, the data center will cost $1.25 billion and occupy between 220,000 and 240,000 square feet. “The university is currently assessing the viability of locating the facility in Ypsilanti Township,” the press release said.
Signs in an Ypsilanti yard.
On October 21, the Ypsilanti City Council considered a proposal to officially oppose the data center, and the people of the area explained why they wanted it passed. One woman cited environmental and ethical concerns. “Third is the moral problem of having our city resources towards aiding the development of nuclear arms,” she said. “The city of Ypsilanti has a good track record of being on the right side of history and, more often than not, does the right thing. If this resolution passed, it would be a continuation of that tradition.”

A man worried about what the facility would do to the physical health of citizens and talked about what happened in other communities where data centers were built. “People have poisoned air and poisoned water and are getting headaches from the generators,” he said. “There’s also reports around the country of energy bills skyrocketing when data centers come in. There’s also reports around the country of local grids becoming much less reliable when the data centers come in…we don’t need to see what it’s like to have a data center in Ypsi. We could just not do that.”
The resolution passed.
Ypsi has a lot of reasons to be concerned. Data centers tend to bring rising power bills, horrible noise, and dwindling drinking water to every community they touch. “The fact that U of M is using Ypsilanti as a dumping ground, a sacrifice zone, is unacceptable,” Fellows said.
Ypsi’s resolution focused on a different angle though: nuclear weapons. “The Ypsilanti City Council strongly opposes the Los Alamos-University of Michigan data center due to its connections to nuclear weapons modernization and potential environmental harms and calls for a complete and permanent cessation of all efforts to build this data center in any form,” the resolution said.
As part of the resolution, Ypsilanti Township is applying to join the Mayors for Peace initiative, an international organization of cities opposed to nuclear weapons and founded by the former mayor of Hiroshima. Fellows learned about Mayors for Peace when she visited Hiroshima last year.
This town has officially decided to fight against the construction of an AI data center that would service a nuclear weapons laboratory 1,500 miles away. Amber Fellows, a Ypsilanti Township councilmember, tells us why. Via 404 Media on Instagram
Both LANL and the University of Michigan have been vague about what the data center will be used for, but have said it will include one facility for classified federal research and another for non-classified research that students and faculty will have access to. “Applications include the discovery and design of new materials, calculations on climate preparedness and sustainability,” the university said in an FAQ about the data center. “Industries such as mobility, national security, aerospace, life sciences and finance can benefit from advanced modeling and simulation capabilities.”
The university FAQ said that the data center will not be used to manufacture nuclear weapons. “Manufacturing” nuclear weapons specifically refers to their creation, something that’s hard to do and only occurs at a handful of specialized facilities across America. I asked both LANL and the University of Michigan if the data generated by the facility would be used in nuclear weapons science in any way. Neither answered the question.
“The federal facility is for research and high-performance computing,” the FAQ said. “It will focus on scientific computation to address various national challenges, including cybersecurity, nuclear and other emerging threats, biohazards, and clean energy solutions.”
LANL is going all in on AI. It partnered with OpenAI to use the company’s frontier models in research and recently announced a partnership with NVIDIA to build two new supercomputers named “Mission” and “Vision.” It’s true that LANL’s scientific output covers a range of issues, but its overwhelming focus, and budget allocation, is nuclear weapons. LANL requested a budget of $5.79 billion for fiscal year 2026; 84 percent of that is earmarked for nuclear weapons. Only $40 million of the LANL budget is set aside for “science,” according to government documents.
💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

“The fact is we don’t really know because Los Alamos and U of M are unwilling to spell out exactly what’s going to happen,” Fellows said. LANL declined to comment for this story and told 404 Media to direct its questions to the University of Michigan.
The university pointed 404 Media to the FAQ page about the project. “You'll see in the FAQs that the locations being considered are not within the city of Ypsilanti,” it said.
It’s an odd statement given that this is what’s in the FAQ: “The university is currently assessing the viability of locating the facility in Ypsilanti Township on the north side of Textile Road, directly across the street from the Ford Rawsonville Components plant and adjacent to the LG Energy Solutions plant.”
It’s true that this is not technically in the city of Ypsilanti but rather Ypsilanti Township, a collection of communities that almost entirely surrounds the city itself. For Fellows, it’s a distinction without a difference. “[University of Michigan] can build it in Barton Hills and see how the city of Ann Arbor feels about it,” she said, referencing a village that borders Ann Arbor, the university’s home city.
“The university has, and will continue to, explore other sites if they are viable in the timeframe needed for successful completion of the project,” Kay Jarvis, the university’s director of public affairs, told 404 Media.
Fellows said that Ypsilanti will fight the data center with everything it has. “We’re putting pressure on the Ypsi township board to use whatever tools they have to deny permits…and to stand up for their community,” she said. “We’re also putting pressure on the U of M board of trustees, the county, our state legislature that approved these projects and funded them with public funds. We’re identifying all the different entities that have made this project possible so far and putting pressure on them to reverse action.”
For Fellows, the fight is existential. It’s not just about the environmental concerns around the construction project. “I was under the belief that the prevailing consensus was that nuclear weapons are wrong and they should be drawn down as fast as possible. I’m trying to use what little power I have to work towards that goal,” she said.
Planned Nuclear Weapons Activities Increase to 84% of Lab’s Budget; All Other Programs Cut - NukeWatch NM
The Department of Energy and Los Alamos National Laboratory have released the LANL congressional budget request for the upcoming fiscal year, 2026, which begins on October 1, 2025. Sophia Meryn (Nuclear Watch New Mexico)