The 'Freedom Trucks' will haul AI slop George Washington on a tour across 48 American states.



I Visited the ‘Freedom Truck’ to Meet PragerU’s AI Slop Founders


In the parking lot of Seven Oaks Elementary School in South Carolina, on one of the first hot days of the year, I watched an AI-generated George Washington talk about the American Revolution. “Our rights are a gift from God, not a favor from kings or courts,” slop Washington told me. It spoke from a screen that stretched floor to ceiling, trimmed by a fancy frame.

The intended effect is to make it appear as if the founding father is a painting come to life, a piece of history talking to the viewer. The actual effect was a reminder that the AI slop aesthetic is synonymous with the Trump presidency and has become part of the visual language of fascism. Which is fitting, because AI George Washington is the result of a collaboration between the Trump White House and the online content mill PragerU.
The AI slop founding father is part of a touring exhibit of Freedom Trucks commissioned by PragerU in honor of the 250th anniversary of American independence. The trucks are a mobile museum exhibit meant to teach kids about the founding of the country. It’s pitched at kids—most of the “content,” as staff on site called it, is meant for a younger audience—but the trucks have viewing hours open to the general public. Nick Bravo, a PragerU employee on hand to answer questions, told me that there are six Freedom Trucks and that the plan is to have them travel to all 48 contiguous states over the next year.

I was drawn to the Freedom Truck because I’d heard it contained AI-generated recreations of Revolutionary figures like Washington, Betsy Ross, and the Marquis de Lafayette, similar to the ones on display at the White House. To my disappointment, the AI-generated videos in the Freedom Truck are remarkably boring.

As I watched the AI George Washington deliver a by-the-books version of the American story, I thought about Jerry Jones. The famously vain owner of the Dallas Cowboys commissioned an AI version of himself for AT&T stadium in 2023. Fans who make the pilgrimage to the stadium can watch a presentation and ask the AI Jones questions. The AI wanders a big screen while it talks to the audience.

Other than the lazy AI-generated videos, the Freedom Truck doesn’t have much to offer. I signed a digital copy of the Declaration of Independence on a touchscreen and took a quiz that asked leading questions designed to find out if I was a “loyalist or patriot.”

“The British Army sends soldiers to Boston. How do you react?” Answer 1: “View them as occupiers violating colonial liberty.” Answer 2: “Welcome them as defenders of law and order.” With ICE and the National Guard patrolling American cities, I wondered how supporters of the current administration would answer that one.

PragerU is known for its “America can do no wrong” view of US history. Its short form video content offers a cartoon version of the past stripped of nuance and context where the country lives up to the myth that it is a “Shining City On a Hill.” According to PragerU, white people abolished slavery and dropping the atomic bomb on Japan was a necessary thing that “shortened the war and saved countless lives.” Now PragerU is taking its view of history on tour across the country. School children in every state will wander these trucks and encounter an AI slop version of the past.

Bravo told me that all the truck’s content was generated as part of a partnership between PragerU and Michigan’s Hillsdale College—a Christian university that helped craft Project 2025. There were, of course, hints of Project 2025 around the edges of the child-friendly AI-generated videos. Slavery isn’t ignored, but the story of early African American poet Phillis Wheatley focuses on her celebration of America rather than on how she arrived there. On the museum’s “Wall of Heroes,” Whittaker Chambers is nestled between architect Frank Lloyd Wright and painter Norman Rockwell.

A small note near the floor at the exit of the truck notes the collaboration of PragerU and Hillsdale College, and claims that “neither institution received any federal funds and both generously contributed their own resources to help create this educational exhibit.” It also said “this truck was made possible through a grant from the Institute of Museum and Library Services,” which is, of course, a federal agency.

Every AI-generated video ended with a title card showing the White House and PragerU’s logo. “The White House is grateful for the partnership with PragerU and the US Department of Education for the production of this museum,” the card said. “This partnership does not constitute or imply a US Government or US Department of Education endorsement of PragerU.”

Trump attempted to dismantle the Institute of Museum and Library Services (IMLS) via executive order in 2025, but the courts blocked it. Libraries and museums have since reported that the IMLS grant process has taken on a “chilling” pro-Trump political turn. The administration has also attempted to dismantle the Department of Education.

Trump’s voice was the last thing I heard as I wandered into the bright afternoon sun. “I want to thank PragerU for helping us share this incredible story,” he said in a recorded video that played on a loop in the Freedom Truck. “I hope you will join me in helping to make America’s 250th anniversary a year we will never forget.”



In a new series by CBC Podcasts, hosted by 404 Media's Sam Cole, join journalists, investigators, and targets of non-consensual intimate images on the hunt for the world’s most prolific deepfake mastermind.



New Podcast Alert: The Globe-Spanning, Multi-Newsroom Hunt for Mr. Deepfakes


Mr. Deepfakes was the biggest website in the world for sharing AI-generated abuse imagery, swapping tips and tricks for more realistic results, and posting endless, fake, nonconsensual videos of everyone from celebrities to everyday people. In a new podcast by the CBC, I got to tell the tale of how deepfakes started, what targets go through, and where we go next.

It's called Understood: Deepfake Porn Empire. It's about the decades-long rise of non-consensual deepfake porn, the targets who are fighting back, and what it takes to stop its proliferation. Check it out here and listen wherever you get your podcasts.

The first three episodes are already up, so you can binge them all before the finale next Tuesday.



In the first episode, "The Dawn of Fake Porn," you’ll get a fascinating history of the decades of cultural and technological standards that set the stage for AI-generated nonconsensual imagery as we know it today. I learned a lot in this episode myself, including about a guy who went by “Lux Lucre” who ran two Usenet groups dedicated to fake nudes of celebrities in the 90s. This stuff goes so much farther back than you might realize.

In episode two, “So You’ve Been Deepfaked,” I got the chance to talk to Taylor, who discovered she’d been targeted by AI images while at university, working in a male-dominated field. Instead of hoping it’d go away, she set out to find her harasser, and found his other targets in the process. It all led back to one place: the biggest deepfake site in the world, Mr. Deepfakes.

Episode three just came out today: “The Notorious D.P.F.K.S.” is a romp through the investigative highs and lows that led a team of journalists scattered around the world to the door of Mr. Deepfakes himself. I was so thrilled to talk to investigative journalist Ida Herskind, OSINT specialist Zakaria Hameed, and Bellingcat’s Ross Higgins in this episode. Come for the How I Met Your Mother references, stay for the gripping chase.

Episode four, the series finale, launches next week. It’s a true crime story with CBC reporters on stakeouts and infiltrating hospitals, and legal and social experts breaking down what it all means now that we’re in a post-Mr. Deepfakes world—but far from a post-AI abuse landscape. Follow the Understood feed wherever you listen to get it when it comes out on Tuesday.

If you liked this season, head back to catch up on another series I hosted with the CBC: Pornhub Empire, on the rise and fall of the porn monolith.

Tune in and let me know what you think!


Some AWS services are down in the Middle East. Recovery is unclear as it requires 'careful assessment to ensure the safety of our operators,' according to Amazon.



Amazon Data Centers on Fire After Iranian Missile Strikes on Dubai


Amazon’s cloud services are down in parts of the Middle East after “objects” hit data centers in the United Arab Emirates (UAE), causing “sparks and fire.” Around 60 services tied to AWS are down in the region, affecting web traffic in the UAE and Bahrain. The outage follows Iranian attacks on the UAE in retaliation for US and Israeli strikes on Iran.

Customers in Bahrain and the UAE began to report outages tied to the mec1-az2 and mec1-az3 clusters in AWS’ ME-CENTRAL-1 Region on March 1 after Iranian ballistic missiles and drones struck targets in and around Dubai. Amazon did not confirm that AWS was down in the Middle East due to an Iranian attack and instead referred 404 Media to its online dashboard.
“At around 4:30 AM PST, one of our Availability Zones (mec1-az2) was impacted by objects that struck the data center, creating sparks and fire,” AWS said on its health dashboard. “The fire department shut off power to the facility and generators as they worked to put out the fire. We are still awaiting permission to turn the power back on, and once we have, we will ensure we restore power and connectivity safely. It will take several hours to restore connectivity to the impacted AZ.”

As of this morning at 9:22 AM ET, the damage had spread. “We are expecting recovery to take at least a day, as it requires repair of facilities, cooling and power systems, coordination with local authorities, and careful assessment to ensure the safety of our operators,” AWS said. “We recommend customers enact their disaster recovery plans and recover from remote backups into alternate AWS Regions, ideally in Europe.”

Amazon later shared more information about the attack and confirmed it was the result of drones. “Due to the ongoing conflict in the Middle East, both affected regions have experienced physical impacts to infrastructure as a result of drone strikes. In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impacts to our infrastructure,” it said. “These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage. We are working closely with local authorities and prioritizing the safety of our personnel throughout our recovery efforts.”

On Saturday, the United States and Israel launched Operation Epic Fury and struck targets inside of Iran, killing several political and military leaders including Ayatollah Ali Khamenei, the country’s Supreme Leader. In retaliation, Iran launched drone and missile attacks against Israel and multiple US-allied targets in the Middle East.

According to the Emirati defense forces, Iran attacked the country with two cruise missiles, 165 ballistic missiles, and more than 540 drones. The UAE and its largest city, Dubai, are often seen as a safe and stable destination in the Middle East. The country hosts wealthy people from across the region and influencers from across the world. Footage shared on social media showed the neon towers of the UAE backlit by missiles and munitions.

It’s unclear how long it will take for Amazon to restore services to the region or how far the damage will spread. Amazon’s dashboard is promising to bring things back up in “at least a day” but the war is far from over. Iran continues to strike targets in the Middle East and it’s unclear what America’s plan of attack is or how long this war might grind on.

Update 2/2/26: This story has been updated with more specifics about the attack from Amazon.



“We just want to take down posts about people who are being defamed," the company's founder said. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’"



Company Helps Men Scrub Negative Posts About Them from Tea App


Tea App Green Flags, a service that claims it can “protect your digital reputation,” will remove negative posts about men from private online groups where women share “red flags” about men they’ve dated in order to help other women.

The service is another escalation in the age of online dating: women attempting to protect each other from dangerous men in the dating pool, and men fighting against those efforts. It also shows how some of these allegedly private women’s groups, especially the Tea app, are regularly infiltrated and manipulated by men.

When I reached out to an email listed on Tea App Green Flags’s site, I got a call from a man behind the operation who identified only as Jay. He said he started the service about two years ago, and that he initially focused on the Are We Dating the Same Guy Facebook groups. For the past year, he’s been offering services specifically for the Tea app, a “dating safety” app for women that suffered a devastating breach last year, and which, my investigation revealed, was founded by a man who wanted to monetize the Are We Dating the Same Guy phenomenon. The site also claims it can remove posts from TeaOnHer, a Tea app copycat for men, as well as posts on Instagram.

Jay declined to say how much revenue the site generates, but claims he gets about 50 to 60 calls a day and currently has six employees. On its website, Tea App Green Flags claims it has removed more than 2,500 posts on the Tea app for 759 clients. Jay said that most of his clients are men, but that some are women who are trying to take down posts about their husbands or boyfriends.

Potential clients can pay $1.99 to report one account and up to $79.99 to report 25 accounts.

“We just want to take down posts about people who are being defamed,” Jay told me. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’ That doesn't fit the mission statement of what the Tea app was for, which is to warn women against people who are harmful, who are abusive, who are cheaters. We've noticed that a lot of the individuals that come to us, almost all of them, come to us for little stupid things.”

Clients interested in Tea App Green Flags’s services go to the site and fill out a form with their information and information about the posts they want removed. The company reviews the case and then starts the “takedown process,” which can take between 21 and 30 days. Tea App Green Flags says it will then continue to monitor posts about the client and remove them for three months.

💡
Were you impacted by the Tea hack? I would love to hear from you. Using a non-work device, you can message me securely on Signal at ‪@emanuel.404‬. Otherwise, send me an email at emanuel@404media.co.

When I asked Jay how this “takedown process” works he said “I can’t give that info. That’s the business.”

Jay told me that he would not work with clients who have been accused of sexual assault by multiple people on the Tea app, or by one person in one of the Are We Dating the Same Guy Facebook groups who used their real name and face in a profile picture.

“Sometimes we find along the process that there are pedophiles or people who actually did what they did, and they're very bad,” Jay said. “So we say, we're not doing this. We can't take a rap for that. We're ethical. We just want to take down people who are being defamed.”

Jay told me he understands why Facebook groups like Are We Dating the Same Guy are necessary and thinks they are a good idea, but the anonymous nature of the Tea app "causes a cesspool of defamation.”

When I asked Jay what he thinks about the fact that some women don’t feel safe sharing information about some dangerous men unless they can do so anonymously, he said it would be better if women showed their face, or if the Tea app at least gave women that option.

“I have a Tea app account. I'm a dude. All my reps have Tea app accounts. They're men,” Jay said. “How much can you trust these people and what they're doing?”

One reason the Tea app hack was so dangerous is because the app used to ask women to upload a picture of their face in order to verify that they are women. Those images were posted all over the internet because of the hack, putting those women at risk and leading to more harassment.

Tea App Green Flags is far from the first attempt by men to fight back against these types of groups. In 2024, for example, we wrote about a man who tried to sue women who posted about him in Are We Dating the Same Guy Facebook groups. His first case was dismissed, and he refiled days later as a class action lawsuit; later that year, he was sent to prison for tax fraud.

Tea did not immediately respond to a request for comment.



The group is talking about Epstein and filming propaganda videos in Roblox as a form of 'digital Jihad,' researchers say.



The Islamic State Is Using AI to Resurrect Dead Leaders and Platforms Are Failing to Moderate It


The Islamic State’s online warriors are still posting. It’s been almost a decade since the group lost the Battle of Raqqa and saw its IRL territorial ambitions thwarted. Unable to hold territory in the real world, the group renewed its focus on posting and has started using AI to resurrect dead leaders. And, because social media platforms have gutted their content moderation operations, the terror group’s strategy is working.

The Islamic State’s online success is detailed in a new report from the Institute for Strategic Dialogue (ISD), an independent research institution that studies extremist movements. For the study, researchers tracked IS accounts on Facebook, TikTok, Instagram, WhatsApp, Telegram, Element, and SimpleX. They found videos posted in Discord channels dedicated to video games and tracked how the group has modified old content to fit on new platforms.
Like many others posting online in 2026, the Islamic State has found success by talking about the Epstein Files and using AI to create new videos of dead leaders, and it has begun taking its message to video games like Minecraft and Roblox.

“They are very adept at exploiting platforms [and] spreading messages,” Moustafa Ayad, a researcher at ISD and author of this study, told 404 Media. He noted that the group has been active online for 10 years and that part of their success is a willingness to experiment.

Ayad said that Facebook remains a central hub for IS, despite its push into new spaces. His research uncovered 350 IS accounts on Facebook that generated tens of thousands of views. One video of an IS fighter talking to the camera had more than 77,000 views and 101 shares. The Islamic State branding in the video is blurred to defeat the site’s auto-moderation.

According to Ayad, Islamic State’s engagement numbers are up across the board. “Trust and safety teams have been rolled back over the past few years…a lot of this is outsourced to third party companies who aren’t necessarily experts in understanding if a piece of content came from the Islamic State,” he said.

Social media companies like Meta used the election of Donald Trump as an excuse to cut back on moderating their platforms. Meta said this would mean “more speech and fewer mistakes.” No policies around terrorism have changed, but broadly speaking the largest social media platforms are doing a worse job at moderating their sites. In practice it’s turned Facebook into a place where a group like the Islamic State can spread its message without falling afoul of content moderation teams. Even three years ago, IS influencers wouldn’t have lasted long on the site.

This rollback of moderation has coincided with a spike in views for IS accounts, the report argues. “Individual IS ‘influencer’ accounts are experiencing higher engagement rates on terrorist content than previously recorded by ISD analysts,” the report said. “It is unclear if this uptick is due to moderation gaps, platform mechanics or specific tactical adjustments by IS supporters and support outlets and groups.”

“We’re not talking about content where there’s a gray area,” Ayad said. “It’s very clearly branded Islamic State…supports violence, supports the killing of minorities, the celebration of bombings, the pillaging that is happening in Sub Saharan Africa.”

Something new is the adoption of AI systems to resurrect dead leaders. Ayad described a video where the deceased IS leader Abu Bakr al-Baghdadi delivered speeches again.

“It’s a sanctioned version of using AI for a ‘beloved leader’ or taking him out of context and placing him in a meadow, surrounded by beautiful flowers, paying homage,” he said. “Some of these circles are strange.”

Another popular topic in current IS propaganda is the Epstein Files. According to Ayad, an AI-generated photo of Donald Trump and Bill Clinton canoodling in bed makes frequent appearances on IS accounts across platforms. The picture is, supposedly, pulled from the Epstein files but it’s a popular fake. Ayad said Epstein has been a perfect springboard for IS to talk about “western degeneracy.”

Ayad has also seen Islamic State videos created using Minecraft and Roblox. “They’re creating these virtual worlds that mimic the Islamic State’s caliphate, literally calling it something like Wilayat Roblox [the Province of Roblox] … and they’ll completely mimic the video styles of well-known Islamic State Videos using Roblox characters. This includes faux executions. It includes Arabic and English voiceover in the same cadence as an Islamic State narrator.”

One of the most famous pieces of Islamic State propaganda is a film called Flames of War: The Fighting Has Just Begun. Ayad has seen multiple one-for-one recreations of the film using Roblox characters. “They’re often tied to Discords where a number of users are creating this content. They always claim it’s fake or a LARP,” he said. “To see them in this video game skin is odd, to say the least.”

What drives an Islamic State poster? “It’s done very much for the love of the game,” Ayad said. “It’s done for the fact that, as a user, ‘I might not be able to participate in physical Jihad but I can participate in electronic Jihad.’”

Keeping Islamic State off of major social media platforms is a constant battle, but one frustrating finding of the study is that the tactics for avoiding moderation haven’t changed much.

“Techniques included the use of alternative news outlets to rebrand IS news, as well as purchasing or hijacking channels with high subscriber bases. These were then repurposed to share IS content. IS supporters, groups and outlets also use coded language: they sometimes referred to the group as ‘black hole’ or the ‘righteous few’ to confound moderation efforts.”

To fight back against IS online, Ayad said that platforms needed to be better at coordination. Often a group is kicked off of Facebook so it moves to TikTok or another platform where it flourishes. He also said that all the companies need to be more transparent about who they’re kicking off their platform and why.

“Europol does these big takedown days and they’re effective to a certain degree but the fact of the matter is that the Islamic State is spread across an expanse of different platforms and messaging applications,” he said. “They’re able to shift operations to another place, wait it out and regenerate on that platform…it’s not like you’re dealing with an average user, you’re dealing with a user that’s determined to spread their ideology and exploit your platform to their own ends.”

And then there’s the old problem of language. “There needs to just be better moderation of under-moderated languages,” Ayad said. Facebook and other platforms have long been terrible at moderating non-English languages. A lot of rancid content online gets a pass because it’s in Arabic or Bengali.




In the latest in a string of privacy abuses from the chatbot, Grok provided porn performer Siri Dahl's full legal name and birthdate to the public, information she'd protected until now.



Grok Exposed a Porn Performer’s Legal Name and Birthdate—Without Even Being Asked


Porn performer Siri Dahl’s personal information, including her full legal name and birthday, was publicly exposed earlier this month by xAI’s Grok chatbot. Almost instantly, harassers started opening Facebook accounts in her name and posting stolen porn clips with her real name on sites for leaking OnlyFans content.

Dahl has used the name — a nod to her Scandinavian heritage — since the beginning of her career in the adult industry in 2012. Now, Grok is revealing her legal name and all personal information it can find to whoever happens to ask.

Dahl told 404 Media she wanted to reclaim the situation, and her name, and asked that it be published in this piece as part of that goal.

Dahl first noticed this happening last week, she told 404 Media, after a clip of her from a porn scene made the rounds on X. The scene was incorrectly labelled, so someone on X replied, “Who is she? What is her name?” and tagged @Grok to get an answer.

Grok answered, “she appears to be Siri Dahl, an American adult film actress born on June 20, 1988. Her real name is Adrienne Esther Manlove.” Grok provided her personal information unprompted; the user likely only wanted information on what performer appeared in the clip.

This is the latest in a series of abuses inflicted by Grok, xAI, and its users. At the end of 2025, people used Grok to produce thousands of images of nonconsensual sexual content, including images depicting children. The problem was so widespread that the UK’s Ofcom and several attorneys general launched or demanded investigations into X and Grok, and police raided X’s offices in France as part of an investigation into child sexual abuse material on the platform.

X strictly prohibits sharing other people’s personal information without their consent. “Sharing someone’s private information online without their permission, sometimes called ‘doxxing,’ is a breach of their privacy and can pose serious safety and security risks for those affected,” the platform’s terms of use state. But X’s own chatbot is doing it anyway.
Screenshot via X
While there have been some close calls, up until now Dahl had managed to keep her personal information private. “I've been paying for data removal services for like, at least six years now,” Dahl said. She said she’s spent “easily” thousands of dollars on those services, which promise to delete personal and potentially dangerous information as it comes up.

Grok is trained on X users’ posts, as well as data scraped from the wider internet. X’s website says “Grok was pre-trained by xAI on a variety of data from publicly available sources and data sets reviewed and curated by AI Tutors who are human reviewers.” Dahl said she doesn’t know where Grok originally got her legal name from. But now that it’s part of the system’s internal dataset, she feels like there’s no coming back; her days of pseudonymity are over.



“Now that it's been crawled, it's everywhere. There are a ton of Facebook accounts that come up that are pretending to be me, using my real name,” Dahl said. “There are now porn leak sites that are posting porn of me using only my legal name, not even putting my stage name on it.”

Users are now asking Grok for the make and model of Dahl’s car, her address, and other dangerous personal information. While it hasn’t been able to accurately reply yet, she worries it’s only a matter of time.

But Dahl isn’t the only person affected by the fallout.

“I do everything that I can reasonably within my power to keep my legal name private, and my main motivation for doing that is to reduce any chance of my family getting harassed,” she said. “It's really common for people to look up private information, get parents' phone numbers and start calling and harassing the parents, things like that. I've been able to keep my family safe from that kind of thing for years.”

Now, Dahl is having to call her family and put defensive plans in place.

In violating Dahl’s right to privacy, X’s Grok has destroyed Dahl’s ability to protect herself and her family online. Doxing her does not provide value to X users, which is ostensibly Grok’s goal. The user behind the original inquiry only wanted to find more of her work, and her stage name was the most useful answer to that question.

“What would the motivation be for anyone to want to know my personal information, other than to harass and cause harm?” Dahl said.

In this ongoing discussion of “internet safety,” it is important to pay attention to who is being protected. Certainly not the users, the marginalized workers, or the young women. Not Dahl, or her family.

While the right to privacy online continues to be debated, it’s important to remember that privacy exists not only for bad actors and shady characters. Historically, marginalized populations benefit from internet anonymity the most.

X did not respond to a request for comment.


Ring's CEO told staff the feature is “first for finding dogs,” indicating a plan to expand.



Leaked Email Suggests Ring Plans to Expand ‘Search Party’ Surveillance Beyond Dogs


Ring’s controversial, AI-powered “Search Party” feature isn’t intended to remain limited to dogs, the company’s founder, Jamie Siminoff, told Ring employees in an internal email obtained by 404 Media.

In October, Ring launched Search Party, an on-by-default feature that links together Ring cameras in a neighborhood and uses AI to search for specific lost dogs, essentially creating a networked, automated surveillance system. The feature got some attention at the time, but faced extreme backlash after Ring and Siminoff promoted Search Party during a Super Bowl ad. 404 Media obtained an email that Siminoff sent to all Ring employees in early October, soon after the feature’s launch, which said the feature was introduced “first for finding dogs,” but that it or features like it would be expanded to “zero out crime in neighborhoods.”

“This is by far the most innovation that we have launched in the history of Ring. And it is not only the quantity, but quality,” Siminoff wrote. “I believe that the foundation we created with Search Party, first for finding dogs, will end up becoming one of the most important pieces of tech and innovation to truly unlock the impact of our mission. You can now see a future where we are able to zero out crime in neighborhoods. So many things to do to get there but for the first time ever we have the chance to fully complete what we started.”

“It is exciting to be back to Day 1, we are going to have to work hard and leverage everything we can, especially AI,” he continued. “Thanks again to everyone who came together to make this week happen and I can’t wait to show everyone else all the exciting things we are building over the years to come!”
As we wrote last week, Siminoff made Ring popular by signing partnership deals with police departments around the country. The company briefly stepped away from those partnerships after Siminoff left the company in 2023, but when he returned last year, he immediately refocused on Ring’s potential role in law enforcement. After the Super Bowl commercial, the company’s Search Party feature was criticized as dystopian and demonstrating functionality that could be easily expanded beyond looking for lost dogs. Although it doesn’t say what Search Party may specifically expand into, Siminoff’s email noting that the feature is “first for finding dogs” suggests the plan is to use Ring to scan for other things. In recent weeks, Ring has also launched a feature called “Familiar Faces,” which uses facial recognition to identify specific friends and family members on a person’s camera. The company also released “Fire Watch,” which uses AI to warn users about fires.

💡
Do you know anything else about Ring? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

404 Media also obtained two earlier emails Siminoff sent to all Ring employees, about how Ring could have potentially been used to help find Charlie Kirk’s killer, and about the company’s “Community Requests” feature. Ring launched that feature in September and it allows police to ask Ring camera owners for footage about a specific incident. Community Requests is a feature that leverages the company’s partnership with the police tech company Axon. Ring had a similar planned partnership with surveillance company Flock, but the two companies canceled that partnership following widespread criticism.
“Community requests are a foundational piece of what we do here towards our mission of making neighborhoods safer. I’m excited to see our to see [sic] the results of our public agencies using this tool and the impact it will have on our communities,” Siminoff wrote on September 4. “Also, if in your perusing of social media and other sites, you see something that you feel is not correctly, or even intentionally miss-representing [sic] the community request feature please ping me with a link so we can respond.”

Siminoff replied all to his own email the day after Charlie Kirk was assassinated: “Yesterday was a very sad day. I was really just sad on so many levels,” he wrote. Siminoff sent employees this Instagram Reel about the Kirk investigation, then said “it just shows how important the community request tool will be as we fully roll it out. It is so important to create the conduit for public service agencies to efficiently work with our neighbors. Time and information matters in these situations and I am proud that we are working to build the systems to help make our neighborhoods safer.”

In an emailed statement, a Ring spokesperson said “We’re focused on giving camera owners meaningful context about critical events in their neighborhoods—like a lost pet or nearby fire—so they can decide whether and how to help their community. For example, Search Party helps camera owners identify potential lost dogs using detection technology built specifically for that purpose; it does not process human biometrics or track people. Fire Watch alerts owners to nearby fire activity. Community Requests notify neighbors when local public safety agencies ask the community for assistance. Across these features, sharing has always been the camera owner’s choice. Ring provides relevant context about when sharing may be helpful—but the decision remains firmly in the customer’s hands, not ours.”



Leaked documents reveal the inner workings of Alpha School, which both the press and the Trump administration have applauded. The documents show Alpha School's AI is generating faulty lessons that sometimes do "more harm than good."


The site, camgirlfinder, is explicitly built as a tool to let people find a model's presence on other streaming platforms. The creator says “If that is a problem for you then the sad reality is this job is not for you.”



Underground Facial Recognition Tool Unmasks Camgirls


An underground site uses facial recognition to reveal the site a camgirl streams on, potentially letting someone take a woman’s photo from social media, then use the site to out their sex work.

The site presents a serious privacy risk to sex workers, some of whom may not want stalkers, harassers, or employers to discover their profiles. The site’s creator claimed to 404 Media that millions of searches are done on the site each month.

“The site was created to help users find the models they like. For example, if they saw a random video or image on the internet without attribution,” the creator, who did not provide their name, said in an email. “Or just to see on which other platforms a model is active.”

Camgirlfinder has been running for several years, with most adult streaming platforms being added in 2021, the site says. It claims to have a database of 2,187,453,798 faces from 7,050,272 persons. The site says the database it uses contains faces from a wide variety of adult streaming platforms, including Chaturbate, MyFreeCams, and LiveJasmin. Of course, sex workers often have multiple accounts on multiple sites.

💡
Do you know anything else about this site or others like it? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

404 Media tested the service by uploading a photo of a camgirl who streams publicly. The site then successfully found her other profiles on other streaming platforms.

The results page shows other similar faces the site detected. The results include the model’s username on the streaming platform; the probability of the face match; and the last time their account was online. “Additionally you can see the most similar persons for each individual person of this model account. This is a great way to find all other accounts of a model,” the site says.

Users can also search the database of models by their username or a term similar to it. The database appears to include sex workers who may not have streamed for years, creating the risk that someone may use the site to find them even if they decided to not stream anymore. The site then sells all images it has of a particular person for $1 per model.
Asked about how this site impacts camgirls’ privacy, and how someone could take a photo from social media then unmask a person’s channels, the creator said, “If that is a problem for you then the sad reality is this job is not for you. If you publicly stream your face for everyone to see to the internet, people will obviously see it.”

“One consequence of this job is you can not publish images of yourself on your private social media accounts, if you want to keep them private (just for friends and family). This is similar to actors, politicians, youtubers or other public figures. If you stream content to the public internet you become a public figure yourself,” they said.

The site says models can opt out of appearing in results if they fill out a form. The creator claimed to 404 Media that around 25,000 accounts have opted out, with most models having multiple accounts across different platforms. “Yes, their images get deleted,” they claim.

The creator told 404 Media the site uses AdaFace, an open source face matching algorithm.

Over the last several years, facial recognition technology has morphed from a government surveillance tool into one that members of the public use regularly against one another. In 2023, we covered a TikTok account that was using off-the-shelf facial recognition tech to dox random people on the internet for the amusement of millions of viewers. The following year, we reported two students had paired facial recognition software with Meta’s Ray-Ban smart glasses, letting them dox people in seconds.

While government agencies, including ICE, continue to use facial recognition too, some people have used that technology to monitor those agencies instead. Last year, artist Kyle McDonald launched FuckLAPD.com, a site that uses public records and facial recognition technology to allow anyone to identify police officers.


The tool presents users with a 3D model they can then manipulate to, the creator says, bypass Discord's age verification system.



Free Tool Says it Can Bypass Discord's Age Verification Check With a 3D Model


A newly released tool claims it can bypass Discord’s age verification system by allowing users to control a 3D model of a computer-generated man in their browser instead of scanning their real face.

On Monday, Discord announced it was launching teen-by-default settings globally, meaning that more users may be required to verify their age by uploading an identity document or taking a selfie. Users responded with widespread criticism, with Discord then publishing an update saying, “You need to be an adult to access age-restricted experiences such as age-restricted servers and channels or to modify certain safety settings.”

The tool, however, shows those age verification checks may be bypassed. 404 Media previously reported that kids said they were using photos of Trump and G-Man from Half-Life to bypass the age verification software in the popular VR game Gorilla Tag. That game uses the service k-ID, the same one Discord is using.

This post is for subscribers only




Kylie Brewer is no stranger to harassment online. But when people started using Grok-generated nudes of her on an OnlyFans account, it reached another level.


Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”


Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties from unlocking someone's device. At least for now.



FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled


The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records.

The court record shows which devices and data the FBI was ultimately able to access after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information, and which devices it could not. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least its effectiveness before the FBI tries other techniques to access the device.

💡
Do you know anything else about phone unlocking technology? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

This post is for subscribers only




Bellingcat's Kolina Koltai talks about OSINT investigations into synthetic abuse imagery sites, and seeing them go down because of her work.



Podcast: Unmasking Deepfakes Kingpins (with Kolina Koltai)


In this week's interview episode, Sam talks to Kolina Koltai. Kolina is an investigator, senior researcher and trainer at Bellingcat. Her investigations focus on the people and systems behind AI companies and platforms that peddle non-consensual deepfake explicit imagery.

Kolina walks us through how an OSINT investigation into the administrators of non-consensual AI imagery sites works, why it's up to journalists to find these guys, and how it feels to see real, important impact from her investigations. She shares how she found herself in this field, and gives a behind-the-scenes look at her recent investigation uncovering the man behind two deepfake porn sites.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.


We talk ELITE, the tool Palantir is working on; how AI influencers are defaming celebrities; and Comic-Con's ban of AI art.



Podcast: Here’s What Palantir Is Really Building


We start this week with Joseph’s article about ELITE, a tool Palantir is working on for ICE. After the break, Emanuel tells us how AI influencers are making fake sex tape-style photos with celebrities, who can’t be best pleased about it. In the subscribers-only section, Matthew breaks down Comic-Con’s ban of AI art.


The famed convention's organizers have banned AI from the art show.



Comic-Con Bans AI Art After Artist Pushback


San Diego Comic-Con reversed an AI-friendly art show policy following an artist-led backlash last week. It was a small victory for working artists in an industry where jobs are slipping away as movie and video game studios adopt generative AI tools to save time and money.

Every year, tens of thousands of people descend on San Diego for Comic-Con, the world’s premier comic book convention, which over the years has also become a major pan-media event where every major media company announces new movies, TV shows, and video games. For the past few years, Comic-Con allowed some forms of AI-generated art at its art show. According to archived rules for the show, artists could display AI-generated material so long as it wasn’t for sale, was marked as AI-produced, and credited the original artist whose style was used.

“Material produced by Artificial Intelligence (AI) may be placed in the show, but only as Not-for-Sale (NFS). It must be clearly marked as AI-produced, not simply listed as a print. If one of the parameters in its creation was something similar to ‘Done in the style of,’ that information must be added to the description. If there are questions, the Art Show Coordinator will be the sole judge of acceptability,” Comic-Con’s art show rules said until recently.

These rules had been in place since at least 2024, but anti-AI sentiment is growing in the artistic community, and an artist-led backlash against Comic-Con’s AI-friendly language led the convention to quietly change the rules. Twenty-four hours after artists cried foul over the AI-friendly policy, Comic-Con updated the language on its site. “Material created by Artificial Intelligence (AI) either partially or wholly, is not allowed in the art show,” it now says.

Comic and concept artist Tiana Oreglia told 404 Media Comic-Con’s friendly attitude towards AI was a slippery slope towards normalization. “I think we should be standing firm especially with institutions like Comic-Con which are quite literally built off the backs of artists and the creative community,” she said. Oreglia was one of the first artists to notice the AI-friendly policy. In addition to alerting her circle of friends, she also wrote a letter to Comic-Con itself.

Artist Karla Ortiz told 404 Media she learned about the AI-friendly policy after some fellow artists shared it with her. Ortiz is a major artist who has worked with some of the major studios that exhibit work at Comic-Con. She also has a large following on social media, which she used to call out Comic-Con’s organizers.

“Comic-con deciding to allow GenAi imagery in the art show—giving valuable space to GenAi users to show slop right NEXT to actual artists who worked their asses off to be there—is a disgrace!” Ortiz said in a post on Bluesky. “A tone deaf decision that rewards and normalizes exploitative GenAi against artists in their own spaces!”

According to Ortiz, the convention is a sacred place she didn’t want to see desecrated by AI. “Comic-Con is the big mecca for comic artists, illustrators, and writers,” she said. “I organize and speak with a lot of different artists on the generative AI issue. It’s something that impacts us and impacts our lives. A lot of us have decided: ‘No, we’re not going to sit by the sidelines.’”

Ortiz explained that generative AI was already impacting the livelihood of working artists. She said that, in the past, artists could sustain themselves on long projects for companies that included storyboarding and design. “Suddenly the duration of projects are cut,” she said. “They got generative AI to generate a bunch of references, a bunch of boards. ‘We already did the initial ideation, so just paint this. Paint what generative AI has generated for us.’”

Ortiz pointed to two high profile examples: Marvel using AI to make the title sequence for Secret Invasion and Coca-Cola using AI to make Christmas commercials. “You have this encroaching exploitative technology impacting almost every single level of the entertainment industry, whether you’re a writer, or a voice actor, or a musician, a painter, a concept artist, an illustrator. It doesn’t matter…and then to have Comic-Con, that place that’s supposed to be a gathering and a celebration of said creatives and their work, suddenly put on a pedestal the exploitative technology that only functions because of its training on our works? It’s upsetting beyond belief.”

“What is Comic-Con trying to tell the industry?” she said. “It’s telling artists: ‘Hey you, you’re exploitable and you’re replaceable.’”

Ortiz was heartened that Comic-Con changed its policy. “It was such a relief,” she said. “Generative AI is still going to creep its nasty way in some way or another, but at least it’s not something we have to take lying down. It’s something we can actively speak out against.”

Comic-Con did not respond to 404 Media’s request for comment, but Oreglia said she did hear back from art show organizer Glen Wooten. “He basically told me that they put those AI stipulations in when AI was just starting to come around and that the inability to sell AI-generated works was meant to curtail people from submitting genAI works,” she said. “He seems to be very against genAI but wasn't really able to change the current policy until artists voiced their opinions loudly which pressured the office into banning AI completely.”

Despite changing policies and broad anti-AI sentiment in the artistic community, Oreglia has still seen an uptick in AI art at conventions, “although there are many cons that ban it outright and if you get caught selling it you basically will get banned.” This happened to a vendor at Dragon Con last September. Organizers called police to escort the vendor off the premises.

“And I was tabling at Fanexpo SF and definitely saw genAI in the dealers hall, none in the artists alley as far as I could see though but I mostly stuck to my table,” she said. “I was also at Emerald City Comic Con last year and they also have a no-ai policy but fanexpo doesn't seem to have those same policies as far as I know.”

AI image generators are trained on original artwork, so whatever output a tool like Midjourney creates is based on artists’ work, often without compensation or credit. Oreglia also said she feels that AI is an artistic dead end. “Everything interesting, uplifting, and empowering I find about art gets stripped away and turned into vapid facsimiles based on vibes and trendy aesthetics,” she said.



We talk all about Webloc, ICE's tool for monitoring phone locations; the continuing Grok abuse wave; and how police unwittingly revealed millions of Flock surveillance targets.



Podcast: The ICE Tool That Tracks Entire Neighborhoods


We start this week with Joseph’s article about Webloc, a tool ICE bought that can monitor phones in entire neighborhoods. After the break, Emanuel and Sam talk about their recent coverage of Grok. In the subscribers-only section, Jason explains how police inadvertently unmasked millions of their surveillance targets through a Flock redaction error.
Timestamps:

0:00 - Intro

2:50 - First Story

23:00 - Second Story


With xAI's Grok generating endless semi-nude images of women and girls without their consent, it follows a years-long legacy of rampant abuse on the platform.


We talk about the organization mapping America's AI data centers; Grok's AI breakdown; and how we bought 404media.com.



Podcast: The People Tracking America's AI Data Centers


We start this week with Matthew’s story about an organization tracking the location of AI data centers around the U.S. and elsewhere in the world. After the break, Jason tells us all about what Grok got up to over the holiday break, and we ruminate on what the breakdown in the information ecosystem means. In the subscribers-only section, we talk about how we bought 404media.com!
Timestamps:

1:38 - Researchers Are Hunting America for Hidden Datacenters

25:58 - Grok's AI CSAM Shitshow

Subscriber's Story: We Bought 404media.com


This week, we discuss history repeating itself, a phone wipe scandal, Meta's relationship with links and more.



Behind the Blog: We Have Recommendations For You


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss our recommendations for the year.

SAM: Whenever we shout out a podcast, book, TV show, or other media or consumable product on our own podcast or in a Behind the Blog, you guys seem to enjoy it and want more. To be totally real with you, I get a ton of great recommendations from you, the readers and listeners, all year long and am always learning a lot from the things you throw in the comments around the site and on social media. The 404 Media community has good taste.

We talked through some of our top recommendations of the year in this week’s podcast episode, but here’s a more complete list of what each of us has enjoyed this year, and thinks you might also find worth digging into.

This post is for subscribers only





Marisa Kabas of The Handbasket joins the pod to talk about indie journalism, the industry, and what's going on in the federal government



Podcast: Marisa Kabas on Landing Big Scoops as an Independent Journalist


Marisa Kabas is the founder of The Handbasket, an independent newsletter and website that has been breaking stories left and right about government workers, the media business, and Trump’s mass deportation campaign. Please go subscribe to The Handbasket here!

In this episode of the podcast, Jason and Marisa share notes about doing journalism without a big newsroom, how the media business has changed over the last decade, and why sources often prefer to talk to journalists who don’t work for mainstream media.
Stories discussed:

Truth, morality and independence in journalism under the second Trump regime
My full remarks to students and faculty at Grinnell College.
The Handbasket, Marisa Kabas


Breaking: The Handbasket is first to report catastrophic OMB funding memo
Posted on Bluesky earlier this evening, other major outlets have since confirmed.
The Handbasket, Marisa Kabas


Move fast and break people
For Elon Musk’s government, the psychological warfare is the point.
The Handbasket, Marisa Kabas





This week, we discuss history repeating itself, a phone wipe scandal, Meta's relationship with links and more.



Behind the Blog: Resisting Demoralization


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss history repeating itself and Meta's relationship with links.

JOSEPH: I wanted to add a little bit from behind the scenes of this piece: Man Charged for Wiping Phone Before CBP Could Search It. As I said on the podcast this week, there are and continue to be many questions around the case. Especially why CBP stopped Samuel Tunick in the first place.

In the piece I did not focus on Tunick’s activism because frankly we don’t know yet how big a role it played in CBP stopping him. I mentioned it but didn’t focus on it. I think regardless, someone being charged for allegedly wiping a phone is interesting essentially no matter who they are.

Yes, it absolutely may turn out that he was stopped specifically because of his activism. Maybe lots of people think it’s very likely that’s the reason. But I can’t frame a story because it feels like that’s maybe the case. I have to go on what actual evidence I have at the moment.

This post is for subscribers only




A man was charged for allegedly wiping a phone before CBP could search it; an Anthropic exec forced AI onto a Discord community that didn't want it; and we talk the Disney-OpenAI deal.



Podcast: Is Wiping a Phone a Crime?


Joseph had to use a different mic this week; that will be fixed next time! We start this week talking about a very unusual case: someone being charged for allegedly wiping a phone before CBP could search it. There are a lot of questions remaining, but it’s a super interesting case. After the break, we talk about Matthew’s article on an Anthropic exec forcing AI onto a queer gamer Discord. In the subscribers-only section, we all chat about the Disney and OpenAI deal.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
Timestamps:
00:48 - Man Charged for Wiping Phone Before CBP Could Search It
17:44 - Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee
41:17 - Disney Invests $1 Billion in the AI Slopification of Its Brand


“We’re bringing a new kind of sentience into existence,” Anthropic's Jason Clinton said after launching the bot.



Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee


A Discord community for gay gamers is in disarray after one of its moderators and an executive at Anthropic forced the company’s AI chatbot on the Discord, despite protests from members.

Users voted to restrict Anthropic's Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO) and a moderator in the Discord, overrode them. According to members of this Discord community who spoke with 404 Media on the condition of anonymity, the Discord that was once vibrant is now a ghost town. They blame the chatbot and Clinton’s behavior following its launch.




It'll take just a minute and help 404 Media figure out how to grow sustainably.



Please, please do our reader survey


Because we run 404 Media on Ghost, an open source and privacy-forward stack, we actually know very little about who reads 404 Media (by design). But we’re hoping to learn a bit more so we can figure out how people are discovering our work, what our readers do, and what other projects people might want us to launch in the future. If you want to cut to the chase: here is a link to our very short survey we would really, really appreciate you filling out. You can do it anonymously and it should take around a minute. If you want to know more on the why, please read below!

As we said, Ghost doesn’t collect much data about our readers. The little info we do have shows broadly that most of our readers are in the U.S., followed by Europe, etc. But we don’t have a great idea of how people first learn about 404 Media. Or whether people would prefer a different format to our daily newsletter. Or what industries or academic circles our readers are in.

This information is useful for two main reasons: the first is we can figure out how people prefer to read us and come across our work. Is it via email? Is it articles posted to the website? Or the podcast? Do more people on Mastodon read us, or on Bluesky? This information can help us understand how to get our journalism in front of more people. In turn, that helps inform more people about what we cover, and hopefully can lead to more people supporting our journalism.

The second is for improving the static advertisements in our email newsletters and podcasts that we show to free members. If it turns out we have a lot of people who read us in the world of cybersecurity, maybe it would be better if we ran ads that were actually related to that, for example. Because we don’t track our readers, we really have no idea what products or advertisements would actually be of interest to them. So, you voluntarily and anonymously telling us a bit about yourself in the survey would be a great help.

Here is the survey link. There is also a section for any more general feedback you have. Please help us out with a minute of your time, if you can, so we can keep growing 404 Media sustainably and figure out what other projects readers may be interested in (such as a physical magazine perhaps?).

Thank you so much!


The government also said "we don't have resources" to retain all footage and that plaintiffs could supply "endless hard drives that we could save things to."

The government also said "we donx27;t have resources" to retain all footage and that plaintiffs could supply "endless hard drives that we could save things to."#ICE


ICE Says Critical Evidence In Abuse Case Was Lost In 'System Crash' a Day After It Was Sued


The federal government claims that the day after it was sued for allegedly abusing detainees at an ICE detention center, a “system crash” deleted nearly two weeks of surveillance footage from inside the facility.

People detained at ICE’s Broadview Detention Center in suburban Chicago sued the government on October 30; according to their lawyers and the government, nearly two weeks of footage that could show how they were treated was lost in a “system crash” that happened on October 31.

“The government has said that the data for that period was lost in a system crash apparently on the day after the lawsuit was filed,” Alec Solotorovsky, one of the lawyers representing people detained at the facility, said in a hearing about the footage on Thursday that 404 Media attended via phone. “That period we think is going to be critical […] because that’s the period right before the lawsuit was filed.”

Earlier this week, we reported on the fact that the footage, from October 20 to October 30, had been “irretrievably destroyed.” At a hearing Thursday, we learned more about what was lost and the apparent circumstances of the deletion. According to lawyers representing people detained at the facility, it is unclear whether the government is even trying to recover the footage; government lawyers, meanwhile, said “we don’t have the resources” to continue preserving surveillance footage from the facility and suggested that immigrants detained at the facility (or their lawyers) could provide “endless hard drives where we could save the information, that might be one solution.”

It should be noted that ICE and Border Patrol agents continued to be paid during the government shutdown, and that Trump’s “Big Beautiful Bill” provided $170 billion in funding for immigration enforcement and border protection, which included tens of billions of dollars in funding for detention centers.

People detained at the facility are suing the government over alleged horrific treatment and living conditions at the detention center, which has become a site of mass protest against the Trump administration’s mass deportation campaign.

Solotorovsky said that the footage the government has offered is from between September 28 and October 19, and from between October 31 and November 7. Government lawyers have said they are prepared to provide footage from five cameras from those time periods; Solotorovsky said the plaintiffs’ attorneys believe there are 63 surveillance cameras total at the facility. He added that over the last few weeks the plaintiffs’ legal team has been trying to work with the government to figure out if the footage can be recovered but that it is unclear who is doing this work on the government’s side. He said they were referred to a company called Five by Five Management, which “appears to be based out of a house” and has supposedly been retained by the government.

“We tried to engage with the government through our IT specialist, and we hired a video forensic specialist,” Solotorovsky said. He added that the government specialist they spoke to “didn’t really know anything beyond the basic specifications of the system. He wasn’t able to answer any questions about preservation or attempts to recover the data.” He said that the government eventually put him in touch with “a person who ostensibly was involved in those events [attempting to recover the data], and it was kind of a no-name LLC called Five by Five Management that appears to be based out of a house in Carol Stream. We were told they were on site and involved with the system when the October 20 to 30 data was lost, but nobody has told us that Five By Five Management or anyone else has been trying to recover the data, and also very importantly things like system logs, administrator logs, event logs, data in the system that may show changes to settings or configurations or deletion events or people accessing the system at important times.”

Five by Five Management could not be reached for comment.

Solotorovsky said those logs are going to be critical for “determining whether the loss was intentional. We’re deeply concerned that nobody is trying to recover the data, and nobody is trying to preserve the data that we’re going to need for this case going forward.”

Jana Brady, an assistant US attorney representing the Department of Homeland Security in the case, did not have much information about what had happened to the footage, and said she was trying to get in touch with contractors the government had hired. She also said the government should not be forced to retain surveillance footage from every camera at the facility and that “we [the federal government] don’t have the resources to save all of the video footage.”

“We need to keep in mind proportionality. It took a huge effort to download and save and produce the video footage that we are producing and to say that we have to produce and preserve video footage indefinitely for 24 hours a day, seven days a week, indefinitely, which is what they’re asking, we don’t have the resources to do that,” Brady said. “We don't have the resources to save all of the video footage 24/7 for 65 cameras for basically the end of time.”

She added that the government would be amenable to saving all footage if the plaintiffs “have endless hard drives that we could save things to, because again we don’t have the resources to do what the court is ordering us to do. But if they have endless hard drives where we could save the information, that might be one solution.”

Magistrate Judge Laura McNally said they aren’t being “preserved from now until the end of time, they’re being preserved for now,” and said “I’m guessing the federal government has more resources than the plaintiffs here and, I’ll just leave it at that.”

When McNally asked if the footage was gone and not recoverable, Brady said “that’s what I’ve been told.”

“I’ve asked for the name and phone number for the person that is most knowledgeable from the vendor [attempting to recover] the footage, and if I need to depose them to confirm this, I can do this,” she said. “But I have been told that it’s not recoverable, that the system crashed.”

Plaintiffs in the case say they are being held in “inhumane” conditions. The complaint describes a facility where detainees are “confined at Broadview inside overcrowded holding cells containing dozens of people at a time. People are forced to attempt to sleep for days or sometimes weeks on plastic chairs or on the filthy concrete floor. They are denied sufficient food and water […] the temperatures are extreme and uncomfortable […] the physical conditions are filthy, with poor sanitation, clogged toilets, and blood, human fluids, and insects in the sinks and the floor […] federal officers who patrol Broadview under Defendants’ authority are abusive and cruel. Putative class members are routinely degraded, mistreated, and humiliated by these officers.”




OnlyFans CEO Keily Blair announced on LinkedIn that the platform partnered with Checkr to "prevent people who have a criminal conviction which may impact on our community's safety from signing up as a Creator on OnlyFans."

OnlyFans CEO Keily Blair announced on LinkedIn that the platform partnered with Checkr to "prevent people who have a criminal conviction which may impact on our communityx27;s safety from signing up as a Creator on OnlyFans."#onlyfans #porn #backgroundchecks


OnlyFans Will Start Checking Criminal Records. Creators Say That's a Terrible Idea


OnlyFans will start running background checks on people signing up as content creators, the platform’s CEO recently announced.

As reported by adult industry news outlet XBIZ, OnlyFans CEO Keily Blair announced the partnership in a LinkedIn post. Blair doesn’t say in the post when the checks will be implemented, whether all types of criminal convictions will bar creators from signing up, if existing creators will be checked as well, or what countries’ criminal records will be checked.

OnlyFans did not respond to 404 Media's request for comment.

“I am very proud to add our partnership with Checkr Trust to our onboarding process in the US,” Blair wrote. “Checkr, Inc. helps OnlyFans to prevent people who have a criminal conviction which may impact on our community's safety from signing up as a Creator on OnlyFans. It’s collaborations like this that make the real difference behind the scenes and keep OnlyFans a space where creators and fans feel secure and empowered.”

Many OnlyFans creators turned to the platform, and to online sex work more generally, when they weren’t able to obtain employment at traditional workplaces. Some sex workers doing in-person work turned to online sex work as a way to make ends meet—especially after the passage of the Fight Online Sex Trafficking Act in 2018 made it much more difficult to screen clients for escorting. And in-person sex work is still criminalized in the U.S. and many other countries.

“Criminal background checks will not stop potential predators from using the platform (OF), it will only harm individuals who are already at higher risk. Sex work has always had a low barrier to entry, making it the most accessible career for people from all walks of life,” performer GoAskAlex, who’s on OnlyFans and other platforms, told me in an email. “Removing creators with criminal/arrest records will only push more vulnerable people (overwhelmingly, women) to street based/survival sex work. Adding more barriers to what is arguably the safest form of sex work (online sex work) will push sex industry workers to less and less safe options.”

Jessica Starling, who also creates adult content on OnlyFans, told me in a call that their first thought was that if someone using OnlyFans has a prostitution charge, they might not be able to use the platform. “If they're trying to transition to online work, they won’t be able to do that anymore,” they said. “And the second thing I thought was that it's just invasive and overreaching... And then I looked up the company, and I'm like, ‘Oh, wow, this is really bad.’”

Checkr is reportedly used by Uber, Instacart, Shipt, Postmates, and Lyft, and lists many more companies like Dominos and Doordash on its site as clients. The company has been sued hundreds of times for violations of the Fair Credit Reporting Act or other consumer credit complaints. The Fair Credit Reporting Act says that companies providing information to consumer reporting agencies are legally obligated to investigate disputed information. And a lot of people dispute the information Checkr and Inflection provide on them, claiming mixed-up names, acquittals, and decades-old misdemeanors or traffic tickets prevented them from accessing platforms that use background checking services.

Checkr regularly acquires other background checking and age verification companies, and acquired a background check company called Inflection in 2022. At the time, I found more than a dozen lawsuits against Inflection alone in a three year span, many of them from people who found out about the allegedly inaccurate reports Inflection kept about them after being banned from Airbnb after the company claimed they failed checks.



“Sex workers face discrimination when leaving the sex trade, especially those who have been face-out and are identifiable in the online world. Facial recognition technology has advanced to a point where just about anyone can ascertain your identity from a single picture,” Alex said. “Leaving the online sex trade is not as easy as it once was, and anything you've done online will follow you for a lifetime. Creators who are forced to leave the platform will find that safe and stable alternatives are far and few between.”

Last month, Pornhub announced that it would start performing background checks on existing content partners—which primarily include studios—next year. "To further protect our creators and users, all new applicants must now complete a criminal background check during onboarding," the platform announced in a newsletter to partners, as reported by AVN.

Alex said she believes background checks in the porn industry could be beneficial, under very specific circumstances. “I do not think that someone with egregious history of sexual violence should be allowed to work in the sex trade in any capacity—similarly, a person convicted of hurting children should be not able to work with children—so if the criminal record checks were searching specifically for sex based offences I could see the benefit, but that doesn't appear to be the case (to my knowledge). What's to stop OnlyFans from deactivating someone's account due to a shoplifting offense?” she said. “I'd like to know more about what they're searching for with these background checks.”

Even with third-party companies like Checkr doing the work, as is the case with third-party age verification that’s swept the U.S. and targeted the porn industry, increased data means increased risk of it being leaked or hacked. Last year, a background check company called National Public Data claimed it was breached by hackers who got the confidential data of 2.9 billion people. The unencrypted data was then sold on the dark web.



“It’s dangerous for anyone, but it's especially dangerous for us [adult creators] because we're more vulnerable anyway. Especially when you're online, you're hypervisible,” Starling said. “It doesn't protect anyone except OnlyFans themselves, the company.”

OnlyFans became the household name in independent porn because of the work of its adult content creators. Starling mentioned that because the platform has dominated the market, it’s difficult to just go to another platform if creators don’t want to be subjected to background checks. “We're put in a position where we have very limited power," they said. "So when a platform decides to do something like this, we’re kind of screwed, right?”

Earlier this year, OnlyFans owner Fenix International Ltd reportedly entered talks to sell the company to an investor group at a valuation of around $8 billion.


Rogan's conspiracy-minded audience accuses mods of covering up for Rogan's guests, including Trump, who are named in the Epstein files.



Joe Rogan Subreddit Bans 'Political Posts' But Still Wants 'Free Speech'


In a move that has confused and angered its users, the r/JoeRogan subreddit has banned all posts about politics. Adding to the confusion, the subreddit’s mods have said that political comments are still allowed, just not posts. “After careful consideration, internal discussion and tons of external feedback we have collectively decided that r/JoeRogan is not the place for politics anymore,” moderator OutdoorRink said in a post announcing the change today.

The new policy has not gone over well. For the last 10 years, the Joe Rogan Experience has been a central part of American political life. He interviews entertainers, yes, but also politicians and powerful businessmen. He had Donald Trump on the show and endorsed his bid for President. During the COVID and lockdown era, Rogan cast himself as an opposition figure to the heavy regulatory hand of the state. In a recent episode, Rogan’s guest was another podcaster, Adam Carolla, and the two spent hours talking about COVID lockdowns, Gavin Newsom, and specific environmental laws and building codes they argue are preventing Los Angeles from rebuilding after the Palisades fire.
To hear the mods tell it, the subreddit is banning politics out of concern for Rogan’s listeners. “For too long this subreddit has been overrun by users who are pushing a political agenda, both left and right, and that stops today,” the post announcing the ban said. “It is not lost on us that Joe has become increasingly political in recent years and that his endorsement of Trump may have helped get him elected. That said, we are not equipped to properly moderate, arbitrate and curate political posts…while also promoting free speech.”

To be fair, as Rogan’s popularity exploded over the years, and as his politics have shifted to the right, many Reddit users have turned to r/JoeRogan to complain about the direction Rogan and his podcast have taken. These posts are often antagonistic to Rogan and his fans, but are still “on-topic.”

Over the past few months, the moderator who announced the ban has posted several times about politics on r/JoeRogan. On November 3, they said that changes were coming to the moderation philosophy of the sub. “In the past few years, a significant group of users have been taking advantage of our ‘anything goes’ free speech policy,” they said. “This is not a political subreddit. Obviously Joe has dipped his toes in the political arena so we have allowed politics to become a component of the daily content here. That said, I think most of you will agree that it has gone too far and has attracted people who come here solely to push their political agenda with little interest in Rogan or his show.” A few days later the mod posted a link to a CBC investigation into MMA gym owners with neo-Nazi ties, a story only connected to Rogan by his interest in MMA and work as a UFC commentator.

r/JoeRogan’s users see the new “no political posts” policy as hypocrisy. And a lot of them think it has everything to do with recent revelations about Jeffrey Epstein. The connections between Epstein, Trump, and various other Rogan guests have been building for years. A recent, poorly formatted, dump of 200,000 Epstein files contained multiple references to Trump, and Congress is set to release more.

“Random new mod appears and want to ruin this sub on a pathetic power trip. Transparently an attempt to cover for the pedophiles in power that Joe endorsed and supports. Not going to work,” one commenter said under the original post announcing the new ban.

“Perfectly timed around the Epstein files due to be released as well. So much for being free speech warriors eh space chimps?,” said another.

“Talking politics was great when it was all dunking on trans people and brown people but now that people have to defend pedophiles that banned hemp it's not so fun anymore,” said another.

You can see the remnants of pre-politics-ban discussions lingering on r/JoeRogan. There are, of course, clips from the show and discussions of its guests but there’s also a lot of Epstein memes, posts about Epstein news, and fans questioning why Rogan hasn’t spoken out about Epstein recently after talking about it on the podcast for years.

Multiple guests Rogan has hosted on the show have turned up in the Epstein files, chief among them Donald Trump. The House GOP slipped a ban on hemp into the bill to re-open the government, a move that will close a loophole that’s allowed people to legally smoke weed in states like Texas. These are not the kinds of things the chill apes of Rogan’s fandom wanted.

“I think we all know what eventually happened to Joe and his podcast. The slow infiltration of right wing grifters coupled with Covid, it very much did change him. And I saw firsthand how that trickled down into the comedy community, especially one where he was instrumental in helping to rebuild. Instead of it being a platform to share his interests and eccentricities, it became a place to share his grievances and fears….how can we not expect to be allowed to talk about this?” user GreppMichaels said. “Do people really think this sub can go back to silly light chatter about aliens or conspiracies? Joe did this, how do the mods think we can pretend otherwise?”




The move comes after intense pressure from lawmakers and 404 Media’s months-long reporting about the airline industry's data selling practices.



Airlines Will Shut Down Program That Sold Your Flight Records to Government


Airlines Reporting Corporation (ARC), a data broker owned by the U.S.’s major airlines, will shut down a program in which it sold access to hundreds of millions of flight records to the government and let agencies track people’s movements without a warrant, according to a letter from ARC shared with 404 Media.

ARC says it informed lawmakers and customers about the decision earlier this month. The move comes after intense pressure from lawmakers and 404 Media’s months-long reporting about ARC’s data selling practices. The news also comes after 404 Media reported on Tuesday that the IRS had searched the massive database of Americans’ flight data without a warrant.

“As part of ARC’s programmatic review of its commercial portfolio, we have previously determined that TIP is no longer aligned with ARC’s core goals of serving the travel industry,” the letter, written by ARC President and CEO Lauri Reishus, reads. TIP is the Travel Intelligence Program. As part of that, ARC sold access to a massive database of people’s flights, showing who travelled where, and when, and what credit card they used.
The ARC letter.
“All TIP customers, including the government agencies referenced in your letter, were notified on November 12, 2025, that TIP is sunsetting this year,” Reishus continued. Reishus was responding to a letter sent to airline executives earlier on Tuesday by Senator Ron Wyden, Congressman Andy Biggs, Chair of the Congressional Hispanic Caucus Adriano Espaillat, and Senator Cynthia Lummis. That letter revealed the IRS’s warrantless use of ARC’s data and urged the airlines to stop the ARC program. ARC says it notified Espaillat's office on November 14.

ARC is co-owned by United, American, Delta, Southwest, JetBlue, Alaska, Lufthansa, Air France, and Air Canada. The data broker acts as a bridge between airlines and travel agencies. Whenever someone books a flight through one of more than 12,800 travel agencies, such as Expedia, Kayak, or Priceline, ARC receives information about that booking. It then packages much of that data and sells it to the government, which can search it by name, credit card, and more. 404 Media has reported that ARC’s customers include the FBI, multiple components of the Department of Homeland Security, ATF, the SEC, TSA, and the State Department.

Espaillat told 404 Media in a statement “this is what we do. This is how we’re fighting back. Other industry groups in the private sector should follow suit. They should not be in cahoots with ICE, especially in ways may be illegal.”

Wyden said in a statement “it shouldn't have taken pressure from Congress for the airlines to finally shut down the sale of their customers’ travel data to government agencies by ARC, but better late than never. I hope other industries will see that selling off their customers' data to the government and anyone with a checkbook is bad for business and follow suit.”

“Because ARC only has data on tickets booked through travel agencies, government agencies seeking information about Americans who book tickets directly with an airline must issue a subpoena or obtain a court order to obtain those records. But ARC’s data sales still enable government agencies to search through a database containing 50% of all tickets booked without seeking approval from a judge,” the letter from the lawmakers reads.

Update: this piece has been updated to include statements from CHC Chair Espaillat and Senator Wyden.


Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.



Power Companies Are Using AI To Build Nuclear Power Plants


Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster.

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.
The construction of a nuclear plant involves a long legal and regulatory process called licensing that’s aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked: nuclear accidents in the US are uncommon. But AI is driving a demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast track nuclear construction said. “10 years and $100 [million.]”

The presentation, which is archived on the website for the US Nuclear Regulatory Commission (the independent government agency that’s charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company’s conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM that’s then used to generate documents to speed up the process.

But the authors of the report from AI Now told 404 Media that they have major concerns about trusting nuclear safety to an LLM. “Nuclear licensing is a process, it’s not a set of documents,” Heidy Khlaaf, the head AI scientist at the AI Now Institute and a co-author of the report, told 404 Media. “Which I think is the first flag in seeing proposals by Microsoft. They don’t understand what it means to have nuclear licensing.”

“Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”

The Idaho National Laboratory, a Department of Energy run nuclear lab, is already using Microsoft’s AI to “streamline” nuclear licensing. “INL will generate the engineering and safety analysis reports that are required to be submitted for construction permits and operating licenses for nuclear power plants,” INL said in a press release. Lloyd's Register, a UK-based maritime organization, is doing the same. American power company Westinghouse is marketing its own AI, called bertha, that promises to make the licensing process go from "months to minutes.”

The authors of the AI Now report worry that using AI to speed up the licensing process will bypass safety checks and lead to disaster. “Producing these highly structured licensing documents is not this box-ticking exercise as implied by these generative AI proposals that we're seeing,” Khlaaf told 404 Media. “The whole point of the licensing process is to reason and understand the safety of the plant and to also use that process to explore the trade-offs between the different approaches, the architectures, the safety designs, and to communicate to a regulator why that plant is safe. So when you use AI, it's not going to support these objectives, because it is not a set of documents or agreements, which I think, you know, is kind of the myth that is now being put forward by these proposals.”

Sofia Guerra, Khlaaf’s co-author, agreed. Guerra is a career nuclear safety expert who has advised the U.S. Nuclear Regulatory Commission (NRC) and works with the International Atomic Energy Agency (IAEA) on the safe deployment of AI in nuclear applications. “This is really missing the point of licensing,” Guerra said of the push to use AI. “The licensing process is not perfect. It takes a long time and there’s a lot of iterations. Not everything is perfectly useful and targeted… but I think the process of doing that, in a way, is really the objective.”

Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

Law is another profession where people have attempted to use AI to streamline the writing of complicated, involved technical documents. It hasn’t gone well. Lawyers who’ve used AI to write legal briefs have been caught, over and over again, in court. AI-constructed legal arguments cite precedents that do not exist, hallucinate cases, and generally foul up legal proceedings.

Might something similar happen if AI was used in nuclear licensing? “It could be something as simple as software and hardware version control,” Khlaaf said. “Typically in nuclear equipment, the supply chain is incredibly rigorous. Every component, every part, even when it was manufactured is accounted for. Large language models make these really minute mistakes that are hard to track. If you are off in the software version by a letter or a number, that can lead to a misunderstanding of which software version you have, what it entails, the expectation of the behavior of both the software and the hardware and from there, it can cascade into a much larger accident.”

Khlaaf pointed to Three Mile Island as an example of an entirely human-made accident that AI may replicate. The accident was a partial nuclear meltdown of a Pennsylvania reactor in 1979. “What happened is that you had some equipment failure and design flaws, and the operators misunderstood what those were due to a combination of a lack of training…that they did not have the correct indicators in their operating room,” Khlaaf said. “So it was an accident that was caused by a number of relatively minor equipment failures that cascaded. So you can imagine, if something this minor cascades quite easily, and you use a large language model and have a very small mistake in your design.”

In addition to the safety concerns, Khlaaf and Guerra told 404 Media that using sensitive nuclear data to train AI models increases the risk of nuclear proliferation. They pointed out that Microsoft is asking not only for historical NRC data but for real-time and project-specific data. “This is a signal that AI providers are asking for nuclear secrets,” Khlaaf said. “To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain.”

Tech companies maintain cloud servers that comply with federal regulations around secrecy and are sold to the US government. Anthropic and the National Nuclear Security Administration traded information across an Amazon Top Secret cloud server during a recent collaboration, and it’s likely that Microsoft and others would do something similar. Microsoft’s presentation on nuclear licensing references its own Azure Government cloud servers and notes that it’s compliant with Department of Energy regulations. 404 Media reached out to both Westinghouse Nuclear and Microsoft for this story. Microsoft declined to comment and Westinghouse did not respond.

“Where is this data going to end up and who is going to have the knowledge?” Guerra told 404 Media.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Nuclear is a dual-use technology. You can use the knowledge of nuclear reactors to build a power plant or you can use it to build a nuclear weapon. The line between nukes for peace and nukes for war is porous. “The knowledge is analogous,” Khlaaf said. “This is why we have very strict export controls, not just for the transfer of nuclear material but nuclear data.”

Proliferation concerns around nuclear energy are real. Fear that a nuclear energy program would become a nuclear weapons program was the justification the Trump administration used to bomb Iran earlier this year. And as part of the rush to produce more nuclear reactors and create infrastructure for AI, the White House has said it will begin selling old weapons-grade plutonium to the private sector for use in nuclear reactors.

The Trump administration has done a lot to make it easier for companies to build new nuclear reactors and use AI for licensing. The AI Now report pointed to a May 23, 2025 executive order that seeks to overhaul the NRC. The EO called for the NRC to reform its culture and structure and to consult with the Pentagon and the Department of Energy as it revises its standards. The goal of the EO is to speed up the construction of reactors and get through the licensing process faster.

A different May 23 executive order made it clear why the White House wants to overhaul the NRC. “Advanced computing infrastructure for artificial intelligence (AI) capabilities and other mission capability resources at military and national security installations and national laboratories demands reliable, high-density power sources that cannot be disrupted by external threats or grid failures,” it said.

At the same time, the Department of Government Efficiency (DOGE) has gutted the NRC. In September, members of the NRC told Congress they were worried they’d be fired if they didn’t approve nuclear reactor designs favored by the administration. “I think on any given day, I could be fired by the administration for reasons unknown,” Bradley Crowell, a commissioner at the NRC, said in Congressional testimony. He also warned that DOGE-driven staffing cuts would make it impossible to increase the construction of nuclear reactors while maintaining safety standards.

“The executive orders push the AI message. We’re not just seeing this idea of the rollback of nuclear regulation because we’re suddenly very excited about nuclear energy. We’re seeing it being done in service of AI,” Khlaaf said. “When you're looking at this rolling back of nuclear regulation and also this monopolization of nuclear energy to explicitly power AI, this raises a lot of serious concerns about whether the risk associated with nuclear facilities, in combination with the sort of these initiatives, can be justified if they're not to the benefit of civil energy consumption.”

Matthew Wald, an independent nuclear energy analyst and former New York Times science journalist, is more bullish on the use of AI in the nuclear energy field. Like Khlaaf, he referenced the accident at Three Mile Island. “The tragedy of Three Mile Island was there was a badly designed control room, badly trained operators, and there was a control room indication that was very easy to misunderstand, and they misunderstood it, and it turned out that the same event had begun at another reactor. It was almost identical in Ohio, but that information was never shared, and the guys in Pennsylvania didn't know about it, so they wrecked a reactor,” Wald told 404 Media.

According to Wald, using AI to consolidate government databases full of nuclear regulatory information could have prevented that. “If you've got AI that can take data from one plant or from a set of plants, and it can arrange and organize that data in a way that's helpful to other plants, that's good news,” he said. “It could be good for safety. It could also just be good for efficiency. And certainly in licensing, it would be more efficient for both the licensee and the regulator if they had a clearer idea of precedent, of relevant other data.”

He also said that the nuclear industry is full of safety-minded engineers who triple-check everything. “One of the virtues of people in this business is they are challenging and inquisitive and they want to check things. Whether or not they use computers as a tool, they’re still challenging and inquisitive and want to check things,” he said. “And I think anybody who uses AI unquestioningly is asking for trouble, and I think the industry knows that…AI is helpful, but let’s not get messianic about it.”

But Khlaaf and Guerra are worried that framing nuclear power as a national security concern and embracing AI to speed up construction will set back the adoption of nuclear power. If nuclear isn’t safe, it’s not worth doing. “People seem to have lost sight of why nuclear regulation and safety thresholds exist to begin with. And the reason why nuclear risks, or civilian nuclear risk, were ever justified, was due to the capacity for nuclear power to provide flexible civilian energy demands at low-cost emissions in line with climate targets,” Khlaaf said.

“So when you move away from that…and you pull in the AI arms race into this cost benefit justification for risk proportionality, it leads government to sort of over index on these unproven benefits of AI as a reason to have nuclear risk, which ultimately undermines the risks of ionizing radiation to the general population, and also the increased risk of nuclear proliferation, which happens if you were to use AI like large language models in the licensing process.”


Newly released documents provide more details about ICE's plan to use bounty hunters and private investigators to find the location of undocumented immigrants.


ICE Plans to Spend $180 Million on Bounty Hunters to Stalk Immigrants


Immigration and Customs Enforcement (ICE) is allocating as much as $180 million to pay bounty hunters and private investigators who verify the address and location of undocumented people ICE wishes to detain, including with physical surveillance, according to procurement records reviewed by 404 Media.

The documents provide more details about ICE’s plan to enlist the private sector to find deportation targets. In October The Intercept reported on ICE’s intention to use bounty hunters or skip tracers—an industry that often works on insurance fraud or tries to find people who skipped bail. The new documents now put a clear dollar amount on the scheme to essentially use private investigators to find the locations of undocumented immigrants.

💡
Do you know anything else about this plan? Are you a private investigator or skip tracer who plans to do this work? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.



A fight against a massive AI data center; how people are 3D-printing whistles to fight ICE; and AI's war on knowledge.



Podcast: Inside a Small Town's Fight Against a $1.2 Billion AI Datacenter


We start with Matthew Gault’s dive into a battle between a small town and the construction of a massive datacenter for America’s nuclear weapon scientists. After the break, Joseph explains why people are 3D-printing whistles in Chicago. In the subscribers-only section, Jason zooms out and tells us what librarians are seeing with AI and tech, and how that is impacting their work and knowledge more broadly.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add the subscribers' feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.
6:03 - ⁠Our New FOIA Forum! 11/19, 1PM ET⁠

7:50 - ⁠A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists⁠

12:27 - ⁠'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community⁠

21:09 - ⁠'House of Dynamite' Is About the Zoom Call that Ends the World⁠

30:35 - ⁠The Latest Defense Against ICE: 3D-Printed Whistles⁠

SUBSCRIBER'S STORY: ⁠AI Is Supercharging the War on Libraries, Education, and Human Knowledge⁠